Dataset columns: url (string, 15–1.13k chars), text (string, 100–1.04M chars), metadata (string, 1.06k–1.1k chars)
https://www.lessonplanet.com/teachers/fractions-to-twelfths
# Fractions to Twelfths For this fractions worksheet, 2nd graders study and analyze 4 bars with lines and determine how many equal parts are in each shape.
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9927396178245544, "perplexity": 3623.471695738718}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-39/segments/1505818696653.69/warc/CC-MAIN-20170926160416-20170926180416-00128.warc.gz"}
https://direct.mit.edu/rest/article-abstract/94/1/88/57990/An-Alternative-Asymptotic-Analysis-of-Residual
This paper presents an alternative method to derive the limiting distribution of residual-based statistics. Our method does not impose an explicit assumption of (asymptotic) smoothness of the statistic of interest with respect to the model's parameters and thus is especially useful in cases where such smoothness is difficult to establish. Instead, we use a locally uniform convergence in distribution condition, which is automatically satisfied by residual-based specification test statistics. To illustrate, we derive the limiting distribution of a new functional form specification test for discrete choice models, as well as a runs-based test for conditional symmetry in dynamic volatility models.
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8039971590042114, "perplexity": 276.90940060546035}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-49/segments/1669446710918.58/warc/CC-MAIN-20221203011523-20221203041523-00139.warc.gz"}
https://www.gradesaver.com/textbooks/math/precalculus/precalculus-6th-edition-blitzer/chapter-2-section-2-4-dividing-polynomials-remainder-and-factor-theorems-exercise-set-page-363/25
## Precalculus (6th Edition) Blitzer quotient $x^3-10x^2+51x-260$ and remainder $1300$, giving the fractional term $\frac{1300}{x+5}$ Step 1. The coefficients of the dividend $x^4-5x^3+x^2-5x$ can be identified as $\{1,-5,1,-5,0\}$ and the divisor is $x+5$; use synthetic division as shown in the figure to get the quotient and the remainder. Step 2. We can identify the result as $\frac{x^4-5x^3+x^2-5x}{x+5}=x^3-10x^2+51x-260+\frac{1300}{x+5}$, with quotient $x^3-10x^2+51x-260$ and remainder $1300$ (which appears in the result as the fractional term $\frac{1300}{x+5}$).
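The bring-down-multiply-add bookkeeping of Step 1 can be sketched in a few lines of Python (the function name and structure are our own, just to illustrate the pattern):

```python
def synthetic_division(coeffs, r):
    """Divide a polynomial, given as coefficients from highest degree down,
    by (x - r) using synthetic division. Returns (quotient coeffs, remainder)."""
    row = [coeffs[0]]                 # bring down the leading coefficient
    for c in coeffs[1:]:
        row.append(c + r * row[-1])   # multiply by r, add the next coefficient
    return row[:-1], row[-1]

# x^4 - 5x^3 + x^2 - 5x + 0 divided by (x + 5), i.e. r = -5
quotient, remainder = synthetic_division([1, -5, 1, -5, 0], -5)
print(quotient, remainder)  # [1, -10, 51, -260] 1300
```

The last entry of the working row is the remainder; the entries before it are the quotient coefficients, one degree lower than the dividend.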
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8368386030197144, "perplexity": 167.8533712257995}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-25/segments/1623488286726.71/warc/CC-MAIN-20210621151134-20210621181134-00117.warc.gz"}
https://www.toktol.com/notes/section/827/physics/electromagnetism/magnetic-flux-density
# Magnetic flux density The magnetic flux density is a physical quantity that indicates the strength of the magnetic field at a particular point in space. The magnitude of the magnetic force on a moving charge depends on the magnetic flux density at that point. The magnetic flux density $B$ is formally defined as the force per unit charge acting on a particle moving with unit velocity in a direction perpendicular to the magnetic field. $$B=\frac{F_{\text{magnetic}}}{qv_{\text{perpendicular}}}$$ $q=$ charge of the particle moving with speed $v_{\text{perpendicular}}$ perpendicular to the magnetic field $\vec{B}$. It is equivalent to the force per unit length acting on a straight conductor carrying unit current placed perpendicular to the direction of the magnetic field (note that a current consists of moving charges and is hence subject to magnetic forces). ## Magnetic-electric force comparison and units of magnetic flux density Both magnetic and electric forces depend on the respective field strengths at a point. However, the magnetic force is unique in that it acts in a direction perpendicular to both the motion of the charges and the magnetic field. Magnetic forces also act only on moving charges. The SI unit of magnetic flux density is the tesla ($\text{T}$). One tesla ($\text{T}$) is equivalent to one newton per ampere per metre ($\text{N A}^{-1}\text{m}^{-1}$).
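As a quick numeric illustration of the defining relation $B = F_{\text{magnetic}}/(qv_{\text{perpendicular}})$ (the numbers below are made up for the example, not taken from the text):

```python
def flux_density(force, charge, v_perp):
    """Magnetic flux density B in tesla from B = F / (q * v_perp):
    force in newtons, charge in coulombs, perpendicular speed in m/s."""
    return force / (charge * v_perp)

# e.g. a 3.2e-15 N force on an electron-sized charge moving at 2.0e5 m/s
B = flux_density(3.2e-15, 1.6e-19, 2.0e5)
print(B)  # ≈ 0.1 T
```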
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.978792667388916, "perplexity": 215.3752212186048}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-34/segments/1502886103891.56/warc/CC-MAIN-20170817170613-20170817190613-00497.warc.gz"}
https://inquiryintoinquiry.com/2014/05/28/peirces-1870-logic-of-relatives-%E2%80%A2-comment-11-18/
## Peirce’s 1870 “Logic Of Relatives” • Comment 11.18 An order-preserving map is a special case of a structure-preserving map, and the idea of preserving structure, as used in mathematics, means preserving some but not necessarily all of the structure of the source domain in the transition to the target domain. In that vein, we may speak of structure preservation in measure, the suggestion being that a property able to be qualified in manner is potentially able to be quantified in degree, admitting answers to questions like, “How structure-preserving is it?” Let’s see how this applies to the “number of” function $v : S \to \mathbb{R}.$ Let “$-\!\!\!<$” denote the implication relation on logical terms, let “$\le$” denote the less than or equal to relation on real numbers, and let $x, y$ be any pair of absolute terms in the syntactic domain $S.$ Then we observe the following relationships: $\begin{array}{lll} x ~-\!\!\!< y & \Rightarrow & v(x) \le v(y) \end{array}$ Equivalently: $\begin{array}{lll} x ~-\!\!\!< y & \Rightarrow & [x] \le [y] \end{array}$ Nowhere near the number of logical distinctions that exist on the left hand side of the implication arrows can be preserved as one passes to the linear ordering of real numbers on the right hand side of the implication arrows, but that is not required in order to call the map $v : S \to \mathbb{R}$ order-preserving, or what is known as an order morphism.
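A toy analogue of this in code (our own illustration, not Peirce's): set inclusion plays the role of implication, and cardinality plays the role of the "number of" map. The order is preserved, even though many distinct inclusions collapse onto the same integer comparison:

```python
from itertools import combinations

universe = {1, 2, 3, 4}
subsets = [frozenset(c) for n in range(len(universe) + 1)
           for c in combinations(universe, n)]

# a <= b tests inclusion (the "implication"); len is the "number of" map.
# Inclusion implies the cardinality comparison, so len is an order morphism,
# even though len forgets most of the distinctions between subsets.
preserved = all(len(a) <= len(b)
                for a in subsets for b in subsets if a <= b)
print(preserved, len(subsets))  # True 16
```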
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 8, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9766066670417786, "perplexity": 534.7770720118724}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-09/segments/1550247494424.70/warc/CC-MAIN-20190220024254-20190220050254-00230.warc.gz"}
http://mathhelpforum.com/advanced-statistics/185682-simple-random-sampling-e-x-print.html
# simple random sampling- E(X) • Aug 5th 2011, 09:20 PM canger simple random sampling- E(X) What does the terminology E(Xi) = u (i=1,...,n) mean? Where u is mu (the mean of all the values for x.) I haven't seen this terminology before...does it imply u times all the i's? • Aug 5th 2011, 10:11 PM CaptainBlack Re: simple random sampling- E(X) Quote: Originally Posted by canger What does the terminology E(Xi) = u (i=1,...,n) mean? Where u is mu (the mean of all the values for x.) I haven't seen this terminology before...does it imply u times all the i's? We will need the rest of the context to be sure, but assuming that $X_i,\ i=1,\ldots,n$ denotes a random sample from some population with mean $\mu$, then the expectation of each of the $X_i$'s is $\mu$. The "(i=1,...,n)" just says the statement holds for every index $i$; nothing is being multiplied. CB
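CaptainBlack's reading can be illustrated with a small simulation (toy numbers of our own): averaging the $i$-th observation over many independent samples gives a value near $\mu$ for every $i$:

```python
import random

random.seed(0)
mu, sigma = 5.0, 2.0
n, reps = 10, 20000

# accumulate the i-th observation across many independent samples of size n
sums = [0.0] * n
for _ in range(reps):
    for i in range(n):
        sums[i] += random.gauss(mu, sigma)

means = [s / reps for s in sums]
print(all(abs(m - mu) < 0.1 for m in means))  # each E(X_i) estimate is near 5.0
```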
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 4, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9963445663452148, "perplexity": 3156.5130945130227}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-43/segments/1508187822992.27/warc/CC-MAIN-20171018142658-20171018162658-00487.warc.gz"}
https://iitutor.com/kinematics-using-integration/
Kinematics using Integration Distances from Velocity Graphs Suppose a car travels at a constant positive velocity $80 \text{ km h}^{-1}$ for $2$ hours. \begin{align} \displaystyle \text{distance travelled} &= \text{speed} \times \text{time} \\ &= 80 \text{ km h}^{-1} \times 2 \text{ h} \\ &= 160 \text{ km} \end{align} When we sketch the graph of velocity against time, the graph is a horizontal line, and we can see that the distance travelled is the area shaded in green. Therefore the distance travelled is found by the definite integral: \begin{align} \displaystyle \int_{0}^{2}{80}dt &= \big[80t\big]_{0}^{2} \\ &= 80 \times 2 - 80 \times 0 \\ &= 160 \text{ km} \end{align} Now suppose the speed decreases at a constant rate so that the car, initially travelling at $80 \text{ km h}^{-1}$, stops in $2$ hours. In this case, the average speed is $40 \text{ km h}^{-1}$. \begin{align} \displaystyle \text{distance travelled} &= \text{speed} \times \text{time} \\ &= 40 \text{ km h}^{-1} \times 2 \text{ h} \\ &= 80 \text{ km} \end{align} \begin{align} \displaystyle \text{area of the triangle} &= \dfrac{1}{2} \times \text{base} \times \text{height} \\ &= \dfrac{1}{2} \times 2 \times 80 \\ &= 80 \text{ km} \end{align} Again, the area shaded in green is the distance travelled, and we can find it using the definite integral: \displaystyle \begin{align} \int_{0}^{2}{(80-40t)}dt &= \big[80t-20t^2\big]_{0}^{2} \\ &= (80 \times 2 - 20 \times 2^2) - (80 \times 0 - 20 \times 0^2) \\ &= 80 \text{ km} \end{align} $$\displaystyle \therefore \text{distance travelled} = \int_{t_1}^{t_2}{v(t)}dt$$ Example 1 The velocity-time graph for a car journey is shown in the graph. Find the total distance travelled by the car, where $t$ is in hours and $v$ is in $\text{ km h}^{-1}$.
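The two definite integrals above can be checked numerically, e.g. with a simple trapezoidal rule (a sketch of our own; it is exact here because both integrands are linear):

```python
def integrate(f, a, b, steps=1000):
    """Trapezoidal approximation of the definite integral of f on [a, b]."""
    h = (b - a) / steps
    total = 0.5 * (f(a) + f(b)) + sum(f(a + i * h) for i in range(1, steps))
    return total * h

print(integrate(lambda t: 80, 0, 2))           # ≈ 160: constant 80 km/h for 2 h
print(integrate(lambda t: 80 - 40 * t, 0, 2))  # ≈ 80: uniformly decelerating car
```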
\begin{align} \displaystyle &\text{Total distance travelled} \\ &= \text{total area under the curve} \\ &= A+B+C+D+E \\ &= \dfrac{1}{2} \times 1 \times 80 + 2 \times 80 + \dfrac{80+30}{2} \times 1 + 1 \times 30 + \dfrac{1}{2} \times 30 \times 1 \\ &= 40 + 160 + 55 + 30 + 15 \\ &= 300 \text{ km} \end{align} Displacement and Velocity Functions For some displacement $s(t)$, the velocity function is $v(t)=s'(t)$. So, given a velocity function, the displacement function is determined by the integral: $$s(t) = \int{v(t)}dt$$ The constant of integration determines where on the line the object begins, called the initial position. Using the displacement function we can determine the change in displacement in a time interval $t_1 \le t \le t_2$. \displaystyle \begin{align} \text{Displacement} &= s(t_2) - s(t_1) \\ &= \int_{t_1}^{t_2}{v(t)}dt \end{align} To find the total distance travelled given a velocity function $v(t)=s'(t)$ on $t_1 \le t \le t_2$: • Draw an accurate sign diagram for $v(t)$ to determine any changes of direction • Find $s(t)$ by integration, including a constant of integration • Find $s(t)$ at each time the direction changes • Draw a motion diagram • Calculate the total distance travelled from the motion diagram Example 2 Find the distance travelled by a car with a velocity $v(t) = 2t+5$, where $t$ is in hours and $v$ is in $\text{km h}^{-1}$, for the first $5$ hours. \begin{align} \displaystyle \text{distance travelled} &= \int_{0}^{5}{(2t+5)}dt \\ &= \big[t^2+5t\big]_{0}^{5} \\ &= (5^2 + 5 \times 5) - (0^2 + 5 \times 0) \\ &= 50 \text{ km} \end{align} Acceleration and Velocity Functions The acceleration function is the derivative of the velocity function, so $a(t) = v'(t)$. Given an acceleration function, we can determine the velocity function by integration: $$v(t) = \int{a(t)}dt$$ Example 3 A particle moves in a straight line with velocity $v(t)=6t^2-18t+12 \text{ m s}^{-1}$.
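Example 2 can be verified by evaluating the antiderivative $s(t) = t^2 + 5t$ at the endpoints:

```python
def s(t):
    """Antiderivative of v(t) = 2t + 5 (constant of integration omitted)."""
    return t**2 + 5 * t

print(s(5) - s(0))  # 50 km travelled in the first 5 hours
```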
Find the total distance that the particle travels in the first $3$ seconds of motion. \begin{align} \displaystyle v(t) &= 0 \\ 6(t-1)(t-2) &= 0 \\ t &= 1 \text{ and } t=2 \\ x(t) &= \int{(6t^2-18t+12)}dt \\ &= 2t^3 -9t^2 + 12t +c \\ x(0) &= c \\ x(1) &= 5+c \\ x(2) &= 4+c \\ x(3) &= 9+c \\ \end{align} Since the sign of $v(t)$ changes, the particle reverses direction at $t=1$ and $t=2$ seconds. Sign Diagram: \begin{array}{|c|c|c|} \hline \text{time} & \text{movement} & \text{distance travelled} \\ \hline t=0 & \text{moving in the positive direction} & 0 \\ \hline 0 \lt t \lt 1 & \text{moving from } x=c \text{ to } x=5+c & 5 \\ \hline t=1 & \text{the particle changes its direction} & 0 \\ \hline 1 \lt t \lt 2 & \text{moving from } x=5+c \text{ to } x=4+c & 1 \\ \hline t=2 & \text{the particle changes its direction} & 0\\ \hline 2 \lt t \lt 3 & \text{moving from } x=4+c \text{ to } x=9+c & 5 \\ \hline t=3 & x=9+c & 0 \\ \hline \end{array} Motion Graph: \begin{align} \displaystyle \text{total distance travelled} &= 5+1+5 \\ &= 11 \text{ m} \end{align}
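The table in Example 3 can be reproduced by evaluating $x(t) = 2t^3 - 9t^2 + 12t + c$ at $t = 0, 1, 2, 3$ and summing the leg lengths (the constant $c$ cancels, so it is omitted):

```python
def x(t):
    """Position x(t) = 2t^3 - 9t^2 + 12t, constant of integration omitted."""
    return 2 * t**3 - 9 * t**2 + 12 * t

# legs between the direction changes at t = 1 and t = 2
legs = [abs(x(b) - x(a)) for a, b in [(0, 1), (1, 2), (2, 3)]]
print(legs, sum(legs))  # [5, 1, 5] 11 metres in total
```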
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 2, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 1.0000100135803223, "perplexity": 1316.609067852668}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-39/segments/1631780055808.78/warc/CC-MAIN-20210917212307-20210918002307-00637.warc.gz"}
https://pascalkieslich.github.io/mousetrap/reference/mt_distmat.html
Computes the point- or vector-wise dissimilarity between each pair of trajectories. mt_distmat( data, use = "sp_trajectories", save_as = "distmat", dimensions = c("xpos", "ypos"), weights = rep(1, length(dimensions)), pointwise = TRUE, minkowski_p = 2, na_rm = FALSE ) ## Arguments data: a mousetrap data object created using one of the mt_import functions (see mt_example for details). Alternatively, a trajectory array can be provided directly (in this case use will be ignored). use: a character string specifying which trajectory data should be used. save_as: a character string specifying where the resulting data should be stored. dimensions: a character vector specifying which trajectory variables should be used. Can be of length 2 or 3 for two-dimensional or three-dimensional trajectories, respectively. weights: numeric vector specifying the relative importance of the variables specified in dimensions. Defaults to a vector of 1s, implying equal importance. Technically, each variable is rescaled so that the standard deviation matches the corresponding value in weights. To use the original variables, set weights = NULL. pointwise: boolean specifying the way dissimilarity between the trajectories is measured (see Details). If TRUE (the default), mt_distmat measures the average dissimilarity and then sums the results. If FALSE, mt_distmat measures dissimilarity once (by treating the various points as independent dimensions). minkowski_p: an integer specifying the distance metric. minkowski_p = 1 computes the city-block distance, minkowski_p = 2 (the default) computes the Euclidean distance, minkowski_p = 3 the cubic distance, etc. na_rm: logical specifying whether trajectory points containing NAs should be removed. Removal is done column-wise: if any trajectory has a missing value at, e.g., the 10th recorded position, the 10th position is removed for all trajectories. This is necessary to compute distances between trajectories.
## Value A mousetrap data object (see mt_example) with an additional object added (by default called distmat) containing the distance matrix. If a trajectory array was provided directly as data, only the distance matrix will be returned. ## Details mt_distmat computes point- or vector-wise dissimilarities between pairs of trajectories. Point-wise dissimilarity refers to computing the distance metric defined by minkowski_p for every point of the trajectory and then summing the results. That is, if minkowski_p = 2 the point-wise dissimilarity between two trajectories, each defined by a set of x and y coordinates, is calculated as sum(sqrt((x_i-x_j)^2 + (y_i-y_j)^2)). Vector-wise dissimilarity, on the other hand refers to computing the distance metric once for the entire trajectory. That is, vector-wise dissimilarity is computed as sqrt(sum((x_i-x_j)^2 + (y_i-y_j)^2)). ## Examples # Spatialize trajectories mt_example <- mt_spatialize(mt_example) # Compute distance matrix mt_example <- mt_distmat(mt_example, use="sp_trajectories")
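The two formulas in the Details section can be sketched directly (the toy trajectories below are our own, not from the package):

```python
import math

# two parallel 2-D trajectories, one unit apart at every recorded point
traj_i = [(0.0, 0.0), (1.0, 0.0), (2.0, 0.0)]
traj_j = [(0.0, 1.0), (1.0, 1.0), (2.0, 1.0)]

# point-wise (pointwise = TRUE): Euclidean distance per point, then summed
pointwise = sum(math.dist(p, q) for p, q in zip(traj_i, traj_j))

# vector-wise (pointwise = FALSE): one distance over all coordinates at once
vectorwise = math.sqrt(sum((a - b) ** 2
                           for p, q in zip(traj_i, traj_j)
                           for a, b in zip(p, q)))

print(pointwise, vectorwise)  # 3.0 and sqrt(3) ≈ 1.732
```

Note how the point-wise sum grows with trajectory length, while the vector-wise value pools all coordinate differences under a single square root.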
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8313063383102417, "perplexity": 2841.43598215188}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-10/segments/1614178368431.60/warc/CC-MAIN-20210304021339-20210304051339-00116.warc.gz"}
http://www.ugrad.math.ubc.ca/coursedoc/math101/notes/applications/velocity.html
## Acceleration, Velocity, and Position The connections between position, velocity, and acceleration formed one of the important themes of differential calculus. We will find that these relationships also form an important application of the definite integral, especially in cases in which one of the quantities varies with time. To discuss these concepts, we will use the notation $s(t)$ for the position, $v(t)$ for the velocity, and $a(t)$ for the acceleration at time $t$. Relating velocity to acceleration Remembering that the acceleration is defined by the derivative $a(t) = \dfrac{dv}{dt}$, we can apply the Fundamental Theorem of Calculus to write this relationship in the form $\int_{t_1}^{t_2} a(t)\,dt = v(t_2) - v(t_1)$. If we call the initial time $0$ and the final time $T$, then this integral has the form $\int_{0}^{T} a(t)\,dt = v(T) - v(0)$. Relating Position to Velocity The velocity is defined by the derivative $v(t) = \dfrac{ds}{dt}$. By the Fundamental Theorem of Calculus, $\int_{t_1}^{t_2} v(t)\,dt = s(t_2) - s(t_1)$. We call this difference in the final and the starting positions, $s(t_2) - s(t_1)$, the displacement. As before, we can choose to start the clock so that the process takes place between $t=0$ and $t=T$, so that the integral becomes $\int_{0}^{T} v(t)\,dt = s(T) - s(0)$. Example 1: The case of constant acceleration When the acceleration is constant, $a(t) = a$, we can use the general results above to compute an expression for the velocity and the position at a given time. Here is how we would do this. First, to find the velocity at time $t$, we integrate the acceleration: $v(t) - v(0) = \int_{0}^{t} a\,d\tau = at$. This implies that $v(t) = v(0) + at$. Now, to find the position, we integrate this velocity, as follows: $s(t) - s(0) = \int_{0}^{t} \left(v(0) + a\tau\right)d\tau$. Rearranging algebraically leads to the result $s(t) = s(0) + v(0)\,t + \dfrac{1}{2}at^2$. A simple application (constant acceleration) We can apply the general result above to the following simple problem: The speed of a car increases from rest up to 100 km/hr in 30 seconds. Find the distance that the car has travelled during this time. To solve this problem, we first observe that the initial velocity is $v(0) = 0$, and the final velocity is $v(30) = 100$ km/hr. Also note that $T = 30$ seconds. Care must be taken to use consistent units for time and distance here.
Since the acceleration is constant, we observe that, for this case, $a = \dfrac{v(30) - v(0)}{30} = \dfrac{100 \times 1000/3600}{30} \approx 0.92 \text{ m s}^{-2}$ (where the fraction 1000/3600 has been used to convert the kilometres to metres and the hours to seconds). From the previous results we can now express the velocity at some time $t$ as $v(t) = v(0) + at = 0.92t$. We now find the displacement $s(30) - s(0) = \int_{0}^{30} 0.92t\,dt = \dfrac{0.92 \times 30^2}{2} \approx 414$. Thus the car has travelled 414 metres during the first 30 seconds when it is accelerating. • (1) Suppose that at $t=0$ the car is travelling at some velocity $v_0$ and that the brakes are applied so that it has stopped after 30 seconds. Find the deceleration and the distance that the car has travelled during that time. • (2) Suppose that you are driving your car at 100 km/hr when a pedestrian dashes across the street. It takes you 0.5 second to react (during which the car continues to move forward at the same speed). You then slam on the brakes and decelerate at a constant rate. How far will the car move before coming to a complete stop? Example 2: cases in which the acceleration is not constant When a skydiver jumps from an airplane, once the parachute is open, the speed of falling "levels off" to some constant, safe speed, called the terminal velocity. We have seen last term that the velocity of the skydiver can be described by the differential equation $\dfrac{dv}{dt} = g - kv$, where $g$ is the acceleration due to gravity, and $kv$ is the slowing down due to the friction between the parachute and the air. You may or may not remember the details, but the conclusion we had found by studying this differential equation was that, starting from $v(0)=0$, $v(t) = \dfrac{g}{k}\left(1 - e^{-kt}\right)$, so that the velocity approaches the terminal velocity $g/k$.
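The constant-acceleration example can be checked in a few lines (our own sketch; the notes round $a$ to $0.92 \text{ m s}^{-2}$, which is where the quoted 414 m comes from):

```python
v_final = 100 * 1000 / 3600     # 100 km/h converted to m/s, about 27.78
T = 30                          # seconds

a = v_final / T                 # about 0.926 m/s^2 (the notes round to 0.92)
distance = 0.5 * a * T**2       # s(T) - s(0) = (1/2) a T^2, since v(0) = 0

print(round(a, 3), round(distance, 1))  # 0.926 416.7  (≈ 414 m with a = 0.92)
```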
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9455966353416443, "perplexity": 323.60592192975616}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-15/segments/1398223206770.7/warc/CC-MAIN-20140423032006-00475-ip-10-147-4-33.ec2.internal.warc.gz"}
https://proceedings.neurips.cc/paper/2019/hash/39555391eb0624a439c5131b1bb8a2e0-Abstract.html
#### Authors Joshua Hanson, Maxim Raginsky #### Abstract There has been a recent shift in sequence-to-sequence modeling from recurrent network architectures to convolutional network architectures due to computational advantages in training and operation while still achieving competitive performance. For systems having limited long-term temporal dependencies, the approximation capability of recurrent networks is essentially equivalent to that of temporal convolutional nets (TCNs). We prove that TCNs can approximate a large class of input-output maps having approximately finite memory to arbitrary error tolerance. Furthermore, we derive quantitative approximation rates for deep ReLU TCNs in terms of the width and depth of the network and modulus of continuity of the original input-output map, and apply these results to input-output maps of systems that admit finite-dimensional state-space realizations (i.e., recurrent models).
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9666164517402649, "perplexity": 1342.4243395314804}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-50/segments/1606141180636.17/warc/CC-MAIN-20201125012933-20201125042933-00481.warc.gz"}
http://wavescotedazur.org/contributions/talk/nonlinear_waves_and_turbulence_in_space_plasma/Luca_Franci/
## Interpreting spacecraft observations of plasma turbulence with kinetic numerical simulations in the low electron beta regime Luca Franci [email protected] Queen Mary University of London, 327 Mile End Road, E1 4NS, London, United Kingdom We present numerical results from high-resolution hybrid and fully kinetic simulations of plasma turbulence, following the development of the energy cascade from large magnetohydrodynamic scales down to electron characteristic scales. We explore a regime of plasma turbulence where the electron plasma beta is low, typical of environments where the ions are much hotter than the electrons, e.g., the Earth’s magnetosheath and the solar corona, as well as regions downstream of collisionless shocks. In this range of scales, recent theoretical models predict a different behaviour in the nonlinear interaction of dispersive wave modes with respect to what is typically observed in the solar wind, i.e., the presence of so-called inertial kinetic Alfvén waves. We also extend our analysis to scales around and smaller than the electron gyroradius, where hints of a further steepening of the magnetic and electric field spectra have recently been observed by NASA’s Magnetospheric Multiscale (MMS) mission, although not yet supported by theoretical models. Our numerical simulations exhibit a remarkable quantitative agreement with recent observations by MMS in the magnetosheath, allowing us to investigate simultaneously the spectral break around ion scales and the two spectral breaks at electron scales, the magnetic compressibility, and the nature of fluctuations at kinetic scales.
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8849062323570251, "perplexity": 1321.6089099536775}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337803.86/warc/CC-MAIN-20221006092601-20221006122601-00530.warc.gz"}
https://math.stackexchange.com/questions/1141964/kernel-and-image-of-a-polynomial-linear-transformation/1141977
# Kernel and Image of a polynomial linear transformation

I have here a linear transformation $T : P_3(\mathbb{R})\rightarrow P_3(\mathbb{R})$ defined by:

$T(at^3 + bt^2 + ct + d) = (a-b)t^3 + (c-d)t$

I'm very, very new to this subject and I'm not doing well with polynomials. I need to find the $Kernel$ and the $Image$ of the transformation. Look what I've been thinking:

$Ker(T) = \{ p \in P_3 \ /\ T(p) = 0 \}$

$T(at^3 + bt^2 + ct + d) = (a-b)t^3 + (c-d)t = 0$

$(a-b) = 0 \ ;\ \ (c-d) = 0 \ ;\ \ a = b \ ; \ \ c = d$

$Ker(T) = \{ at^3 + at^2 + ct + c\ /\ a,c \in \mathbb{R} \}$

And what about the $Image$? I know that $Im(T) = \{ T(p)\ /\ p \in P_3 \}$, but how can I show it? And how can I test whether a polynomial such as $p(t) = t^3 + t^2 + t - 1$ is in $Im(T)$?

• Can you write $t^3 + t^2 + t - 1$ as $(a - b) t^3 + (c - d) t$? – user66081 Feb 10 '15 at 12:00
• When are two polynomials equal? – Tomás Feb 10 '15 at 12:06
• I can't. So it isn't in Im(T)? – João Paulo Feb 10 '15 at 12:07
• Right. So that should give you an idea what is in Im(T) – user66081 Feb 10 '15 at 12:08

The kernel is correct. Additionally, since the kernel depends on only two coefficients $a$ and $c$, it has dimension 2.

For the image: Take any polynomial $p(t)=At^3+Bt^2+Ct+E$. The question now is: how do $A,B,C,E$ have to look for there to exist some $a,b,c,d$ such that $T(at^3+bt^2+ct+d)=p(t)$? The question is equivalent to solving for $a, b, c, d$ in the equation:

$(a-b)t^3+0t^2+(c-d)t+0=At^3+Bt^2+Ct+E$.

We now have: $A=a-b$, $B=0$, $C=c-d$, $E=0$. We can take: $a=A$, $b=0$, $c=C$, $d=0$. Consequently, $p$ is in the image iff $B=0=E$. The image, then, is: \begin{align*} \mbox{Im}(T)=\{At^3+Ct\ |\ A,C\in\mathbb R\}. \end{align*}

• The Image also has dim = 2, so $dim(Ker) + dim(Im) = dim(P_3) = 2 + 2 = 4$... I think I got it... thanks!!
– João Paulo Feb 10 '15 at 12:12

We can set up the matrix of the linear transformation $T:P_3(\mathbb{R})\rightarrow P_3(\mathbb{R})$, then find its null space and column space, respectively. First, if we agree to represent the third-order polynomial $P_3=at^3 + bt^2 + ct + d$ by the column vector $\begin{pmatrix}a &b& c& d\end{pmatrix}^T$, then $$T=\begin{pmatrix} 1 & -1 & 0 &0\\ 0 & 0 & 0 & 0\\ 0 & 0 & 1 & -1\\ 0 & 0 & 0 & 0 \end{pmatrix}.$$ It is obvious that the pivots of $T$ are in the first and the third columns, so the kernel of $T$ (i.e., the null space) is $$\mathrm{ker}\, T=\mathrm{span}\{\begin{pmatrix} 1\\1\\0\\0 \end{pmatrix},\begin{pmatrix} 0\\0\\1\\1 \end{pmatrix}\},$$ and the image of $T$ is spanned by the pivot columns of $T$: $$\mathrm{im}\, T=\mathrm{span}\{\begin{pmatrix} 1\\0\\0\\0 \end{pmatrix},\begin{pmatrix} 0\\0\\1\\0 \end{pmatrix}\}.$$ The answers are identical to what has been given. This is just to show we can get the answers by a slightly different approach.
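As a numerical cross-check of both answers, the rank, kernel, and an image-membership test can be sketched with NumPy (the basis ordering {t^3, t^2, t, 1} matches the matrix representation above):

```python
import numpy as np

# Matrix of T in the basis {t^3, t^2, t, 1}: a*t^3 + b*t^2 + c*t + d -> (a, b, c, d).
T = np.array([
    [1, -1, 0,  0],   # coefficient of t^3 in T(p): a - b
    [0,  0, 0,  0],   # coefficient of t^2: 0
    [0,  0, 1, -1],   # coefficient of t:   c - d
    [0,  0, 0,  0],   # constant term:      0
], dtype=float)

# Kernel basis from the SVD: rows of Vh whose singular value is (numerically) zero.
U, s, Vh = np.linalg.svd(T)
rank = int(np.sum(s > 1e-12))
kernel_basis = Vh[rank:]                    # spans ker(T)
print("dim ker(T) =", T.shape[1] - rank)    # -> 2
print("dim im(T)  =", rank)                 # -> 2

# Membership test: p(t) = t^3 + t^2 + t - 1 corresponds to (1, 1, 1, -1).
p = np.array([1.0, 1.0, 1.0, -1.0])
x, residual, *_ = np.linalg.lstsq(T, p, rcond=None)
in_image = np.allclose(T @ x, p)
print("p in im(T)?", in_image)              # -> False: the t^2 and constant terms must vanish
```

This confirms dim ker(T) + dim im(T) = 4, and that p(t) = t^3 + t^2 + t - 1 is not in the image.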
http://physics.stackexchange.com/questions/20907/what-does-activation-energy-actually-do
# What does activation energy actually do?

Spontaneous (exothermic) chemical reactions often require a push from the addition of externally supplied energy. This energy is often called activation energy. What does activation energy actually do? What is the energy hill that the activation energy surmounts?
https://www.physicsforums.com/threads/how-to-solve-for-the-distance-traveled-given-force-mass-and-speed.958298/
# How to solve for the distance traveled given force, mass and speed

• #1

## Homework Statement

A car has a mass of 1500 kg. If the driver applies the brakes, the maximum amount of friction force that can be applied without skidding is 7000 N. If the car is traveling at 25 m/s, what is the shortest distance in which the car can stop safely?

## Homework Equations

I'm assuming it has something to do with F = ma, and using that answer to solve for distance using the kinematics equation: v_final^2 = v_initial + 2a(x - x_initial)

## The Attempt at a Solution

I divided 7000 N by 1500 kg to cancel out the kg, to get 4.67 m/s^2 (squared?). I assumed that was my acceleration and said my final velocity should be 0. So my equation worked out to be 0 = 25 + 2(4.67)(x - 0). Then I tried to solve for x and I got 2.67 m. Then I transposed 25 for acceleration and 4.67 for initial velocity and got 0.0467 m, which is way too small; the correct answer is 67 m (according to MasteringPhysics). Even after getting the solution and plugging that into the equation for x, the solution doesn't equal 0 like it should. I tried to solve using the position kinematics equation x = x_initial + v_initial*t^2 + 0.5a*t^2, but I couldn't make it work, since I wasn't given a time interval to work with.

• #2
stockzahn

Ahoi at PF! It's really not easy to read mathematical work in words, but symbols and signs can be used to make it more clear. So I try to translate what you've written at point 2):

I'm assuming it has something to do with F = ma and using that answer to solve for distance using the kinematics equation: v_final^2 = v_initial + 2a(x - x_initial)

##v_{final}^2 =v_{initial} +2as##

First of all, there is a small mistake in this formula - but I suppose it is a typo. Secondly, do you know where the formula comes from?

• #3
Merlin3189

car mass = 1500 kg. maximum friction force = 7000 N.
If the car is traveling at 25 m/s, what is the shortest distance in which the car can stop safely?

## Homework Equations

v_final^2 = v_initial + 2a(x - x_initial)
Not a correct formula: m^2/s^2 ≠ m/s + m^2/s^2

## The Attempt at a Solution

I divided 7000 N by 1500 kg to cancel out the kg, to get 4.67 m/s^2
So used F = ma transposed to a = F/m. Ok. To get 4.67 m/s^2. Ok.

I assumed that was ... my acceleration and said my final velocity should be 0.
Ok

So my equation worked out to be 0 = 25 + 2(4.67)(x - 0).
but the formula is wrong, so this gives the wrong answer

then I tried to solve for x and I got 2.67 m,
ditto

then I transposed 25 for acceleration and 4.67 for initial velocity
which is crazy! You don't just swap numbers around for no reason.

and got 0.0467 m, which is way too small; the correct answer is 67 m (according to MasteringPhysics).
I agree

Even after getting the solution and plugging that into the equation for x, the solution doesn't equal 0 like it should.
because that equation was wrong

I tried to solve using the position kinematics equation x = x_initial + v_initial*t^2 + 0.5a*t^2, but I couldn't make it work, since I wasn't given a time interval to work with.
You were after the right formula originally, but just copied it down wrongly.

• #4

I looked back at the kinematics formula and realized the initial velocity was also supposed to be squared (which I hadn't done originally), and that's why my answer didn't match up. I had transposed the numbers because I still didn't understand which calculation gave me the initial velocity and which gave me the acceleration, so I had hoped I had written them down wrong, and that what I had thought was my initial velocity was actually the acceleration and vice versa.

• #5
stockzahn

I looked back at the kinematics formula and realized the initial velocity was also supposed to be squared (which I hadn't done originally), and that's why my answer didn't match up.

That's good.
I had transposed the numbers because I still didn't understand which calculation gave me the initial velocity and which gave me the acceleration, so I had hoped I had written them down wrong, and that what I had thought was my initial velocity was actually the acceleration and vice versa.

There are some principles forming the basis for these calculations; one of them is energy conservation. In your case there are two relevant forms of energy: the kinetic energy of the car and the work done by the friction force. Since energy must be conserved, the two must be transformed into each other, so the difference in kinetic energy equals the work done by the friction force: $$m_{car}\frac{v_{final}^2-v_{init}^2}{2} = F_{friction}s_{stop}$$ The friction force decelerates the car; that means its mass is negatively accelerated: $$m_{car}\frac{v_{final}^2-v_{init}^2}{2} = m_{car}a_{deceleration}s_{stop}$$ Re-arranging yields $$v_{final}^2 = v_{init}^2+2a_{deceleration}s_{stop}$$ which is the formula you used. So it can be derived from the energy conservation principle, but what you need to use is its "original" form to solve the task. I hope that helped.
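The corrected calculation from this thread can be sketched in a few lines of Python (variable names are illustrative):

```python
# Corrected kinematics: v_f^2 = v_i^2 + 2*a*(x - x_i), with deceleration a = -F/m.
m = 1500.0   # kg, mass of the car
F = 7000.0   # N, maximum friction (braking) force
v_i = 25.0   # m/s, initial speed
v_f = 0.0    # m/s, final speed (stopped)

a = -F / m                        # deceleration, about -4.67 m/s^2
d = (v_f**2 - v_i**2) / (2 * a)   # stopping distance from the kinematic equation
print(f"a = {a:.2f} m/s^2")       # -> a = -4.67 m/s^2
print(f"d = {d:.1f} m")           # -> d = 67.0 m
```

With the initial velocity squared, the result matches the expected 67 m.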
https://www.dsprelated.com/freebooks/filters/Constant_Peak_Gain_Resonator.html
Constant Peak-Gain Resonator

It is surprisingly easy to normalize exactly the peak gain in a second-order resonator tuned by a single coefficient [94]. The filter structure that accomplishes this is the one we already considered in §B.6.1:

$$H(z) = g\,\frac{1 - z^{-2}}{1 - 2R\cos(\theta_c)\,z^{-1} + R^2 z^{-2}} \qquad \text{(B.14)}$$

That is, the two-pole resonator normalized by zeros at $z = \pm 1$ has the constant peak-gain property when it has resonant peaks in its response at all. Note, however, that the peak-gain frequency $\psi$ and the pole-resonance frequency $\theta_c$ (cf. §B.6.3) are generally two different things, as elaborated below. This structure has the added bonus that its difference equation requires only one more addition relative to the unnormalized two-pole resonator, and no new multiply. "Real-time audio plugins" based on the constant-peak-gain resonator are developed in Appendix K.

The peak gain is $2/(1-R^2)$, so setting $g = (1-R^2)/2$ normalizes the peak gain to one for all tunings. It can also be shown [94] that the peak gain coincides with the variance gain when the resonator is driven by white noise. That is, if the variance of the driving noise is $\sigma^2$, the variance of the noise at the resonator output is $\sigma^2$ times the variance gain. Therefore, scaling the resonator input so as to divide out the variance gain will normalize the resonator such that the output signal power equals the input signal power when the input signal is white noise.

Frequency response overlays for the constant-peak-gain resonator are shown in Fig.B.23, Fig.B.20, and Fig.B.21 for various pole radii $R$. While the peak frequency may be far from the resonance tuning in the more heavily damped examples, the peak gain is always normalized to unity.

The normalized radian frequency $\psi$ at which the peak gain occurs is related to the pole angle $\theta_c$ by [94]

$$\cos(\theta_c) = \frac{1+R^2}{2R}\,\cos(\psi) \qquad \text{(B.15)}$$

When the right-hand side of the above equation exceeds 1 in magnitude, there is no (real) solution for the pole frequency $\theta_c$. This happens, for example, when $R$ is less than 1 and $\psi$ is too close to 0 or $\pi$. Conversely, given any pole angle $\theta_c$, there always exists a solution for the peak frequency $\psi$, since $2R/(1+R^2) \le 1$ when $R \le 1$.

However, when $R$ is small, the peak frequency can be far from the pole resonance frequency, as shown in Fig.B.22. Thus, $R$ must be close to 1 to obtain a resonant peak near dc (a case commonly needed in audio work) or half the sampling rate (rarely needed in practice). When $R$ is much less than 1, the peak frequency cannot leave a small interval near one-fourth the sampling rate, as can be seen at the far left in Fig.B.22.

Figure B.22 predicts that, for small $R$, the lowest reachable peak-gain frequency is bounded well away from dc; Figure B.21 agrees with this prediction. As Figures B.23 through B.25 show, the peak gain remains constant even at very low and very high frequencies, to the extent they are reachable for a given $R$. The zeros at dc and half the sampling rate preclude the possibility of peaks at exactly those frequencies, but for $R$ near 1, we can get very close to having a peak at dc or half the sampling rate, as shown in Figures B.19 and B.20.

Next Section: Four-Pole Tunable Lowpass/Bandpass Filters Previous Section: Peak Gain Versus Resonance Gain
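The constant-peak-gain property is easy to check numerically. The sketch below assumes the standard two-pole resonator with zeros at z = ±1, pole radius R, pole angle theta, and normalization factor (1 - R^2)/2 (our reconstruction of the text's notation); `freq_response` is a hypothetical helper name:

```python
import numpy as np

def freq_response(R, theta, n=16384):
    """Magnitude response of the peak-gain-normalized resonator
    H(z) = g*(1 - z^-2) / (1 - 2R cos(theta) z^-1 + R^2 z^-2),  g = (1 - R^2)/2."""
    w = np.linspace(1e-4, np.pi - 1e-4, n)   # avoid the exact zeros at 0 and pi
    z = np.exp(1j * w)
    g = (1.0 - R**2) / 2.0
    H = g * (1 - z**-2) / (1 - 2*R*np.cos(theta)*z**-1 + R**2 * z**-2)
    return w, np.abs(H)

R = 0.95
for theta in (0.3, 1.0, np.pi / 2, 2.8):
    w, mag = freq_response(R, theta)
    k = int(np.argmax(mag))
    # The peak gain stays at 1 for every tuning, while the peak frequency w[k]
    # generally differs from the pole angle theta.
    print(f"theta = {theta:4.2f}: peak gain = {mag[k]:.4f} at w = {w[k]:.3f}")
```

Sweeping theta shows the peak gain pinned at unity even when the peak frequency is pulled away from the pole angle, as described in the text.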
https://www.studysmarter.us/textbooks/physics/physics-for-scientists-and-engineers-a-strategic-approach-with-modern-physics-4th/relativity/q-71-the-nuclear-reaction-that-powers-the-sun-is-the-fusion-/
Q. 71 Expert-verified

Found in: Page 1062

### Physics for Scientists and Engineers: A Strategic Approach with Modern Physics

Book edition: 4th. Author(s): Randall D. Knight. Pages: 1240. ISBN: 9780133942651

# The nuclear reaction that powers the sun is the fusion of four protons into a helium nucleus. The process involves several steps, but the net reaction is simply 4p → ⁴He + energy. The mass of a proton, to four significant figures, is 1.673 × 10⁻²⁷ kg, and the mass of a helium nucleus is known to be 6.644 × 10⁻²⁷ kg.

a. How much energy is released in each fusion?

b. What fraction of the initial rest mass energy is this energy?

a. The energy released is about 4.3 × 10⁻¹² J.

b. The fraction is about 0.007, i.e., roughly 0.7%.

See the step by step solution

## Part (a) Step 1: Given information

We have given the reaction 4p → ⁴He + energy, with mass of proton m_p = 1.673 × 10⁻²⁷ kg and mass of helium nucleus m_He = 6.644 × 10⁻²⁷ kg.

## Step 2: Simplify

The mass difference is Δm = 4m_p − m_He = 6.692 × 10⁻²⁷ kg − 6.644 × 10⁻²⁷ kg = 4.8 × 10⁻²⁹ kg. Then the energy released is E = Δm c² = (4.8 × 10⁻²⁹ kg)(3.00 × 10⁸ m/s)² ≈ 4.3 × 10⁻¹² J.

## Part (b) Step 1: Given information

We have the same masses as above, and we have to find the fraction of the initial rest mass energy.

## Step 2: Simplify

The initial rest mass energy is E₀ = 4m_p c² = (6.692 × 10⁻²⁷ kg)(3.00 × 10⁸ m/s)² ≈ 6.0 × 10⁻¹⁰ J. Then the fraction is E/E₀ ≈ (4.3 × 10⁻¹²)/(6.0 × 10⁻¹⁰) ≈ 0.007, or about 0.7%.
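The arithmetic can be reproduced in a short script (a sketch; the proton and helium-4 masses below are the standard textbook values):

```python
# 4 p -> He-4: mass defect and energy released per fusion.
m_p  = 1.673e-27    # kg, proton mass (four significant figures)
m_He = 6.644e-27    # kg, helium-4 nucleus mass
c    = 3.00e8       # m/s, speed of light

dm = 4 * m_p - m_He          # mass defect, 4.8e-29 kg
E  = dm * c**2               # energy released per fusion
E0 = 4 * m_p * c**2          # rest energy of the four initial protons
print(f"E    = {E:.2e} J")   # -> E    = 4.32e-12 J
print(f"E/E0 = {E/E0:.4f}")  # -> E/E0 = 0.0072 (about 0.7%)
```

Note that only about 0.7% of the initial rest mass is converted to energy, which is why the sun can burn for billions of years.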
http://mathhelpforum.com/advanced-algebra/99413-please-verify.html
Vectors $a, b, c$ ( $\vec{i}=i$) make up a tetrahedron with volume $V=3$. What's the volume of the tetrahedron made up of the vectors $a\times b, b\times c, a\times c$? Solutions aren't given, so I kindly ask for a recheck. Setup: $V=\frac{1}{6}(a,b,c)=3\Rightarrow a\cdot(b\times c)=18\Rightarrow b\times c=\frac{18}{a}$ Attempt: $(a\times b, b\times c, a\times c)=((a\times b)\times(a\times c))\cdot b\times c=(-\langle b, a\times c\rangle\cdot a)\cdot b\times c=(-\langle b, a\times c\rangle\cdot a)\frac{18}{a}=$ $=-18\langle b, a\times c \rangle=-18(b\cdot (a\times c))=-18(b,a,c)=18(a,b,c)=54$ Isn't the volume of the second tetrahedron too large? 2. What does it mean to divide by a vector? What does the expression $\frac{18}{a}$ mean? 3. Let $[\mathbf a,\mathbf b,\mathbf c]$ refer to the scalar triple product $\mathbf a\times\mathbf b\cdot\mathbf c$ so that the volume of the tetrahedron is $V=|[\mathbf a,\mathbf b,\mathbf c]|/6$. The volume of the new tetrahedron is $V'=|[\mathbf b\times\mathbf c,\mathbf c\times\mathbf a,\mathbf a\times\mathbf b]|/6$. As is well known, $[\mathbf b\times\mathbf c,\mathbf c\times\mathbf a,\mathbf a\times\mathbf b]=[\mathbf a,\mathbf b,\mathbf c]^2$. Hence $V'=6V^2=54$ as claimed. Although it came close, the method shown by courteous was ultimately incorrect -- that answer is right for the wrong reasons. 4. Originally Posted by halbard: As is well known, $[\mathbf b\times\mathbf c,\mathbf c\times\mathbf a,\mathbf a\times\mathbf b]=[\mathbf a,\mathbf b,\mathbf c]^2$. Can you expand on the "well known" (Google is useless)? BTW, did you intentionally write $c\times a$ instead of $a\times c$ (or does it not matter for the given problem)?
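Halbard's identity $[\mathbf b\times\mathbf c,\mathbf c\times\mathbf a,\mathbf a\times\mathbf b]=[\mathbf a,\mathbf b,\mathbf c]^2$ is easy to check numerically (a sketch with random vectors; `triple` is an illustrative helper):

```python
import numpy as np

def triple(u, v, w):
    """Scalar triple product [u, v, w] = (u x v) . w."""
    return float(np.dot(np.cross(u, v), w))

rng = np.random.default_rng(0)
a, b, c = rng.standard_normal((3, 3))   # three random vectors in R^3

lhs = triple(np.cross(b, c), np.cross(c, a), np.cross(a, b))
rhs = triple(a, b, c) ** 2
print(np.isclose(lhs, rhs))   # the identity holds for any a, b, c

# With |[a, b, c]|/6 = 3, i.e. [a, b, c] = +/-18, the new volume is 18^2 / 6 = 54.
print(18**2 / 6)              # -> 54.0
```

Since the identity squares the triple product, the sign (and hence the order $c\times a$ versus $a\times c$) does not affect the resulting volume.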
https://www.dreamincode.net/forums/topic/321825-linear-algebra-primer-part-2-linear-independence-and-matroids/
# Linear Algebra Primer: Part 2- Linear Independence and Matroids

### #1 macosxnerd101

Posted 25 May 2013 - 06:51 PM

This tutorial will introduce the concept of Linear Independence. It will also briefly introduce Matroid Theory, which branches into the realms of graph theory and abstract algebra as well.

Independence

In Graph Theory, Abstract Algebra, and Linear Algebra, there is a concept known as (in)dependence. The idea behind independence is whether a structure is acyclic. In Graph Theory, that is pretty easy to see. Any forest or tree is independent, as they are acyclic. Another way to look at independence from a Graph Theory perspective is that there exists a unique path between any two vertices. If there is a cycle, there are at least two paths between some pair of vertices. Consider the trivial cycle C3 with vertices labeled A, B, and C. From vertex A to C, there are two paths: A-C and A-B-C. There are two paths from A to B and from B to C as well. Consider the examples below of independent and dependent graphs. It is clear that the first graph is independent, as it has no cycles. The second graph is dependent, as it contains a cycle as a subgraph. The last graph is a circuit itself, so it is clearly dependent. Let's now discuss Linear Independence, which is what pertains to Linear Algebra.
A set of vectors (call it S) is defined to be linearly independent when no vector in the set can be formed from a linear combination of the other vectors. Another way to describe linear independence is in terms of span. So if S is linearly independent, then all the vectors in span(S) (the span of S is the set of all vectors formed from linear combinations of the vectors in S) are formed from a unique linear combination of the vectors in S. A set of vectors that is not linearly independent is called linearly dependent. This concept of linear independence is a little abstract, so let's decompose it. Consider the following vector sets: • S = {(1, 2, 3), (4, 5, 6)}: Here, S is linearly independent. There is no way to form (4, 5, 6) from multiples of (1, 2, 3). • S = {(1, 2, 3), (2, 4, 6), (3, 4, 5)}: Here S is linearly dependent. It is clear that (2, 4, 6) = 2(1, 2, 3) + 0(3, 4, 5), so a vector in S is formed from a linear combination of the other two vectors. Let's back up and revisit the graph theory intuition. A structure is graphically independent if there exists a unique path for all pairs of vertices. Regarding linear independence, if there is a unique linear combination for each vector in span(S), that could be thought of as a unique path. Similarly, if S is linearly dependent, then the linear combination to form an arbitrary vector v in S could be substituted in for v to create two linear combinations. Thus, there are two paths to the same end result. It is easy for small sets to determine independence. It gets trickier to eyeball and construct linear combinations for larger vector sets, especially when the Vector Space has more than 2-3 dimensions. Let's talk about some heuristics to use: • The Multiples Test: If there are two vectors in the set, a and b, such that ka = b for some constant k, then the set of vectors is linearly dependent. • Determinant Test: If the Vector Space is of dimension n and |S| = n, then the determinant test can be used.
Consider a matrix M whose column vectors are the vectors in S. The set S is linearly independent if and only if det(M) != 0. • Dimension Test: If the Vector Space is of dimension n and |S| > n, then S is linearly dependent. Let's explore the intuition behind this a little more. If there are n dimensions, then there are n independent coordinate axes. So a subset of independent axes (or vectors) is independent. However, any extra axes produce a dependence. It isn't necessary to have two y-axes when only one will do. • Linear Combinations Test: When all else fails, this is a good test to fall back on. Row reduction can help expedite the process. Consider a matrix M whose column vectors are the vectors in S. The set S is linearly independent if and only if the only vector x satisfying the equation Mx = 0, where x and 0 are vectors, is the zero vector. Otherwise, S is linearly dependent.

Introduction to Matroid Theory

Matroids are structures that encapsulate this concept of independence found in graph theory, abstract algebra, and linear algebra. A Matroid is constructed from a ground set G. From this ground set, a second set I is constructed that contains all the independent subsets of the ground set. Thus, if the ground set is independent, then I = P(G), where P(G) is the power set of G. Matroids allow for isomorphisms between Linear Algebra and Graph Theory. When dealing with Graphs, Matroids are constructed from the edge sets. Thus, the intuition developed above regarding graph theory and independence is more than just intuition. Matroids are a tool to answer graph questions using linear algebra, and linear algebra questions using graph theory. Some of these applications include path finding, matchings, scheduling, spanning trees, and planarity. Matroids have three fundamental properties or axioms. The first property states that every subset of an independent set is independent; equivalently, no proper subset of a circuit is a circuit. Think about it this way.
If a set of vectors is linearly independent, that means that no vector in the set can be formed from a linear combination of the other vectors. So removing a vector from the set won't change this fact. From a graph theory perspective, a circuit is formed by adding edges, not removing them. Thus, this first property makes sense. The second property states that the null set is independent, which follows from the first property. The null set has no elements; thus, no circuits. The final property states that all maximal independent subsets of the ground set have the same cardinality. Consider the graph Cn. Clearly, removing one edge from the graph leaves a spanning tree, which is independent. Any arbitrary edge can be removed for the same result: a maximally independent subgraph. The same argument can be made for a set of Vectors.

Conclusion

I hope that this tutorial has been helpful in introducing the concept of Linear Independence. The introduction of Matroids is but the beginning of where Linear Algebra overlaps with Graph Theory. There will be future tutorials on Algebraic Graph Theory, which utilizes Linear Algebra to analyze graphs.
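As a concrete companion to the independence tests described earlier, here is a sketch in NumPy (`is_independent` is an illustrative helper name; it implements the rank form of the Mx = 0 test):

```python
import numpy as np

def is_independent(vectors, tol=1e-10):
    """Stack the vectors as columns of M and check that the only solution
    of Mx = 0 is x = 0, i.e. that rank(M) equals the number of vectors."""
    M = np.column_stack(vectors)
    return int(np.linalg.matrix_rank(M, tol=tol)) == len(vectors)

S1 = [np.array([1., 2., 3.]), np.array([4., 5., 6.])]
S2 = [np.array([1., 2., 3.]), np.array([2., 4., 6.]), np.array([3., 4., 5.])]
S3 = S1 + [np.array([0., 1., 0.]), np.array([0., 0., 1.])]  # four vectors in R^3

print(is_independent(S1))  # -> True:  (4,5,6) is not a multiple of (1,2,3)
print(is_independent(S2))  # -> False: (2,4,6) = 2*(1,2,3)  (multiples test)
print(is_independent(S3))  # -> False: |S| > n               (dimension test)
```

The rank check subsumes the determinant test (for square M, full rank is equivalent to det(M) != 0) and handles non-square cases as well.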
https://www.physicsforums.com/threads/thermo-question-two-methods-of-changing-volume-temp.632322/
Homework Help: Thermo question, two methods of changing volume/temp

1. Aug 30, 2012 Greger

Does this look right? For the quasistatic case you can use thermodynamics to find the temperature at any time; for the other case you have to use dU = dW and so on, since it's not quasistatic.

2. Aug 30, 2012 ehild

It is correct for the constant external pressure. But what is the final temperature in the quasi-static case? It is true that $T_f = P V_f / (N k_B)$, but P changes during the process. How do you get the final pressure $P_f$?

ehild

3. Aug 31, 2012 Greger

Oh right, so the quasistatic case should be: $T_f = P_f V_f / (N k_B)$. For some reason I forgot that the pressure was changing in that case as well. This makes more sense: in the quasistatic case you can find the state of the system at any point in terms of the state variables, and P is one of those too! Thanks ehild =]

4. Aug 31, 2012 ehild

Can you show what you got? Just to complete the solution, so that other people can learn from it.

ehild

5. Aug 31, 2012 Greger

Quasistatic case: $T_f = P_f V_f / (N k_B)$. Under constant pressure: $T_f = 2 P_{ex}(V_i - V_f) / (3 N k_B) + T_i$

6. Aug 31, 2012 ehild

Well, it is not the solution yet. You need to give $T_f$ in terms of the initial and final volumes and the initial temperature. Do you know the equation that governs a quasistatic adiabatic process?

ehild

7. Aug 31, 2012 Greger

Oh, do you mean something like $PV^\gamma = \text{const}$, where gamma is the ratio of heat capacities under constant pressure / volume?

8. Aug 31, 2012 ehild

Yes. But you can combine it with the ideal gas law to give an equation between V and T.

ehild

9. Aug 31, 2012 Greger

Yeah, there's a few, like $TV^{\gamma - 1} = \text{constant}$ and $PV^\gamma = \text{const}$, but wouldn't mucking around with these and introducing them into my problem make it like $T_f = P_f V_f / (N k_B) = P_i V_f^{1-\lambda} V_i^{\lambda} / (N k_B)$, using λ as gamma? With $PV^\gamma = \text{const} = C$, say, $T_f = C V_f^{1-\lambda} / (N k_B)$; then I have this unknown constant C.

10. Aug 31, 2012 ehild

That is a good start. You can find that constant, as you know the initial volume and temperature: $T_f V_f^{\gamma-1} = T_i V_i^{\gamma-1}$.
From the equation for the internal energy you can see that it is a monatomic gas, so you know $C_v$, and you also know the relation between $C_v$ and $C_p$. What is $\gamma - 1$ then?

ehild

11. Aug 31, 2012 Greger

$C = N k_B T_f V_f^{\gamma-1} = N k_B T_i V_i^{\gamma-1}$, so $T_f = T_i (V_f / V_i)^{\gamma-1}$; for $\gamma = C_p/C_v$, $T_f = T_i (V_f / V_i)^{C_p/C_v - 1}$. Is that kind of what you mean?

12. Aug 31, 2012 ehild

Yes, but you made a little error: $T_f = T_i (V_i / V_f)^{\gamma-1}$. And you know the numerical value of $C_p/C_v$; what is it?

ehild

13. Aug 31, 2012 Greger

Oh whoops, $T_f = T_i (V_i / V_f)^{5/3 - 1} = T_i (V_i / V_f)^{2/3}$. Thanks ehild

14. Aug 31, 2012 ehild

It is the SOLUTION now

ehild
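The final result can be sketched numerically (illustrative numbers only; the thread does not give specific volumes):

```python
# Quasistatic adiabatic process for a monatomic ideal gas (gamma = 5/3):
# T_f = T_i * (V_i / V_f)**(gamma - 1)
gamma = 5.0 / 3.0

def final_temperature(T_i, V_i, V_f):
    """Final temperature after a quasistatic adiabatic volume change."""
    return T_i * (V_i / V_f) ** (gamma - 1.0)

# Example: gas at 300 K, volume halved quasistatically.
T_f = final_temperature(300.0, 2.0, 1.0)
print(f"T_f = {T_f:.1f} K")   # 300 * 2**(2/3), about 476 K
```

Compression (V_f < V_i) raises the temperature, as expected for an adiabatic process in which work is done on the gas.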
# Kerr black holes with Proca hair

Departamento de Física da Universidade de Aveiro and Center for Research and Development in Mathematics and Applications (CIDMA), Campus de Santiago, 3810-183 Aveiro, Portugal

March 2016

###### Abstract

Bekenstein proved that, in Einstein's gravity minimally coupled to one (or many) real, Abelian Proca fields, stationary black holes (BHs) cannot have Proca hair. Dropping Bekenstein's assumption that matter inherits the spacetime symmetries, we show this model admits asymptotically flat, stationary, axi-symmetric BHs with Proca hair, regular on and outside an event horizon, for an even number of real (or an arbitrary number of complex) Proca fields. To establish this, we start by showing that a test, complex Proca field can form bound states, with real frequency, around Kerr BHs: stationary Proca clouds. These states exist at the threshold of superradiance. It was conjectured in [1, 2] that the existence of such clouds at the linear level implies the existence of a new family of BH solutions at the non-linear level. We confirm this expectation and explicitly construct examples of such Kerr black holes with Proca hair (KBHsPH). For a single complex Proca field, these BHs form a countable number of families with three continuous parameters (ADM mass, ADM angular momentum and Noether charge). They branch off from the Kerr solutions that can support stationary Proca clouds and reduce to Proca stars [3] when the horizon size vanishes. We present the domain of existence of one family of KBHsPH, as well as its phase space in terms of ADM quantities. Some physical properties of the solutions are discussed; in particular, and in contrast with Kerr BHs with scalar hair, some spacetime regions can be counter-rotating with respect to the horizon.
We further establish a no-Proca-hair theorem for static, spherically symmetric BHs, but allowing the complex Proca field to have a harmonic time dependence, which shows that BHs with Proca hair in this model require rotation and have no static limit. KBHsPH are also disconnected from Kerr–Newman BHs with a real, massless vector field.

## 1 Introduction

In vacuum General Relativity (GR) black holes (BHs) are remarkably simple. The Carter–Robinson theorem [4, 5], supplemented by the rigidity theorem [6, 7], established that asymptotically flat, stationary, non-singular (on and outside an event horizon) vacuum BHs of GR have only two degrees of freedom – see [8] for a review. The most general BH solution in this context is the Kerr metric [9] and the two degrees of freedom are the ADM mass, $M$, and angular momentum, $J$, both of which can be determined by an observer at infinity. The natural question of how this result generalizes in the presence of matter led to the no-hair hypothesis [10]: regardless of the matter involved, the end-point of gravitational collapse – in GR and in an astrophysical context – is characterized solely by conserved charges associated to Gauss laws, including $M$ and $J$, and no further parameters (hair). Thus, an observer at infinity should be able to fully compute all relevant "charges" of an equilibrium BH. Evidence in favour of this hypothesis has been presented in terms of no-hair theorems for particular matter models in GR. A collection of such theorems for the much studied case of scalar matter can be found in the recent review [11]. Of relevance for the present paper, Bekenstein established a no-Proca-hair theorem for stationary BH solutions of Einstein's gravity minimally coupled to one (or more) real, Abelian Proca fields [12, 13], which will be reviewed in Section 3.1.
Evidence against the no-hair hypothesis in asymptotically flat spacetimes, on the other hand, has been presented in the form of hairy BH solutions, starting with the pioneering examples in Yang–Mills theory [14] (see also the reviews [15, 16, 11, 17]). Such counter-examples, however, typically either 1) violate some energy condition (e.g. [18, 19, 20, 21, 22]); or 2) have non-minimal couplings between matter and geometry (e.g. [23, 24, 25, 26, 27, 28]); or 3) have non-canonical/non-linear kinetic terms (e.g. [29, 30, 31, 32]); or 4) the hair is not independent of other fields, such as an electromagnetic field (secondary hair, e.g. [33, 34]); or 5) involve higher curvature terms (e.g. [35, 36, 37, 38, 39, 40, 41, 42]); or 6) several of the above. It is unclear, moreover, if any of these counter-examples violates the dynamical spirit of the no-hair hypothesis; that is, if there are dynamically stable hairy BHs that can be the end-point of (or be sufficiently long lived in) a dynamical evolution. In a qualitatively novel development, a class of BH solutions with scalar hair was found in 2014 bifurcating from the Kerr metric [1]: Kerr BHs with scalar hair (KBHsSH). These are solutions of the simple model

$$ S=\int d^4x\sqrt{-g}\left[\frac{R}{16\pi G}-\frac{g^{\alpha\beta}}{2}\left(\Psi^*_{,\alpha}\Psi_{,\beta}+\Psi^*_{,\beta}\Psi_{,\alpha}\right)-\mu^2\Psi^*\Psi\right]\ , \qquad (1.1) $$

that 1) obey all energy conditions; 2) have minimal couplings with the geometry; 3) have canonical kinetic terms; 4) have independent (primary) hair; 5) exist in GR, without higher curvature terms. KBHsSH, moreover, are asymptotically flat, regular on and outside the event horizon, reduce to (specific) Kerr solutions in the limit of vanishing hair, and to gravitating solitons known as boson stars [43, 44] in the limit of vanishing horizon. The scalar hair is described by an independent, conserved Noether charge, but without an associated Gauss law. Thus, an observer at infinity cannot determine this charge – which must be computed by a volume integral – and hence does not have access to all the relevant spacetime charges.
The matter content for the original example in [1] (see also [45]) was a massive complex scalar field, cf. (1.1). In GR minimally coupled to this type of matter the Kerr BH is a solution, together with a vanishing scalar field, but it is unstable against superradiance [46, 47, 48]. At the threshold of the instability, there are bound states of the scalar field on the Kerr background, found in a test field analysis, corresponding to linear hair. The existence of these stationary scalar clouds [49, 50, 1, 51, 52, 53, 54] determines the bifurcation point of the hairy solutions from vacuum Kerr. Moreover, since the latter solution is unstable against superradiant scalar perturbations, there is an expectation that the BHs of [1] play a role in the non-linear development of the instability and can effectively form dynamically, thus providing a true counter-example to the physical implications of the no-hair hypothesis – see [55, 56] for recent discussions of the non-linear development of superradiant instabilities into hairy BHs. The connection between KBHsSH and superradiance led to the suggestion that, underlying the example of KBHsSH, there is a more general mechanism [1, 2] (see also [11, 45]): Conjecture: 1) If a "hairless" stationary BH spacetime is afflicted by superradiant instabilities triggered by a given test field $\Psi$; 2) If the field modes at the threshold of the instability (zero modes), $\Psi_0$, yield an energy-momentum tensor which is time-independent, $\mathcal{L}_k T_{\mu\nu}(\Psi_0)=0$, where $k$ is the time-like Killing vector field (at infinity) that preserves the metric; Then: there is a new family of stationary BH "hairy solutions" bifurcating from the hairless one. Actually, there may be a countable set of such families. In the case of KBHsSH, one encounters a family with three continuous and two discrete degrees of freedom.
The former are the ADM mass $M$, the ADM angular momentum $J$ and the Noether charge $Q$; the latter, which define a countable set of families, are the node number, $n$, and the azimuthal harmonic index, $m$, of the scalar field. A formal proof of the existence of these solutions was recently reported in [57]. KBHsSH were generalized to include self-interactions of the scalar field in [58] and to scalar-tensor gravity in [59]. As further evidence for the above conjecture we consider, in this paper, Einstein's gravity minimally coupled to Abelian Proca fields, hereafter referred to simply as Proca fields. (Footnote: gravitating non-Abelian Proca fields have been studied in [63], wherein spherically symmetric solitons and BHs have been discussed. The properties of those solutions are rather distinct from the solutions discussed in this paper and, moreover, the former have not been generalized to include rotation.) Massive Proca fields trigger, in much the same way as massive scalar fields, superradiant instabilities of Kerr BHs – see [60, 61, 62] for recent studies of Proca-induced superradiant instabilities in asymptotically flat BHs. Firstly, we shall perform a test field analysis of a Proca field on the Kerr background. We observe that, at the threshold of the unstable modes, one can find stationary Proca clouds. If the Proca field is complex, moreover, the energy-momentum tensor sourced by these stationary clouds is time-independent. Hence, we are in the conditions of the above conjecture. Secondly, we address the fully non-linear system of a complex Proca field minimally coupled to GR and construct stationary BH solutions which are the non-linear realization of the aforementioned stationary Proca clouds: Kerr BHs with Proca hair (KBHsPH). When the horizon of these BHs vanishes, the solutions reduce to the rotating Proca stars recently constructed in [3]. These are vector boson stars which share many of the properties of the scalar boson stars that have been studied for decades [43, 44].
The introduction of the mass term for the vector fields is central for the existence of KBHsPH, since it is crucial for both the existence of the stationary Proca clouds and Proca stars. In the (Proca field) massless limit these BHs trivialize; they are not connected to Kerr-Newman BHs. The presence of such mass terms implies that there is no Gauss law associated to the vector field; massive fields have no Gauss law since there is no flux conservation. Indeed, in asymptotically flat spacetimes, a massive field which decays towards spatial infinity will do so exponentially. Thus, the integral of its flux density over a sphere at infinity will necessarily vanish. This does not mean, however, that massive fields cannot be locally conserved. Both the complex Proca field and the complex, massive scalar field enjoy a global symmetry which implies a conserved current and a conserved Noether charge. There is a local continuity equation, but no Gauss law. Thus, according to the no-hair hypothesis there should be no Proca hair around stationary BHs. Here, however, we show that there can be. And again, an observer at infinity does not have access to all relevant spacetime charges. This paper is organized as follows. In Section 2 we exhibit the Einstein–complex-Proca model and its basic properties. In Section 3 we review the classic no-Proca-hair theorem by Bekenstein [12, 13] and also present a novel no-Proca-hair theorem applying to spherically symmetric solutions and allowing the Proca field to have a harmonic time dependence. The latter is a generalization of the theorem presented in [64] for the scalar case and it establishes that rotation is crucial for the existence of KBHsPH. In Section 4 we consider the construction of stationary Proca clouds around Kerr and obtain one existence line for a particular set of “quantum” numbers. 
In Section 5 we briefly review some of the main features of Proca stars, which form a limiting case of KBHsPH, and discuss some of their physical properties in the rotating case. In Section 6 we finally construct KBHsPH, discussing the ansatz and boundary conditions and solving the field equations numerically. We then exhibit the domain of existence and phase space of one family of solutions, and discuss the Proca energy distribution in the spacetime as well as some other physical features of these BHs. We close with a discussion of the results of this paper and some of the open directions for future related research. In the Appendices we provide some technical results, including the explicit expressions for the Einstein tensor, the Proca energy-momentum tensor and the Proca field equations.

## 2 Einstein–complex-Proca model

The field equations for a massive vector field were introduced by A. Proca [65] in the 1930s. Much more recently, gravitating Proca fields have been discussed by various authors – see e.g. [66, 67, 68]. Here, we shall consider two real Proca fields, both with mass $\mu$, but our discussion can be easily generalized to an arbitrary even number of real Proca fields (or an arbitrary number of complex ones). The two fields are described by the potential 1-forms $A^{(a)}$, $a=1,2$, and field strengths $F^{(a)}=dA^{(a)}$. It is convenient to organize them into a single complex Proca field:

$$ A=A^{(1)}+iA^{(2)}\ ,\qquad F=F^{(1)}+iF^{(2)}\ . \qquad (2.1) $$

We denote the complex conjugate by an overbar,

$$ \bar A=A^{(1)}-iA^{(2)}\ ,\qquad \bar F=F^{(1)}-iF^{(2)}\ . \qquad (2.2) $$

Considering that the two Proca fields do not couple to each other and couple minimally to gravity, one obtains the minimal Einstein–complex-Proca model, which is described by the action:

$$ S=\int d^4x\sqrt{-g}\left(\frac{R}{16\pi G}-\frac{1}{4}F_{\alpha\beta}\bar F^{\alpha\beta}-\frac{1}{2}\mu^2 A_\alpha\bar A^\alpha\right)\ . \qquad (2.3) $$

This (or its version with a single real Proca field) is the action considered in previous studies of the Einstein–Proca model, see refs. [66, 69]. (Footnote: we remark that these works did not succeed in finding regular particle-like solutions or BHs with Proca hair.)
Varying (2.3) with respect to the potential yields the Proca field equations

$$ \nabla_\alpha F^{\alpha\beta}=\mu^2 A^\beta\ . \qquad (2.4) $$

Observe that these equations completely determine $A$ once $F$ is known. Thus, the Proca potential is not subject to gauge transformations, unlike the Maxwell potential, and it is as physical as the field strength. In particular, (2.4) implies the Lorentz condition, which is thus a dynamical requirement rather than a gauge choice:

$$ \nabla_\alpha A^\alpha=0\ . \qquad (2.5) $$

As usual, the Einstein equations are found by taking the variation of (2.3) with respect to the metric tensor,

$$ R_{\alpha\beta}-\frac{1}{2}Rg_{\alpha\beta}=8\pi G\,T_{\alpha\beta}\ , \qquad (2.6) $$

$$ T_{\alpha\beta}=\frac{1}{2}\left(F_{\alpha\sigma}\bar F_{\beta\gamma}+\bar F_{\alpha\sigma}F_{\beta\gamma}\right)g^{\sigma\gamma}-\frac{1}{4}g_{\alpha\beta}F_{\sigma\tau}\bar F^{\sigma\tau}+\frac{1}{2}\mu^2\left[A_\alpha\bar A_\beta+\bar A_\alpha A_\beta-g_{\alpha\beta}A_\sigma\bar A^\sigma\right]\ . \qquad (2.7) $$

The action possesses a global symmetry, since it is invariant under the transformation $A\to e^{i\alpha}A$, with $\alpha$ constant; this implies the existence of a 4-current,

$$ j^\alpha=\frac{i}{2}\left[\bar F^{\alpha\beta}A_\beta-F^{\alpha\beta}\bar A_\beta\right]\ , \qquad (2.8) $$

which is conserved by virtue of the field equations (2.4): $\nabla_\alpha j^\alpha=0$. Consequently, there exists a Noether charge, $Q$, obtained by integrating the temporal component of the 4-current on a space-like slice $\Sigma$:

$$ Q=\int_\Sigma d^3x\, j^0\ . \qquad (2.9) $$

We emphasize that, unlike in the massless limit of the theory, wherein the global symmetry becomes local, the last integral cannot be converted into a surface integral. In other words, there is no Gauss law.

## 3 No-Proca-hair theorems

If one considers Maxwell's equations for a test field with a spherically symmetric ansatz (a purely radial electric field) on the Schwarzschild background, one finds a regular solution on and outside the Schwarzschild horizon (see e.g. Section 2.1 in [11]). This is a smoking gun that a spherically symmetric field can be added, non-linearly, to the Schwarzschild solution, which indeed yields the well-known Reissner–Nordström BH. Adding a mass term to the Maxwell field – hence converting it into a Proca field – drastically alters the behaviour of the test field solution: it is not possible to find a solution which is both finite at the horizon and at spatial infinity, no matter how small $\mu$ is.
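The statement below (2.4) — that the Lorentz condition (2.5) is a dynamical consequence of the field equations rather than a gauge choice — follows from taking a divergence of (2.4) and using the antisymmetry of $F$. As a sketch of my own (not from the paper), this can be checked symbolically in flat spacetime with Cartesian coordinates and mostly-plus metric, for a real Proca field with four arbitrary potential components:

```python
import sympy as sp

# Flat-spacetime check that the divergence of the Proca field equations
# enforces the Lorentz condition.  Metric: eta = diag(-1, 1, 1, 1).
t, x, y, z = coords = sp.symbols('t x y z')
eta = sp.diag(-1, 1, 1, 1)  # equal to its own inverse

# Four arbitrary potential components A_mu(t, x, y, z)
A = [sp.Function(f'A{mu}')(*coords) for mu in range(4)]

# Field strength F_{mu nu} = d_mu A_nu - d_nu A_mu
F = [[sp.diff(A[nu], coords[mu]) - sp.diff(A[mu], coords[nu])
      for nu in range(4)] for mu in range(4)]

# Raise indices: F^{mu nu} = eta^{mu a} eta^{nu b} F_{a b}
Fup = [[sum(eta[mu, a] * eta[nu, b] * F[a][b]
            for a in range(4) for b in range(4))
        for nu in range(4)] for mu in range(4)]

# Left-hand side of (2.4): d_alpha F^{alpha beta}
divF = [sum(sp.diff(Fup[al][be], coords[al]) for al in range(4))
        for be in range(4)]

# A further divergence kills it identically (antisymmetry of F),
# so on-shell mu^2 d_beta A^beta = 0, i.e. the Lorentz condition.
ddF = sum(sp.diff(divF[be], coords[be]) for be in range(4))
assert sp.simplify(ddF) == 0
```

In curved spacetime the same cancellation happens covariantly, $\nabla_\beta\nabla_\alpha F^{\alpha\beta}=0$, giving (2.5) whenever $\mu\neq 0$.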
In particular, for the asymptotically (exponentially) decaying solution, the Proca potential squared diverges at the horizon [70] – see Section 3.2. Thus, requiring any amount of Proca field in equilibrium outside the horizon implies an infinite pile-up of Proca invariants at the horizon. This behaviour parallels that of a scalar field (massless or massive) discussed in [11] and is intimately connected with the existence/absence of a Gauss law for the Maxwell/Proca field. Moreover, it shows one cannot find a regular, spherically symmetric BH solution with (time-independent) Proca hair bifurcating from the Schwarzschild solution. We shall review in Section 3.1 a more robust argument for the inexistence of stationary BHs with Proca hair, due to Bekenstein [12, 13], which applies to our model (2.3). A fundamental assumption in the argument is that the Proca field and the background share the same symmetries. This symmetry inheritance of the spacetime symmetries by the matter fields is precisely the assumption that the KBHsPH presented later in this paper will violate. Then, in Section 3.2, we show that, even dropping the symmetry inheritance assumption, one can establish a no-hair theorem for spherically symmetric BHs. This is compatible with the KBHsPH solutions presented here, which are stationary and axi-symmetric, and shows that these solutions cannot have a static limit. This fact is in agreement with the domain of existence of KBHsPH, cf. Section 6.2.

### 3.1 Bekenstein's theorem

Following Bekenstein [12, 13], we consider a rotating, stationary, asymptotically flat BH spacetime. For matter obeying the null energy condition, the rigidity theorem implies that the spacetime is also axi-symmetric [6]. We write the spacetime metric in coordinates adapted to these symmetries, $(t,r,\theta,\varphi)$, so that the two Killing vector fields read $k=\partial_t$, $m=\partial_\varphi$. For simplicity we consider the Proca field to be real.
But the proof generalizes straightforwardly to an arbitrary number of real Proca fields, and in particular to a complex Proca field. We denote the real Proca potential and field strength as $A$ and $F=dA$, respectively. We assume that this field inherits the spacetime symmetries. In particular, for the coordinates chosen above this means that:

$$ \mathcal{L}_k A_\alpha=\mathcal{L}_m A_\alpha=0=\mathcal{L}_k F_{\alpha\beta}=\mathcal{L}_m F_{\alpha\beta}\ . \qquad (3.1) $$

The proof proceeds as follows. We contract the Proca equation with $A_\beta$ and integrate over the BH exterior spacetime:

$$ \int d^4x\sqrt{-g}\left[A_\beta\nabla_\alpha F^{\alpha\beta}-\mu^2 A_\beta A^\beta\right]=0\ . \qquad (3.2) $$

Next, integrating the first term by parts:

$$ \int d^4x\sqrt{-g}\left[\frac{F_{\alpha\beta}F^{\alpha\beta}}{2}+\mu^2 A_\beta A^\beta\right]-\int_{\mathcal H} d^3\sigma\, n_\alpha A_\beta F^{\beta\alpha}=0\ , \qquad (3.3) $$

where the boundary term is computed on the (spatial section of the) horizon, $\mathcal H$, and the other boundary term (at infinity) vanishes since the Proca field falls off exponentially fast. Now we argue that the boundary term in (3.3) is zero. This follows from the symmetries imposed, which imply that the contractions of $A_\beta F^{\beta\alpha}$ with both $k_\alpha$ and $m_\alpha$ vanish. (Footnote: this follows immediately from (3.1); non-vanishing contractions would imply non-vanishing $(tr)$ and $(t\theta)$ components of the energy-momentum tensor (2.7), which are incompatible with the symmetries of the problem.) Since the event horizon of a stationary, asymptotically flat spacetime is a Killing horizon, the normal to $\mathcal H$, $n_\alpha$, is a linear combination of the Killing vector fields. Then $n_\alpha A_\beta F^{\beta\alpha}=0$ on $\mathcal H$. We conclude that (footnote: we are implicitly assuming that $A$ and $F$ are finite on $\mathcal H$; this assumption actually breaks down in the massless case — a Maxwell field — due to gauge invariance)

$$ \int d^4x\sqrt{-g}\left[\frac{F_{\alpha\beta}F^{\alpha\beta}}{2}+\mu^2 A_\beta A^\beta\right]=0\ . \qquad (3.4) $$

Contrary to the scalar field case (see e.g. [11]), this integrand is not positive definite. Thus, a further argument is necessary, which can be constructed by using an orthonormal basis; we denote the corresponding flat indices by underlining, and flat indices are raised and lowered with the standard Cartesian Minkowski metric. Taking into account the components of the Proca potential and field strength allowed by symmetry, (3.4) becomes:

$$ \int d^4x\sqrt{-g}\left[(F_{\underline t\,\underline r})^2+(F_{\underline t\,\underline\theta})^2+(A_{\underline t})^2\right]=\int d^4x\sqrt{-g}\left[(F_{\underline\varphi\,\underline r})^2+(F_{\underline\varphi\,\underline\theta})^2+(A_{\underline\varphi})^2\right]\ . \qquad (3.5) $$

Analysing the time-reversal invariance of the Proca equation shows that $F_{\underline t\,\underline r},F_{\underline t\,\underline\theta},A_{\underline t}$ are even, whereas $F_{\underline\varphi\,\underline r},F_{\underline\varphi\,\underline\theta},A_{\underline\varphi}$ are odd, under time-reversal. Thus, expanding the Proca potential and field strength in a power series in the angular momentum of the background, the first (second) set of field/potential components contains only even (odd) powers. The zeroth order terms only get contributions from the left hand side of (3.5); since the corresponding integrand is strictly positive and the integral is zero, the zeroth order terms must vanish. Then, the first order terms only get contributions from the right hand side of (3.5); since the corresponding integrand is strictly positive and the integral is zero, the first order terms must vanish. In this way one shows iteratively that the Proca field/potential must vanish, and hence there is no Proca hair. Observe that this theorem did not use the Einstein equations. A different proof of the no-Proca-hair theorem, possibly including a cosmological constant and making use of the Einstein equations, has been given in [71].

### 3.2 A modified Peña–Sudarsky theorem

The theorem of the previous subsection relied on the symmetry inheritance of the spacetime isometries by the Proca field. In particular, the stationarity of the geometry implied time-independence of the Proca potential/field. Recently, however, gravitating solitons composed of self-gravitating Proca fields were found by allowing the complex Proca field to have a harmonic time dependence: Proca stars [3]. This time-dependence vanishes at the level of the energy-momentum tensor and is therefore compatible with a stationary geometry (see [72] for recent discussions of symmetry inheritance). Thus one may wonder if allowing the Proca field to have such a harmonic time dependence allows for BHs with Proca hair. The situation just described closely parallels the well-known picture for complex scalar fields.
The existence of scalar boson stars led Peña and Sudarsky to consider the possibility of spherically symmetric BH geometries with a scalar field possessing a harmonic time dependence. In this setup it was possible to establish a no-scalar-hair theorem, ruling out BHs with scalar hair even if the hair has such a harmonic time-dependence [64]. In the following we establish a no-Proca-hair theorem, allowing the complex Proca field to have a harmonic time dependence, for the case of spherical symmetry, by using a modified version of the arguments in [64]. We consider a spherically symmetric line element with the parametrization (see e.g. [3]):

$$ ds^2=-\sigma^2(r)N(r)dt^2+\frac{dr^2}{N(r)}+r^2d\Omega^2\ ,\qquad N(r)\equiv 1-\frac{2m(r)}{r}\ . \qquad (3.6) $$

The ansatz we consider for the complex Proca potential is also the one introduced in [3] for discussing spherical Proca stars, and it is the most general one compatible with spherical symmetry and staticity:

$$ A=e^{-iwt}\left[f(r)dt+ig(r)dr\right]\ . \qquad (3.7) $$

In the above relations, $f,g,\sigma,m$ are all real functions of the radial coordinate only and $w$ is the frequency parameter, which we take to be positive without any loss of generality. The Proca field equations (2.4) yield

$$ \frac{d}{dr}\left\{\frac{r^2\left[f'(r)-wg(r)\right]}{\sigma(r)}\right\}=\frac{\mu^2 r^2 f(r)}{\sigma(r)N(r)}\ , \qquad (3.8) $$

and

$$ f'(r)=wg(r)\left(1-\frac{\mu^2\sigma^2(r)N(r)}{w^2}\right)\ , \qquad (3.9) $$

where a prime denotes a radial derivative. The Lorentz condition, (2.5), determines $f$ in terms of the other functions:

$$ f(r)=-\frac{\sigma(r)N(r)}{wr^2}\,\frac{d}{dr}\left[r^2\sigma(r)N(r)g(r)\right]\ ; \qquad (3.10) $$

this can be rewritten as

$$ \frac{d}{dr}\left[r^2\sigma(r)N(r)g(r)\right]=-\frac{wr^2f(r)}{\sigma(r)N(r)}\ . \qquad (3.11) $$

Observe that (3.8)-(3.9) imply (3.11), as they should. The essential Einstein equations, (2.6), read (there is a further Einstein equation which is a differential consequence of these)

$$ m'=4\pi Gr^2\left[\frac{(f'-wg)^2}{2\sigma^2}+\frac{1}{2}\mu^2\left(g^2N+\frac{f^2}{N\sigma^2}\right)\right]\ ,\qquad \frac{\sigma'}{\sigma}=4\pi Gr\mu^2\left(g^2+\frac{f^2}{N^2\sigma^2}\right)\ . \qquad (3.12) $$

We also note that the $tt$-component of the energy-momentum tensor – the energy density – reads

$$ -T^t_{\ t}=\frac{(f'-wg)^2}{2\sigma^2}+\frac{1}{2}\mu^2\left(g^2N+\frac{f^2}{N\sigma^2}\right)\ . \qquad (3.13) $$

To establish the no-Proca-hair theorem, let us assume the existence of a regular BH solution of the above equations.
Then the geometry would possess a non-extremal horizon at, say, $r=r_H>0$, which requires that

$$ N(r_H)=0\ , \qquad (3.14) $$

since $r=r_H$ is a null surface. Since we are assuming that there are no more exterior horizons, the surfaces of constant $r>r_H$ are timelike and $N(r)>0$ there. Also, we can choose $\sigma(r)>0$ without loss of generality, since the equations of motion are invariant under $\sigma\to-\sigma$. It follows that $N$ and $\sigma$ are strictly positive functions for any $r>r_H$, as a consequence of the Einstein equations (3.12) and the assumption that there are no further, more exterior, horizons. The regularity of the horizon implies that the energy density of the Proca field is finite there. From (3.13) one can see that this implies

$$ f(r_H)=0\ . \qquad (3.15) $$

Then the function $f$ starts from zero at the horizon and remains strictly positive (or negative) in some $r$-interval. Now, let us assume $f'(r)>0$ for $r_H<r<r_1$. Thus $f$, in this interval, is a strictly increasing (and positive) function (the case $f'<0$ can be discussed in a similar way). Next, we consider the expression (which appears in (3.9))

$$ P(r)\equiv 1-\frac{\mu^2\sigma^2(r)N(r)}{w^2}\ . \qquad (3.16) $$

One can see that $P(r_H)=1$; actually, $P$ becomes negative for large $r$, since $\sigma,N\to 1$ as $r\to\infty$, while $w<\mu$ — a bound state condition necessary for an exponential decay of the Proca field at infinity. But the important point is the existence of an interval where $P$ is a strictly positive function. Let $r_2$ be the minimum between $r_1$ and the first zero of $P$. Then we observe that (3.11) implies

$$ r^2\sigma(r)N(r)g(r)=-w\int_{r_H}^r dx\,\frac{x^2f(x)}{\sigma(x)N(x)}<0 \qquad (3.17) $$

for any $r$ in the interval $(r_H,r_2)$. Consequently, $g(r)<0$ in this interval, since $\sigma,N$ are positive everywhere outside the horizon. The last conclusion implies a contradiction: $g<0$ is not compatible with $f'>0$ in that interval. In fact, $g<0$ implies, together with $P>0$ and (3.9), that $f'<0$. Thus we conclude that $f=g=0$ is the only solution compatible with a BH geometry ($r_H>0$). One final observation concerning static fields ($w=0$): in such cases, one has only an electric potential, $A=f(r)dt$.
Then, the Proca equations on a Schwarzschild background – taking the line element (3.6) with $\sigma=1$, $N=1-2M/r$ – can be solved in closed form by taking the ansatz [70]

$$ f(r)=\frac{e^{-\mu r}}{r}\,S(r)\ , \qquad (3.18) $$

where $S$ is a solution of the Kummer equation [73]

$$ z\frac{d^2S(z)}{dz^2}-z\frac{dS(z)}{dz}-M\mu\,S(z)=0\ ,\ \ {\rm with}\ \ z\equiv 2\mu(r-2M)\ . \qquad (3.19) $$

This equation possesses a solution which is regular on and outside the horizon; in particular, $f$ takes a constant non-zero value at the horizon ($z=0$). This implies, however, that the invariant $A_\alpha A^\alpha$ diverges at the horizon.

## 4 Stationary Proca clouds around Kerr

The theorem of subsection 3.2 leaves open the possibility that stationary (rather than static and spherically symmetric) BHs with Proca hair, possessing a harmonic time dependence, may exist. There is, moreover, a new physical ingredient in the stationary case which, indeed, makes their existence not only possible, but also natural: superradiance. Sufficiently low frequency modes of a test Proca field are amplified when scattering off a co-rotating Kerr BH, by extracting rotational energy and angular momentum from the BH, in a purely classical process. This process was studied in the slow rotation limit of Kerr in [60, 61], where it was used for placing bounds on the photon mass. Sufficiently high frequency modes, on the other hand (or any non-co-rotating mode), are partly absorbed in a similar scattering. The same two behaviours occur for gravitationally bound modes, with frequency lower than the Proca mass. These modes are generically quasi-bound states: they have a complex frequency. The amplified modes then become an instability of the background. Moreover, at the threshold between the two behaviours (growing and decaying modes), one finds bound states with a real frequency, which we dub stationary Proca clouds around Kerr BHs. We shall now sketch the study of the Proca bound states around Kerr BHs in a way suitable for the computation of KBHsPH. A more detailed account of stationary Proca clouds will appear elsewhere.
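A quick numerical illustration of the static no-bound-state statement of Section 3.2 (my own sketch, with the arbitrary sample value $M\mu=1$, not taken from the paper): integrating (3.19) for the branch of $S$ that vanishes at the horizon ($z=0$) shows it grows like $z^{M\mu}e^{z}$, so it cannot match the decaying asymptotics at infinity; the physically decaying solution therefore has $S(0)\neq 0$, and the invariant $A_\alpha A^\alpha\propto f^2/N$ blows up at the horizon, as stated above.

```python
c = 1.0  # c = M*mu, arbitrary sample value

def rhs(z, S, dS):
    # z S'' - z S' - c S = 0   =>   S'' = S' + c S / z
    return dS, dS + c * S / z

def integrate(z_end, z=1e-8, S=1e-8, dS=1.0, h=1e-3):
    """Classical RK4 for the branch regular at z = 0 (S ~ z for small z)."""
    while z < z_end:
        step = min(h, z_end - z)
        k1 = rhs(z, S, dS)
        k2 = rhs(z + step / 2, S + step / 2 * k1[0], dS + step / 2 * k1[1])
        k3 = rhs(z + step / 2, S + step / 2 * k2[0], dS + step / 2 * k2[1])
        k4 = rhs(z + step, S + step * k3[0], dS + step * k3[1])
        S += step / 6 * (k1[0] + 2 * k2[0] + 2 * k3[0] + k4[0])
        dS += step / 6 * (k1[1] + 2 * k2[1] + 2 * k3[1] + k4[1])
        z += step
    return S

S20 = integrate(20.0)
S30 = integrate(30.0)
# Exponential growth ~ z^c e^z: over Delta z = 10 the ratio should be
# of order e^10 * (30/20)^c, i.e. a few times 1e4
assert 5e3 < S30 / S20 < 1e6
```

The same qualitative conclusion holds for any $M\mu>0$: the horizon-regular branch of (3.19) is exponentially growing, so static Proca hair is excluded already at the test-field level.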
We use the parametrization of the Kerr metric introduced in [45]:

$$ ds^2=-e^{2F_0}Ndt^2+e^{2F_1}\left(\frac{dr^2}{N}+r^2d\theta^2\right)+e^{2F_2}r^2\sin^2\theta\,(d\varphi-Wdt)^2\ , \qquad (4.1) $$

where

$$ N\equiv 1-\frac{r_H}{r}\ , \qquad (4.2) $$

and $F_0,F_1,F_2,W$ are functions of the spheroidal coordinates $(r,\theta)$, which read, explicitly (footnote: the prolateness parameter $b$ used here relates simply to the corresponding parameter of [45]),

$$ e^{2F_1}=\left(1+\frac{b}{r}\right)^2+\frac{b(b+r_H)\cos^2\theta}{r^2}\ , $$
$$ e^{2F_2}=e^{-2F_1}\left\{\left[\left(1+\frac{b}{r}\right)^2+\frac{b(b+r_H)}{r^2}\right]^2-\frac{b(b+r_H)\left(1-\frac{r_H}{r}\right)\sin^2\theta}{r^2}\right\}\ , $$
$$ F_0=-F_2\ ,\qquad W=e^{-2(F_1+F_2)}\,\frac{\sqrt{b(b+r_H)}\,(r_H+2b)\left(1+\frac{b}{r}\right)}{r^3}\ . \qquad (4.3) $$

The relation between these coordinates and the standard Boyer–Lindquist coordinates is simply a radial shift:

$$ r=R-\frac{a^2}{R_H}\ , \qquad (4.4) $$

where $R_H$ is the Boyer–Lindquist radial coordinate of the event horizon, $R_H=M+\sqrt{M^2-a^2}$, for a Kerr BH with mass $M$ and angular momentum $J=aM$. In the new coordinate system, the Kerr solution is parameterized by $r_H$ and $b$, which relate to the Boyer–Lindquist parameters as

$$ r_H=R_H-\frac{a^2}{R_H}\ ,\qquad b=\frac{a^2}{R_H}\ . \qquad (4.5) $$

Clearly, $r_H$ fixes the event horizon radius; $b$ is a spheroidal prolateness parameter (see Appendix A), and can be taken as a measure of non-staticity, since $b=0$ yields the Schwarzschild limit. The ADM mass, ADM angular momentum and horizon angular velocity read, in terms of these parameters (we set $G=1$),

$$ M=\frac{1}{2}(r_H+2b)\ ,\qquad J=\frac{1}{2}\sqrt{b(b+r_H)}\,(r_H+2b)\ ,\qquad \Omega_H=\frac{1}{r_H+2b}\sqrt{\frac{b}{r_H+b}}\ . \qquad (4.6) $$

Minkowski spacetime, expressed in spheroidal prolate coordinates, is also recovered within this parametrization (Appendix A). Extremality occurs when $r_H=0$. One considers the Proca field equations (2.4) on the background (4.1), using an ansatz given in terms of four functions $(H_1,H_2,H_3,V)$, all of which depend on $(r,\theta)$, and with a harmonic time and azimuthal dependence, which introduces a (positive) frequency, $w$, and the azimuthal harmonic index, $m$ (footnote: we recall that, in the scalar field case, the ansatz has the same harmonic dependence, $e^{i(m\varphi-wt)}$, both for the stationary scalar clouds [49, 52] and for the fully non-linear solutions [1]; in that case the test field analysis admits separation of variables, which does not occur for the Proca case):

$$ A=e^{i(m\varphi-wt)}\left(iVdt+H_1dr+H_2d\theta+iH_3\sin\theta\,d\varphi\right)\ . \qquad (4.7) $$

Here we shall only address the case with $m=1$. The corresponding Proca equations are given in Appendix B.
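The quasi-isotropic parametrization can be sanity-checked against the Boyer–Lindquist one: starting from $(M,J)$, eqs. (4.4)–(4.5) give $(r_H,b)$, and (4.6) must return the same physical quantities, with $\Omega_H$ matching the standard Kerr result $a/(2MR_H)$. The sketch below (my own consistency check, $G=c=1$, arbitrary sample BH) does the round trip:

```python
import math

def kerr_params(M, J):
    """Boyer-Lindquist (M, J) -> (r_H, b) of eq. (4.5)."""
    a = J / M
    RH = M + math.sqrt(M * M - a * a)      # BL horizon radius
    return RH - a * a / RH, a * a / RH     # (r_H, b)

def adm_quantities(rH, b):
    """Eq. (4.6): ADM mass, angular momentum and horizon angular velocity."""
    M = 0.5 * (rH + 2 * b)
    J = 0.5 * math.sqrt(b * (b + rH)) * (rH + 2 * b)
    OmegaH = math.sqrt(b / (rH + b)) / (rH + 2 * b)
    return M, J, OmegaH

M, J = 1.0, 0.6                 # sample sub-extremal Kerr BH (J < M^2)
rH, b = kerr_params(M, J)
M2, J2, OmegaH = adm_quantities(rH, b)

a = J / M
RH = M + math.sqrt(M * M - a * a)
assert abs(M2 - M) < 1e-12 and abs(J2 - J) < 1e-12
assert abs(OmegaH - a / (2 * M * RH)) < 1e-12
```

For this example one finds $r_H=1.6$, $b=0.2$ and $\Omega_H=1/6$, and the extremal limit $J\to M^2$ indeed sends $r_H\to 0$, as stated above.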
These equations are solved with the following set of boundary conditions: i) at infinity,

$$ H_i|_{r=\infty}=V|_{r=\infty}=0\ ; \qquad (4.8) $$

ii) on the symmetry axis,

$$ H_1|_{\theta=0,\pi}=\partial_\theta H_2|_{\theta=0,\pi}=\partial_\theta H_3|_{\theta=0,\pi}=V|_{\theta=0,\pi}=0\ ; \qquad (4.9) $$

iii) at the event horizon, the boundary conditions become simpler by introducing a new radial coordinate $x$, such that the horizon is located at $x=0$. Then one imposes

$$ H_1|_{x=0}=\partial_x H_2|_{x=0}=\partial_x H_3|_{x=0}=0\ ,\qquad \left(V+\frac{w}{m}H_3\sin\theta\right)\Big|_{x=0}=0\ . \qquad (4.10) $$

These boundary conditions are compatible with an approximate construction of the solutions on the boundary of the domain of integration. All the solutions we have constructed so far are symmetric under a reflection along the equatorial plane. This symmetry is imposed by taking

$$ \partial_\theta H_1|_{\theta=\pi/2}=H_2|_{\theta=\pi/2}=\partial_\theta H_3|_{\theta=\pi/2}=\partial_\theta V|_{\theta=\pi/2}=0\ . \qquad (4.11) $$

We remark, however, that odd-parity composite configurations are also likely to exist. Moreover, we observe that for higher values of $m$ the boundary conditions satisfied by some of the potential functions on the symmetry axis are different. We have solved the equations for $m=1$, with the above boundary conditions, on a fixed Kerr BH background, by using the numerical approach described in [75] for non-linear stationary scalar clouds. The input parameters are $w$ for the Proca functions and $(r_H,b)$ for the geometry. Regularity of the Proca field at the horizon imposes the synchronization condition (see the discussions in [52, 76])

$$ w=m\Omega_H\ , \qquad (4.12) $$

which means precisely that these clouds are modes at the threshold of the superradiant instability (unstable modes obey $w<m\Omega_H$). Observe that, with (4.12), the last condition in (4.10) becomes

$$ \xi^\alpha A_\alpha\big|_{r_H}=0\ , \qquad (4.13) $$

where $\xi=\partial_t+\Omega_H\partial_\varphi$ is the event horizon null generator. (Footnote: observe that, for a massless vector field — a Maxwell field — $\xi^\alpha A_\alpha|_{r_H}$ corresponds to the co-rotating electric potential on the horizon, which is non-zero in a gauge where the gauge potential vanishes asymptotically [74]. Observe also that the ansatz (4.7) is preserved by the action of $\xi$: $\mathcal{L}_\xi A=0$.)
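The synchronization condition (4.12) and the superradiance threshold it marks are straightforward to evaluate. A minimal sketch of my own (sample parameters, the Kerr BH with $r_H=1.6$, $b=0.2$ of the parametrization (4.5)): given the background, the stationary-cloud frequency for azimuthal number $m$ is $w=m\Omega_H$, and a co-rotating mode is amplified iff $w<m\Omega_H$.

```python
import math

def omega_H(rH, b):
    """Horizon angular velocity, eq. (4.6)."""
    return math.sqrt(b / (rH + b)) / (rH + 2 * b)

def cloud_frequency(rH, b, m=1):
    """Synchronization condition (4.12): a stationary cloud has w = m Omega_H."""
    return m * omega_H(rH, b)

def is_superradiant(w, m, rH, b):
    """Co-rotating modes with w < m Omega_H are amplified; modes with
    w > m Omega_H are (partly) absorbed; w = m Omega_H is stationary."""
    return w < m * omega_H(rH, b)

rH, b, m = 1.6, 0.2, 1
w_cloud = cloud_frequency(rH, b, m)

assert is_superradiant(0.9 * w_cloud, m, rH, b)       # amplified mode
assert not is_superradiant(1.1 * w_cloud, m, rH, b)   # absorbed mode
```

For a bound state one additionally needs $w<\mu$, so a stationary cloud with these background parameters only exists for a Proca mass $\mu>w_{\rm cloud}$; this is the origin of the existence lines discussed next.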
This is analogous to what occurs in the scalar case, but it is in contrast with the assumptions of Bekenstein's theorem, where it is required that the components of the Proca potential are invariant under $k$ and $m$ separately, cf. (3.1). For fixed $m$ ($m=1$ in the case studied here) and for a given $r_H$ in some interval, one finds a solution — i.e. the numerical iteration converges — for a single value of $b$. Since $w$ is determined by (4.12), the corresponding mass is determined by (4.6). In other words, the regularity of the bound state implies a quantization condition on the background parameters: for each $m$, there is an existence line in a diagram representation of Kerr BHs, corresponding to a 1-dimensional subspace of the 2-dimensional Kerr parameter space. In Fig. 1 we exhibit the $m=1$ existence line (blue dotted line), which forms one of the boundaries of the domain of existence of KBHsPH. As we shall see in Section 6.2, this line is one of the boundaries of the domain of existence of KBHsPH, which demonstrates that, in the limit of small Proca field, KBHsPH reduce to the Kerr solutions that can support stationary Proca clouds, and hence that they are the non-linear realization of the clouds we have just discussed. It is interesting to compare the location of the existence lines for the Proca and scalar cases in the Kerr diagram, Fig. 1. Comparing the existence line for stationary Proca clouds with that for stationary scalar clouds (footnote: the stationary scalar clouds are labelled by three "quantum" numbers; the line shown is the fundamental one, which has the smallest frequency for fixed mass among all such clouds [52]), one observes that the former has smaller values of $w$ for the same mass. This means, in particular, that there are Kerr BHs that are superradiantly stable against all scalar perturbations but are superradiantly unstable against Proca modes. A similar feature has been observed comparing the existence lines for Maxwell and scalar stationary clouds in Kerr-AdS [77].
Finally, let us remark that it was observed in [75] that, including certain classes of self-interactions in the scalar field model, stationary scalar clouds can exist in an open set of the , rather than just on a 1-dimensional line. It is likely that a similar result applies to self-interacting Proca fields, in view of the results in [78]. We close this section by commenting on the node number of these stationary Proca clouds. In the scalar case, the number of nodes of the radial function defining the scalar field profile is for fundamental states and for excited states. This issue becomes more subtle for Proca clouds (and Proca stars), since one has more than one potential component. Nevertheless, we remark that all the states we have obtained so far have (only) one node for the temporal component of the Proca potential , and are thus likely to represent the fundamental modes of the problem. (Footnote 12: the electric potential of the spherically symmetric Proca stars necessarily possesses at least one node [3]. Although the proof there cannot be generalized to the axially symmetric case, we could not find any numerical indication of the existence of nodeless solutions.)

## 5 Spinning Proca stars

The stationary Proca clouds described in the previous section form one of the central ingredients to understand KBHsPH; they also form a part of the boundary of the domain of existence of these BHs, as we shall see in the next section. The other central ingredient corresponds to Proca stars, which will form another part of that boundary. We shall now briefly review the properties of these solutions, recently found in [3], that are relevant for understanding KBHsPH. Proca stars can be either spherically symmetric and static, or axially symmetric and stationary. The former are found by taking the ansatz (3.6) for the line element and (3.7) for the Proca field. With this ansatz, however, there are no BH solutions, as shown in subsection 3.2.
The latter are found by taking a metric ansatz of the form (4.1), with , with unspecified functions, and the Proca potential ansatz (4.7), with unspecified functions . The remaining two (unspecified) functions are replaced as

$$W\to \frac{W}{r} \ , \qquad H_1\to \frac{H_1}{r} \ . \tag{5.1}$$

We find it preferable to work with the new functions when dealing with stars, due to their boundary conditions at the origin (rather than at a horizon); in the remainder of this section we shall always refer to these new functions. The corresponding field equations are solved with the following boundary conditions: i) at infinity, (4.8), together with

$$F_i|_{r=\infty}=W|_{r=\infty}=0 \ , \tag{5.2}$$

ii) on the symmetry axis, (4.9), together with

$$\partial_\theta F_i|_{\theta=0,\pi}=\partial_\theta W|_{\theta=0,\pi}=0 \ , \tag{5.3}$$

iii) at the origin,

$$\partial_r F_i|_{r=0}=W|_{r=0}=H_i|_{r=0}=V|_{r=0}=0 \ . \tag{5.4}$$

One then finds a countable number of families of rotating Proca stars, labelled by , of which the cases with were discussed in [3]. Therein, it was also found that, as for rotating scalar boson stars, the ADM angular momentum and the Noether charge obey the simple relation

$$J=mQ \ . \tag{5.5}$$

In Appendix C we give a detailed derivation and discussion of this relation, which is more subtle in the case of Proca stars than for scalar boson stars. Thus, following [1], we define the normalized Noether charge, , as

$$q\equiv \frac{mQ}{J} \ , \tag{5.6}$$

which is obviously for all Proca stars, but will be for KBHsPH. For , the case on which we focus here, the Proca star solutions appear to form a spiral in an ADM mass, , vs. Proca field frequency, , diagram, starting from for , in which limit the Proca field becomes very diluted and the solution trivializes. At some intermediate frequency, a maximal ADM mass is attained. For this frequency is and the maximal mass is , a slightly larger value than for the corresponding rotating scalar boson star (for which ) [3]. In Fig. 1, we display the Proca star and scalar boson star curves (red solid and dotted lines).
Comparing them, we observe: the slightly larger maximal mass for the Proca stars; that the backbending of the inspiraling curve occurs, for Proca stars, at a larger value of the frequency parameter, and hence they exist in a narrower frequency interval; and that, whereas for scalar boson stars with it was possible to obtain a third branch of solutions (after the second backbending), numerics become very difficult for Proca stars already on the second branch; for example, the function takes very large, negative values. (Footnote 13: in the spherically symmetric case, the results in [3] show the existence of a very similar picture for both Proca stars and scalar boson stars, with the occurrence of secondary branches, together with the corresponding spiral in a -diagram, also in the former case.) Finally, in complete analogy with the scalar boson star case, the Proca star line yields the second boundary of the domain of existence of KBHsPH; the latter reduce to Proca stars when the horizon size vanishes, as will be seen in the next section. Although spinning Proca stars are quite similar to spinning scalar boson stars in many respects, the energy and angular momentum densities of the former exhibit novel features with respect to the latter. Spinning scalar boson stars for generic are often described as an effective mass torus in general relativity [79], since surfaces of constant energy density present a toroidal topology sufficiently close to the centre of the star (see the plots in [2]). Spinning Proca stars, on the other hand, have a different structure for and , as shown in Figs. 2-4 for illustrative cases (with and along the first branch for all examples). For , the Proca star's energy density has a maximum at the origin and a second (smaller) maximum at some radial distance, thus presenting a composite-like structure, Fig. 2 (top left panel): instead of being toroidal, some constant energy surfaces are Saturn-like - Fig. 4 (left panel).
The angular momentum density, on the other hand, is zero at the origin and has two local positive maxima at some radii and one local negative minimum between them, Fig. 2 (top right panel); in particular, this means there is a counter-rotating toroidal-like region. For , the Proca star's energy density vanishes at the origin and two local maxima arise at different radial values, Fig. 3 (top left panel). Thus some constant energy density surfaces are di-ring-like - Fig. 4 (right panel). The angular momentum density is similar to the case, Fig. 3 (top right panel). Finally, we discuss how 'compact' these Proca stars are. Proca stars, like their scalar cousins, have no surface: the Proca field decays exponentially towards infinity. Thus, there is no unique definition of the Proca star's 'radius'. To obtain an estimate we follow the discussion in [80, 45]. Using the 'perimeteral' radius, , a radial coordinate such that a circumference along the equatorial plane has perimeter , we compute , the perimeteral radius containing 99% of the Proca star mass, . Then, we define the inverse compactness by comparing with the Schwarzschild radius associated with 99% of the Proca star's mass, :

$$\text{Compactness}^{-1}\equiv \frac{R_{99}}{2M_{99}} \ . \tag{5.7}$$

The result for the inverse compactness of Proca stars with is exhibited in Figure 5. With this measure, the inverse compactness is always greater than unity; i.e., Proca stars are less compact than BHs, as one would expect, but they are also less compact than comparable scalar boson stars.

## 6 Kerr BHs with Proca Hair

We are now finally ready to tackle KBHsPH. The parallelism with the scalar case, for both the stationary clouds and the solitonic limit, is striking, and one anticipates a high degree of similarity also at the level of the hairy BH solutions. The metric ansatz for constructing KBHsPH is the same as the one used for KBHsSH in [1], and is precisely of the form (4.1) with (4.2), where now all four (unspecified) functions depend on and, again, is a constant.
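The inverse compactness (5.7) is a simple ratio; a one-line helper (ours, not the authors') makes the normalization explicit: any value above 1 means the star is larger than the Schwarzschild BH of the same (99%) mass.

```python
def inverse_compactness(R99, M99):
    """Eq. (5.7): perimeteral radius holding 99% of the mass, R99, compared
    with the Schwarzschild radius, 2*M99, of that same mass (G=c=1)."""
    return R99 / (2.0 * M99)

# For a Schwarzschild BH this ratio is 1 by construction; for the Proca stars
# discussed in the text it is always above 1 (values below are illustrative).
example = inverse_compactness(10.0, 1.0)
```

The numbers passed in are purely illustrative; the actual $R_{99}$, $M_{99}$ values come from the numerical Proca star solutions shown in Figure 5.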
If is finite, then surfaces are timelike for and become null for . Thus, is the location of the event horizon if the metric is regular therein. For , this ansatz reduces to the one discussed in the previous section for Proca stars, except for the replacement (5.1). The line element form used for Proca stars is useful to tackle the behaviour at the origin, whereas the one used for BHs is useful to tackle the behaviour on a rotating horizon, wherein reduces to the horizon angular velocity, . Indeed, following null geodesic generators () on the horizon (), assuming is finite therein, implies and thus , the angular velocity as measured by the observer at infinity. The Proca field ansatz is the same as for the stationary Proca clouds (and for Proca stars, up to the replacement (5.1)), cf. (4.7). This, again, introduces two parameters: , . As for Proca stars, we shall focus here on , and take the synchronization condition (4.12), which we can rewrite in this context as (for general )

$$\frac{w}{m}=W(r_H)=\Omega_H \ . \tag{6.1}$$

This condition was deduced in the context of a test field on the Kerr background and can be related to the threshold of superradiance. But it also has a different origin. In Appendix B, we present the Einstein tensor and the Proca energy-momentum tensor associated with the ansatz discussed in this section. A careful inspection of the components of the energy-momentum tensor that have inverse powers of (footnote 14: a similar analysis can be made at the level of the components in an orthonormal frame, with similar conclusions), and hence may diverge at the horizon, shows that, taking into account (4.10), finiteness of the energy-momentum tensor components at requires

$$\frac{w-mW(r_H)}{N(r_H)} \tag{6.2}$$

to be finite, and hence requires (6.1) (the same can be observed in the Einstein equations presented in [45]). It is interesting to remark that this finiteness condition (6.1) is not necessarily related to superradiance, as the higher dimensional examples in [76, 85] illustrate.
The Einstein-Proca equations are solved with the following boundary conditions (which, again, we have found to be compatible with an approximate construction of the solutions on the boundary of the domain of integration): i) at infinity, the same as for Proca stars, (4.8) and (5.2); ii) on the symmetry axis, the same as for Proca stars, (4.9) and (5.3); iii) at the horizon, using again the new radial coordinate , a power series expansion near implies (4.10), together with

$$\partial_x F_i|_{x=0}=0 \ , \qquad W|_{x=0}=\Omega_H \ . \tag{6.3}$$

The Einstein-Proca equations for KBHsPH are quite involved (Appendix B). They are solved numerically, subject to the above boundary conditions, by using the elliptic PDE solver fidisol/cadsol [81], based on a finite differences method in conjunction with the Newton-Raphson procedure. A description of the method for the case of KBHsSH can be found in [45]; the procedure in the case at hand is analogous.

### 6.1 Physical Quantities

In the following we describe some physical quantities that will be monitored from the numerical solutions we have obtained. The ADM mass, , and ADM angular momentum, , are read off from the asymptotic expansion of the appropriate metric components:

$$g_{tt}=-1+\frac{2M}{r}+\dots \ , \qquad g_{\varphi t}=-\frac{2J}{r}\sin^2\theta+\dots \ . \tag{6.4}$$

We also compute the horizon mass and angular momentum by using the appropriate Komar integrals associated with the corresponding Killing vector fields and :

$$M_H=-\frac{1}{8\pi}\oint_H dS_{\alpha\beta}\,D^\alpha k^\beta \ , \qquad J_H=\frac{1}{16\pi}\oint_H dS_{\alpha\beta}\,D^\alpha m^\beta \ . \tag{6.5}$$

Of course, and can also be computed as Komar integrals at infinity. Then, applying Gauss's law, one obtains a relation with and , together with volume integrals on a spacelike surface with a boundary at the (spatial section of the) horizon. By making use of the Killing identity and the Einstein equations one obtains:

$$M=M_H-2\int_\Sigma dS_\alpha\left(T^\alpha_{\ \beta}k^\beta-\frac{1}{2}T k^\alpha\right)\equiv M_H+M^{(P)} \ . \tag{6.6}$$

This defines the energy stored in the Proca field (outside the horizon):

$$M^{(P)}\equiv -\int_\Sigma dr\, d\theta\, d\varphi \left(2T^t_{\ t}-T^\alpha_{\ \alpha}\right)\sqrt{-g} \ . $$
(6.7)

Proceeding similarly for the angular momentum one obtains:

$$J=J_H+J^{(P)} \ , \qquad J^{(P)}\equiv \int_\Sigma dr\, d\theta\, d\varphi\, T^t_{\ \varphi}\sqrt{-g} \ , \tag{6.8}$$

which defines the angular momentum stored in the Proca field. At this point, an interesting distinction arises with respect to the scalar case. Whereas for KBHsSH the angular momentum stored in the scalar field relates to the Noether charge in precisely the same way as for rotating scalar boson stars, for KBHsPH the relation between and the Noether charge (2.9) includes an extra boundary term, eq. (6.9) (see Appendix C and eq. (C.6)), which generalizes relation (5.5) to the case of hairy BHs. A similar relation can be written for (see Appendix C and eq. (C.8)):

$$M^{(P)}=2wQ-\mu^2 U+\oint_H\left[\frac{1}{2}\left(A_\beta \bar F^{r\beta}+\bar A_\beta F^{r\beta}\right)-\left(A_t\bar F^{rt}+\bar A_t F^{rt}\right)\right]dS_r \ , \tag{6.10}$$

with

$$U\equiv \int_\Sigma dr\, d\theta\, d\varphi\, A_\alpha \bar A^\alpha \sqrt{-g} \ . \tag{6.11}$$

The horizon temperature and event horizon area of the KBHsPH solutions are computed by standard relations, which specialize to:

$$T_H=\frac{1}{4\pi r_H}\,e^{(F_0-F_1)}\big|_{r=r_H} \ , \qquad A_H=2\pi r_H^2\int_0^\pi d\theta\, \sin\theta\, e^{(F_1+F_2)}\big|_{r=r_H} \ . \tag{6.12}$$

Then, the ADM quantities are related with , where is the horizon entropy, through a Smarr formula

$$M=2T_H S+2\Omega_H J_H+M^{(P)} \ . \tag{6.13}$$

Also, the variation of can be expressed by the first law:

$$dM=T_H dS+\Omega_H dJ \ . \tag{6.14}$$

We note that, by making use of the relations (6.9) and (6.10), the Smarr formula (6.13) can be written in a Kerr-like form

$$M=2T_H S+2\Omega_H J-\mu^2 U \ , \tag{6.15}$$

which renders explicit the fact that the solutions are supported by a nonzero mass term of the Proca field. Finally, we observe that Proca stars satisfy a simple relation, which results again from (6.9) and (6.10) (footnote 15: one can similarly show that KBHsSH and scalar boson stars satisfy relations analogous to (6.15) and (6.16), respectively):

$$M=2wQ-\mu^2 U=2\frac{w}{m}J-\mu^2 U \ . \tag{6.16}$$

### 6.2 The domain of existence and phase space

We have scanned the domain of existence of KBHsPH by varying for fixed lines (or vice-versa), in between the minimum frequency and the maximal one . The result for the family of KBHsPH is shown in Fig.
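As a consistency aside (ours, not part of the paper's numerics): in the vacuum Kerr limit the Proca terms vanish, $M^{(P)}=0=U$, and the Smarr formula (6.13) can be checked against the standard closed-form Kerr horizon quantities.

```python
import math

def kerr_thermo(M, a):
    """Standard closed-form horizon quantities of a vacuum Kerr BH (G=c=1)."""
    rH = M + math.sqrt(M**2 - a**2)
    S = math.pi * (rH**2 + a**2)                      # entropy, S = A_H / 4
    TH = (rH - M) / (2.0 * math.pi * (rH**2 + a**2))  # Hawking temperature
    OmegaH = a / (rH**2 + a**2)                       # horizon angular velocity
    J = M * a                                         # angular momentum
    return S, TH, OmegaH, J

# Smarr formula (6.13) with M^(P) = 0 (no Proca field): M = 2*TH*S + 2*OmegaH*J
M, a = 1.0, 0.7
S, TH, OmegaH, J = kerr_thermo(M, a)
smarr_mass = 2*TH*S + 2*OmegaH*J
```

For any $|a|\le M$ the right-hand side reproduces the ADM mass exactly; for the hairy solutions the extra $M^{(P)}$ (or $-\mu^2 U$) term is what the paper monitors numerically.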
6 (left panel), together with the analogous family of KBHsSH (right panel), the former obtained from over five thousand numerical points. Based on the discussions of KBHsSH [1, 45, 58], and as already partly discussed, the domain of existence of KBHsPH should be bounded by three lines: the Proca clouds existence line discussed in Section 4, the Proca star line discussed in Section 5, and the line of extremal KBHsPH (i.e., of zero temperature). So far, the last of the three was only obtained by extrapolating the non-extremal solutions, as our attempts to construct the extremal KBHsPH solutions by directly solving the Einstein-Proca field equations were unsuccessful (unlike in the scalar case, as reported in [45]). For this reason we have chosen not to display this line in Fig. 6 for the Proca case. Another technical difficulty arises in trying to connect the set of (extrapolated) extremal solutions with the set of Proca stars. As for the case of KBHsSH, these two curves are likely to meet at a critical point at the center of the Proca star spiral; however, validation of this hypothesis is a numerical challenge (also for KBHsSH). Concerning numerical errors, the PDE solver we have used provides error estimates for each unknown function, which allows judging the quality of the computed solution. The numerical error for the solutions reported in this work is typically estimated to be of order . As a further check of the numerical procedure, we have verified that the families of solutions satisfy, with very good accuracy, the first law of thermodynamics and also the Smarr relation, typically at that same order. We have also monitored the violation of the gauge condition together with the constraint Einstein equations; typically, these provide much lower estimates for the numerical errors. As a comparative comment, the overall quality of the solutions is, however, not as high for KBHsPH as for KBHsSH.
Additionally, the source of the difficulties we have encountered in constructing extremal and close-to-extremal solutions is absent in the scalar case. Typically, for the Proca case, the solver stops converging in the near-extremal regime, although the error estimates for the last solutions are still small. It is likely that another metric parametrization is required to tackle this issue. We also remark that there may be a more involved landscape of excited solutions, in view of the four vector potentials. (Footnote 16: in fact, we have observed that the solver frequently "jumps" to one of these excited configurations, which is not too far in the parameter space.) In Fig. 6 we have singled out four particular solutions for each case, denoted I, III, IV and V. The numerical data for these four solutions, together with the data for a vacuum Kerr solution with the same ADM mass and angular momentum as that of configuration III, for each case, has been made publicly available for community use [82, 83]. The corresponding parameters are detailed in Appendix D. In Fig. 7 we exhibit the phase space, i.e. an ADM mass vs. ADM angular momentum diagram, for KBHsPH (left panel) and, as a comparison, the corresponding diagram for KBHsSH (right panel). The two plots are quite similar and the features we wish to emphasize are that, as in the scalar case, one observes violation of the Kerr bound (in terms of ADM quantities) and non-uniqueness: there are both hairy and vacuum Kerr BHs with the same ADM mass and angular momentum (see Appendix D). The violation of the Kerr bound also occurs in terms of horizon quantities, as shown in Fig. 8 (right panel). For these solutions, the conjecture put forward in [84] concerning the horizon linear velocity , as defined therein, holds: despite violating the Kerr bound both in terms of ADM and horizon quantities, never exceeds the speed of light. We recall how is defined, for asymptotically flat, stationary and axi-symmetric spacetimes.
On a spatial section of the event horizon one computes the proper length of all closed orbits of . Let be the maximum of all such proper lengths; the corresponding circumferential radius is this maximal proper length divided by $2\pi$. The horizon linear velocity is this circumferential radius multiplied by the horizon angular velocity [84].

### 6.3 Energy distribution and horizon quantities

As for their scalar cousins, KBHsPH can be thought of as a bound state of a horizon with a Proca star. Thus, the matter energy density distribution around the horizon will resemble that of (some) Proca stars. In Fig. 9 we exhibit the energy density and the angular momentum density as a function of the radial coordinate, for different angular sections, for an example of a KBHPH. As for the Proca stars, both the energy density and the angular momentum density can have more than one maximum outside the horizon, and the latter can also have regions with a different sign. Thus, outside KBHsPH there are counter-rotating regions. In Fig. 10 a constant Proca energy density surface is exhibited in a 3D plot. The behaviour of the energy density and angular momentum density on the horizon is more clearly seen in Fig. 11. Finally, in Fig. 12 we exhibit the variation of the horizon area with the horizon temperature along sequences of solutions with constant horizon angular velocity (or frequency). For both KBHsPH (left panel) and KBHsSH (right panel) one can see three different types of behaviour, which are easy to interpret referring back to Fig. 6. For large values of , the solutions interpolate between the Kerr existence line and the corresponding (Proca or scalar boson) star line (for which ). For intermediate values of , the solutions interpolate between the extremal BHs line (for which ) and the corresponding star line. Finally, for sufficiently small values of , the solutions interpolate between two stars, and thus start and end at .
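For vacuum Kerr this construction can be made fully explicit (an illustrative aside, not from the paper): the maximal horizon circumference is the equatorial one, $4\pi M$, so the circumferential radius is $2M$ and the horizon linear velocity reduces to $a/r_H$, which never exceeds the speed of light for $|a|\le M$.

```python
import math

def kerr_horizon_linear_velocity(M, a):
    """Horizon linear velocity v_H = (circumferential radius) x OmegaH for
    vacuum Kerr: the equatorial horizon circumference is 4*pi*M, so R_c = 2M."""
    rH = M + math.sqrt(M**2 - a**2)
    OmegaH = a / (rH**2 + a**2)
    Rc = 2.0 * M
    return Rc * OmegaH               # simplifies to a / rH, <= 1 for |a| <= M
```

The hairy solutions are the non-trivial case: there $v_H$ must be computed from the numerical metric functions, and the conjecture of [84] is that it still stays below 1, as the text reports.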
## 7 Discussion

It has long been established that stationary, asymptotically flat BHs in Einstein's gravity minimally coupled to one or many real, Abelian Proca fields cannot have Proca hair. The basic theorem supporting this idea, due to Bekenstein [12, 13], assumes, however, that the Proca field inherits the spacetime isometries. In this paper we have shown that, dropping this assumption, Kerr BHs with Proca hair exist under two conditions: i) The Proca field is complex, or equivalently, there are two real Proca fields with the same mass. The solutions in this paper can, moreover, be generalized to an arbitrary number of complex Proca fields (any even number of real Proca fields), without mutual interactions, and all of them minimally coupled to gravity. Here, however, we focus on a model with a single complex Proca field. ii) The complex Proca field has a harmonic time dependence, as in the ansatz (4.7), with the frequency and azimuthal harmonic index obeying the synchronization condition (4.12). These two assumptions, together, allow the two real Proca fields to oscillate with the same frequency but opposite phases, hence cancelling out gravitational radiation emission (as well as Proca radiation emission). It remains an open question whether the same could be achieved with a single real Proca field, especially in view of the result in [77], since such a real Proca field already has two independent modes. The existence of KBHsPH – to the best of our knowledge, the first example of (fully non-linear) BHs with (Abelian) vector hair – is anchored in the synchronization/superradiance zero-mode condition (i.e., the field should co-rotate with the BH horizon). All previously constructed examples which employed this mechanism have scalar hair, both in four spacetime dimensions [1, 45, 59, 58] and in higher dimensions [76, 85], including the example in five-dimensional asymptotically anti-de Sitter space found in [86].
This further shows the generality of the mechanism and lends support to the conjecture in [1, 2]. We also remark that the Proca model considered here can be regarded as a proxy for more realistic models with a gauged scalar field, where the gauge fields acquire a mass dynamically, via the Higgs mechanism. A familiar example in this direction is the non-Abelian Proca model, whose solutions already contain all the basic properties of the Yang-Mills-Higgs sphalerons in the Standard Model [63]. As such, the results in this work suggest that one should reconsider the no-hair theorem for the Abelian-Higgs model [88]. Several direct generalizations/applications of these solutions are possible. At the level of constructing further solutions, we anticipate that self-interacting Proca hair will lead to new solutions which, if the scalar field case is a good guide [58], can have a much larger ADM mass (but not horizon mass); hybrid solutions with scalar plus Proca hair are also possible. At the level of possible astrophysics phenomenology, it would be interesting to look in detail at the geodesic flow, in particular at the frequency at the innermost stable circular orbit (ISCO) and the quadrupoles, as well as at the lensing and shadows of these new BHs, following [87] (see also the review [89]). Work in this direction is underway. Finally, it is still commonplace to find in the current literature statements that stationary BHs in GR are described solely by mass, angular momentum and charge. We want to emphasize that the examples of Kerr BHs with scalar and Proca hair show that this is not true as a generic statement for GR, even if physical matter – i.e., matter obeying all energy conditions – is required. These examples show that Noether charges, rather than charges associated with Gauss laws, are also permitted in non-pathological stationary, asymptotically flat BH solutions. The main outstanding question is whether, in a real dynamical process, these Noether charges can survive.
## Acknowledgements

We would like to thank Richard Brito and Vitor Cardoso for a fruitful collaboration on Proca stars. We also thank J. Rosa, M. Sampaio and M. Wang for discussions on Proca fields. C. H. and E. R. acknowledge funding from the FCT-IF programme. H.R. is supported by the grant PD/BD/109532/2015 under the MAP-Fis Ph.D. programme. This work was partially supported by the H2020-MSCA-RISE-2015 Grant No. StronGrHEP-690904, and by the CIDMA project UID/MAT/04106/2013. Computations were performed at the Blafis cluster, in Aveiro University.

## Appendix A Spheroidal prolate coordinates for Kerr

The new coordinate system for Kerr (4.1), with the functions (4.3), first introduced in [45], actually reduces to spheroidal prolate coordinates in the Minkowski space limit, but with a non-standard radial coordinate. To see this, we observe that, from (4.6), occurs when . Then, from the expressions (4.3), the metric (4.1) becomes

$$ds^2=-dt^2+\left[N(r)+\frac{b^2}{r^2}\sin^2\theta\right]\left[\frac{dr^2}{N(r)}+r^2 d\theta^2\right]+N(r)\,r^2\sin^2\theta\, d\varphi^2 \ , \qquad N(r)\equiv 1+\frac{2b}{r} \ . \tag{A.1}$$

This can be converted to the standard Minkowski Cartesian quadratic form by the spatial coordinate transformation

$$x=r\sqrt{N(r)}\,\sin\theta\cos\varphi \ , \qquad y=r\sqrt{N(r)}\,\sin\theta\sin\varphi \ , \ \dots$$
https://nbviewer.org/github/bmclab/bmc/blob/master/notebooks/KinematicChain.ipynb
# Kinematic chain in a plane (2D)¶ Marcos Duarte, Renato Naville Watanabe Laboratory of Biomechanics and Motor Control Federal University of ABC, Brazil Kinematic chain refers to an assembly of rigid bodies (links) connected by joints that is the mathematical model for a mechanical system which in turn can represent a biological system such as the human arm (Wikipedia). The term chain refers to the fact that the links are constrained by their connections (typically, by a hinge joint which is also called pin joint or revolute joint) to other links. As consequence of this constraint, a kinematic chain in a plane is an example of circular motion of a rigid object. Chapter 16 of Ruina and Rudra's book is a good formal introduction on the topic of circular motion of a rigid object. However, in this notebook we will not employ the mathematical formalism introduced in that chapter - the concept of a rotating reference frame and the related rotation matrix - we cover these subjects in the notebooks Time-varying frame of reference and Rigid-body transformations (2D). Now, we will describe the kinematics of a chain in a Cartesian coordinate system using trigonometry and calculus. This approach is simpler and more intuitive but it gets too complicated for a kinematic chain with many links or in the 3D space. For such more complicated problems, it would be recommended using rigid transformations (see for example, Siciliano et al. (2009)). We will deduce the kinematic properties of kinematic chains algebraically using Sympy, a Python library for symbolic mathematics. And in Sympy we could have used the mechanics module, a specific module for creation of symbolic equations of motion for multibody systems, but let's deduce most of the stuff by ourselves to understand the details. ## Properties of kinematic chains¶ For a kinematic chain, the base is the extremity (origin) of a kinematic chain which is typically considered attached to the ground, body or fixed. 
The endpoint is the other extremity (end) of a kinematic chain and typically can move. In robotics, the term end-effector is used and usually refers to the last link (rigid body) in this chain. In topological terms, a kinematic chain is termed open when there is only one sequence of links connecting the two ends of the chain. Otherwise it's termed closed and in this case a sequence of links forms a loop. A kinematic chain can be classified as serial or parallel or a mix of both. In a serial chain the links are connected in a serial order. A serial chain is an open chain, otherwise it is a parallel chain or a branched chain (e.g., hand and fingers). Although the definition above is clear and classic in mechanics, it is unfortunately not the definition used by health professionals (clinicians and athletic trainers) when describing human movement. They refer to human joints and segments as a closed or open kinematic (or kinetic) chain simply if the distal segment (typically the foot or hand) is fixed (closed chain) or not (open chain). In this text we will be consistent with mechanics, but keep in mind this difference when interacting with clinicians and athletic trainers. Another important term to characterize a kinematic chain is degree of freedom (DOF). In mechanics, the degree of freedom of a mechanical system is the number of independent parameters that define its configuration or that determine the state of a physical system. A particle in 3D space has three DOFs because we need three coordinates to specify its position. A rigid body in 3D space has six DOFs because we need three coordinates of one point at the body to specify its position and three angles to specify its orientation in order to completely define the configuration of the rigid body. For a link attached to a fixed body by a hinge joint in a plane, all we need to define the configuration of the link is one angle and then this link has only one DOF.
A kinematic chain with two links in a plane has two DOFs, and so on. The mobility of a kinematic chain is its total number of degrees of freedom. The redundancy of a kinematic chain is its mobility minus the number of degrees of freedom of the endpoint. First, let's study the case of a system composed by one planar hinge joint and one link, which technically it's not a chain but it will be useful to review (or introduce) key concepts. First, let's import the necessary libraries from Python and its ecosystem: In [1]: import numpy as np import matplotlib.pyplot as plt %matplotlib inline import seaborn as sns sns.set_context("notebook", font_scale=1.2, rc={"lines.linewidth": 2, "lines.markersize": 10}) from IPython.display import display, Math from sympy import Symbol, symbols, Function from sympy import Matrix, simplify, lambdify, expand, latex from sympy import diff, cos, sin, sqrt, acos, atan2, atan, Abs from sympy.vector import CoordSys3D from sympy.physics.mechanics import dynamicsymbols, mlatex, init_vprinting init_vprinting() import sys sys.path.insert(1, r'./../functions') # add to pythonpath We need to define a Cartesian coordinate system and the symbolic variables, $t$, $\ell$, $\theta$ (and make $\theta$ a function of time): In [2]: G = CoordSys3D('') t = Symbol('t') l = Symbol('ell', real=True, positive=True) # type \theta and press tab for the Greek letter θ θ = dynamicsymbols('theta', real=True) # or Function('theta')(t) Using trigonometry, the endpoint position in terms of the joint angle and link length is: In [3]: r_p = l*cos(θ)*G.i + l*sin(θ)*G.j + 0*G.k r_p Out[3]: $\displaystyle (\ell \cos{\left(\theta{\left(t \right)} \right)})\mathbf{\hat{i}_{}} + (\ell \sin{\left(\theta{\left(t \right)} \right)})\mathbf{\hat{j}_{}}$ With the components: In [4]: r_p.components Out[4]: $\displaystyle \left\{ \mathbf{\hat{i}_{}} : \ell \operatorname{cos}\left(\theta\right), \ \mathbf{\hat{j}_{}} : \ell \operatorname{sin}\left(\theta\right)\right\}$ ### Forward and 
inverse kinematics¶ Computing the configuration of a link or a chain (including the endpoint location) from the joint parameters (joint angles and link lengths) as we have done is called forward or direct kinematics. If the linear coordinates of the endpoint position are known (for example, if they are measured with a motion capture system) and one wants to obtain the joint angle(s), this process is known as inverse kinematics. For the one-link system above: $$\theta = \arctan\left(\frac{y_P}{x_P}\right)$$ (in numerical practice, the atan2 function should be used to recover $\theta$ in the correct quadrant). ### Matrix representation of the kinematics¶ The mathematical manipulation will be easier if we use the matrix formalism (and let's drop the explicit dependence on $t$): In [6]: r = Matrix((r_p.dot(G.i), r_p.dot(G.j))) r Out[6]: $\displaystyle \left[\begin{matrix}\ell \operatorname{cos}\left(\theta\right)\\\ell \operatorname{sin}\left(\theta\right)\end{matrix}\right]$ Using the matrix formalism will simplify things, but we will lose some of the Sympy methods for vectors (for instance, the variable r_p has a method magnitude and the variable r does not). If you prefer, you can keep the pure vector representation and just switch to matrix representation when displaying a variable: In [7]: r_p.to_matrix(G) Out[7]: $\displaystyle \left[\begin{matrix}\ell \operatorname{cos}\left(\theta\right)\\\ell \operatorname{sin}\left(\theta\right)\\0\end{matrix}\right]$ The third element of the matrix above refers to the $\hat{\mathbf{k}}$ component, which is zero for the present case (planar movement). ## Differential kinematics¶ Differential kinematics gives the relationship between the joint velocities and the corresponding endpoint linear velocity. This mapping is described by a matrix, termed Jacobian matrix, which depends on the kinematic chain configuration and is of great use in the study of kinematic chains. First, let's deduce the endpoint velocity without using the Jacobian and then we will see how to calculate the endpoint velocity using the Jacobian matrix.
The velocity of the endpoint can be obtained by the first-order derivative of the position vector. The derivative of a vector is obtained by differentiating each vector component: $$\frac{\mathrm{d}\overrightarrow{\mathbf{r}}}{\mathrm{d}t} = \large \begin{bmatrix} \frac{\mathrm{d}x_P}{\mathrm{d}t} \\ \frac{\mathrm{d}y_P}{\mathrm{d}t} \\ \end{bmatrix}$$ Note that the derivative is with respect to time but $x_P$ and $y_P$ depend explicitly on $\theta$ and it's $\theta$ that depends on $t$ ($x_P$ and $y_P$ depend implicitly on $t$). To calculate this type of derivative we will use the chain rule. Chain rule For a variable $f$ which is a function of a variable $g$, which in turn is a function of $t$, $f(g(t))$ or $(f\circ g)(t)$, the derivative of $f$ with respect to $t$ is (using Lagrange's notation): $$(f\circ g)^{'}(t) = f'(g(t)) \cdot g'(t)$$ Or using what is known as Leibniz's notation: $$\frac{\mathrm{d}f}{\mathrm{d}t} = \frac{\mathrm{d}f}{\mathrm{d}g} \cdot \frac{\mathrm{d}g}{\mathrm{d}t}$$ If $f$ is a function of two other variables which are both functions of $t$, $f(x(t),y(t))$, the chain rule for this case is: $$\frac{\mathrm{d}f}{\mathrm{d}t} = \frac{\partial f}{\partial x} \cdot \frac{\mathrm{d}x}{\mathrm{d}t} + \frac{\partial f}{\partial y} \cdot \frac{\mathrm{d}y}{\mathrm{d}t}$$ Where $df/dt$ represents the total derivative and $\partial f / \partial x$ represents the partial derivative of a function. Product rule The derivative of the product of two functions is: $$(f \cdot g)' = f' \cdot g + f \cdot g'$$ ### Linear velocity of the endpoint¶ For the planar one-link case, the linear velocity of the endpoint is: In [8]: v = r.diff(t) v Out[8]: $\displaystyle \left[\begin{matrix}- \ell \operatorname{sin}\left(\theta\right) \dot{\theta}\\\ell \operatorname{cos}\left(\theta\right) \dot{\theta}\end{matrix}\right]$ Where we used Newton's notation for differentiation.
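The chain rule stated above can be verified symbolically with SymPy. This is a small illustrative sketch; the particular choices of $f$ and $g$ here are arbitrary, not taken from the notebook:

```python
import sympy as sp

t, u = sp.symbols('t u')
g = sp.sin(t)          # an arbitrary inner function g(t)
f = u**3               # an arbitrary outer function f(u)

direct = sp.diff(f.subs(u, g), t)                  # d/dt f(g(t)) computed directly
chain = sp.diff(f, u).subs(u, g) * sp.diff(g, t)   # f'(g(t)) · g'(t), the chain rule
print(sp.simplify(direct - chain))  # 0
```

Both routes give $3\sin^2(t)\cos(t)$, as expected.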
Note that $\dot{\theta}$ represents the unknown angular velocity of the joint; this is why the derivative of $\theta$ is not explicitly solved. The magnitude or Euclidean norm of the vector $\overrightarrow{\mathbf{v}}$ is: $$||\overrightarrow{\mathbf{v}}||=\sqrt{v_x^2+v_y^2}$$ In [9]: simplify(sqrt(v[0]**2 + v[1]**2)) Out[9]: $\displaystyle \ell \sqrt{\dot{\theta}^{2}}$ Which is $\ell|\dot{\theta}|$. We could have used the function norm of Sympy, but the output does not simplify nicely: In [10]: simplify(v.norm()) Out[10]: $\displaystyle \ell \sqrt{\left|{\operatorname{sin}\left(\theta\right) \dot{\theta}}\right|^{2} + \left|{\operatorname{cos}\left(\theta\right) \dot{\theta}}\right|^{2}}$ The direction of $\overrightarrow{\mathbf{v}}$ is tangent to the circular trajectory of the endpoint, as can be seen in the figure below where its components are also shown. ### Linear acceleration of the endpoint¶ The acceleration of the endpoint can be given by the second-order derivative of the position or by the first-order derivative of the velocity. Using the chain and product rules for differentiation, the linear acceleration of the endpoint is: In [11]: acc = v.diff(t, 1) acc Out[11]: $\displaystyle \left[\begin{matrix}- \ell \operatorname{sin}\left(\theta\right) \ddot{\theta} - \ell \operatorname{cos}\left(\theta\right) \dot{\theta}^{2}\\- \ell \operatorname{sin}\left(\theta\right) \dot{\theta}^{2} + \ell \operatorname{cos}\left(\theta\right) \ddot{\theta}\end{matrix}\right]$ Examining the terms of the expression for the linear acceleration, we see there are two types of them: a term (in each direction) proportional to the angular acceleration $\ddot{\theta}$ and another term proportional to the square of the angular velocity $\dot{\theta}^{2}$.
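Before decomposing the acceleration, the speed result $\ell|\dot{\theta}|$ above can be checked numerically (an illustrative sketch; the values of $\ell$, $\theta$ and $\dot{\theta}$ are arbitrary):

```python
import numpy as np

ell = 0.5                 # link length [m] (arbitrary value)
theta = np.deg2rad(40)    # joint angle (arbitrary)
thetad = 2.0              # angular velocity [rad/s] (arbitrary)

# velocity components from the derivative computed above
vx = -ell*np.sin(theta)*thetad
vy = ell*np.cos(theta)*thetad
print(np.isclose(np.hypot(vx, vy), ell*abs(thetad)))  # True
```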
#### Tangential acceleration¶ The term proportional to the angular acceleration, $a_t$, is always tangent to the trajectory of the endpoint (see figure below) and its magnitude or Euclidean norm is: In [12]: A = θ.diff(t, 2) simplify(sqrt(expand(acc[0]).coeff(A)**2 + expand(acc[1]).coeff(A)**2))*A Out[12]: $\displaystyle \ell \ddot{\theta}$ #### Centripetal acceleration¶ The term proportional to the square of the angular velocity, $a_c$, always points to the joint, the center of the circular motion (see figure below); because of this, it is called the centripetal acceleration. Its magnitude is: In [13]: A = θ.diff(t)**2 simplify(sqrt(expand(acc[0]).coeff(A)**2+expand(acc[1]).coeff(A)**2))*A Out[13]: $\displaystyle \ell \dot{\theta}^{2}$ This means that there will be a linear acceleration even if the angular acceleration is zero, because although the magnitude of the linear velocity is constant in this case, its direction varies (due to the centripetal acceleration). Let's plot some simulated data to have an idea of the one-link kinematics. Consider $\ell=1\:m$, $\theta_i=0^o$, $\theta_f=90^o$, a movement duration of $1\:s$, and a minimum-jerk movement.
In [14]: θ_i, θ_f, d = 0, np.pi/2, 1 ts = np.arange(0.01, 1.01, .01) mjt = θ_i + (θ_f - θ_i)*(10*(t/d)**3 - 15*(t/d)**4 + 6*(t/d)**5) ang = lambdify(t, mjt, 'numpy'); ang = ang(ts) vang = lambdify(t, mjt.diff(t,1), 'numpy'); vang = vang(ts) aang = lambdify(t, mjt.diff(t,2), 'numpy'); aang = aang(ts) jang = lambdify(t, mjt.diff(t,3), 'numpy'); jang = jang(ts) b, c, d, e = symbols('b c d e') dicti = {l:1, θ:b, θ.diff(t, 1):c, θ.diff(t, 2):d, θ.diff(t, 3):e} r2 = r.subs(dicti); rxfu = lambdify(b, r2[0], modules = 'numpy') ryfu = lambdify(b, r2[1], modules = 'numpy') v2 = v.subs(dicti); vxfu = lambdify((b, c), v2[0], modules = 'numpy') vyfu = lambdify((b, c), v2[1], modules = 'numpy') acc2 = acc.subs(dicti); axfu = lambdify((b, c, d), acc2[0], modules = 'numpy') ayfu = lambdify((b, c, d), acc2[1], modules = 'numpy') jerk = r.diff(t,3) jerk2 = jerk.subs(dicti); jxfu = lambdify((b, c, d, e), jerk2[0], modules = 'numpy') jyfu = lambdify((b, c, d, e), jerk2[1], modules = 'numpy') In [15]: fig, hax = plt.subplots(2, 4, sharex = True, figsize=(14, 7)) hax[0, 0].plot(ts, ang*180/np.pi, linewidth=3) hax[0, 0].set_title('Angular displacement [ $^o$]'); hax[0, 0].set_ylabel('Joint') hax[0, 1].plot(ts, vang*180/np.pi, linewidth=3) hax[0, 1].set_title('Angular velocity [ $^o/s$]'); hax[0, 2].plot(ts, aang*180/np.pi, linewidth=3) hax[0, 2].set_title('Angular acceleration [ $^o/s^2$]'); hax[0, 3].plot(ts, jang*180/np.pi, linewidth=3) hax[0, 3].set_title('Angular jerk [ $^o/s^3$]'); hax[1, 0].plot(ts, rxfu(ang), 'r', linewidth=3, label = 'x') hax[1, 0].plot(ts, ryfu(ang), 'k', linewidth=3, label = 'y') hax[1, 0].set_title('Linear displacement [$m$]'); hax[1, 0].legend(loc='best').get_frame().set_alpha(0.8) hax[1, 0].set_ylabel('Endpoint') hax[1, 1].plot(ts,vxfu(ang, vang), 'r', linewidth=3) hax[1, 1].plot(ts,vyfu(ang, vang), 'k', linewidth=3) hax[1, 1].set_title('Linear velocity [$m/s$]'); hax[1, 2].plot(ts,axfu(ang, vang, aang), 'r', linewidth=3) hax[1, 2].plot(ts,ayfu(ang, vang, 
aang), 'k', linewidth=3) hax[1, 2].set_title('Linear acceleration [$m/s^2$]'); hax[1, 3].plot(ts, jxfu(ang, vang, aang, jang), 'r', linewidth=3) hax[1, 3].plot(ts, jyfu(ang, vang, aang, jang), 'k', linewidth=3) hax[1, 3].set_title('Linear jerk [$m/s^3$]'); fig.suptitle('Minimum jerk trajectory kinematics of one-link system', fontsize=20); for i, hax2 in enumerate(hax.flat): hax2.locator_params(nbins=5) hax2.grid(True) if i > 3: hax2.set_xlabel('Time [s]'); ### Jacobian matrix¶ The Jacobian matrix is the matrix of all first-order partial derivatives of a vector-valued function $F$: $$F(q_1,...q_n) = \begin{bmatrix}F_{1}(q_1,...q_n)\\ \vdots\\ F_{m}(q_1,...q_n)\\ \end{bmatrix}$$ In a general form, the Jacobian matrix of the function $F$ is: $$\mathbf{J}= \large \begin{bmatrix} \frac{\partial F_{1}}{\partial q_{1}} & ... & \frac{\partial F_{1}}{\partial q_{n}} \\ \vdots & \ddots & \vdots \\ \frac{\partial F_{m}}{\partial q_{1}} & ... & \frac{\partial F_{m}}{\partial q_{n}} \\ \end{bmatrix}$$ ### Derivative of a vector-valued function using the Jacobian matrix¶ The time-derivative of a vector-valued function $F$ can be computed using the Jacobian matrix: $$\frac{dF}{dt} = \mathbf{J} \cdot \begin{bmatrix}\frac{d q_1}{dt}\\ \vdots\\ \frac{d q_n}{dt}\\ \end{bmatrix}$$ ### Jacobian matrix in the context of kinematic chains¶ In the context of kinematic chains, the Jacobian is a matrix of all first-order partial derivatives of the linear position vector of the endpoint with respect to the angular position vector. The Jacobian matrix for a kinematic chain relates differential changes in the joint angle vector with the resulting differential changes in the linear position vector of the endpoint. For a kinematic chain, the function $F_{i}$ is the expression of the endpoint position in $m$ coordinates and the variable $q_{i}$ is the angle of each $n$ joints. 
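As a generic illustration of these definitions, independent of any particular chain, here is the Jacobian of the familiar polar-to-Cartesian map computed with SymPy's jacobian method (the function chosen is just a classic textbook example, not part of the notebook):

```python
import sympy as sp

r, phi = sp.symbols('r phi', positive=True)
F = sp.Matrix([r*sp.cos(phi), r*sp.sin(phi)])  # polar-to-Cartesian map
J = F.jacobian([r, phi])                       # 2x2 matrix of partial derivatives
print(sp.simplify(J.det()))  # r  (the classic polar Jacobian determinant)
```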
For the planar one-link case, the Jacobian matrix of the position vector of the endpoint $r_P$ with respect to the angular position vector $q_1=\theta$ is: $$\mathbf{J}= \large \begin{bmatrix} \frac{\partial x_P}{\partial \theta} \\ \frac{\partial y_P}{\partial \theta} \\ \end{bmatrix}$$ Which evaluates to: In [15]: J = r.diff(θ) J Out[15]: $\displaystyle \left[\begin{matrix}- \ell \operatorname{sin}\left(\theta\right)\\\ell \operatorname{cos}\left(\theta\right)\end{matrix}\right]$ And Sympy has a function to calculate the Jacobian: In [16]: J = r.jacobian([θ]) J Out[16]: $\displaystyle \left[\begin{matrix}- \ell \operatorname{sin}\left(\theta\right)\\\ell \operatorname{cos}\left(\theta\right)\end{matrix}\right]$ We can recalculate the kinematic expressions using the Jacobian matrix, which can be useful for simplifying the deduction. The linear velocity of the end-effector is given by the product of the Jacobian of the kinematic chain and the angular velocity: $$\overrightarrow{\mathbf{v}} = \mathbf{J} \cdot \dot{\theta}$$ Where: In [17]: ω = θ.diff(t) ω Out[17]: $\displaystyle \dot{\theta}$ The angular velocity is also a vector; its direction is perpendicular to the plane of rotation and, using the right-hand rule, this direction is the same as that of the versor $\hat{\mathbf{k}}$ coming out of the screen (paper).
Then: In [18]: velJ = J*ω velJ Out[18]: $\displaystyle \left[\begin{matrix}- \ell \operatorname{sin}\left(\theta\right) \dot{\theta}\\\ell \operatorname{cos}\left(\theta\right) \dot{\theta}\end{matrix}\right]$ And the linear acceleration of the endpoint is given by the derivative of this product: $$\overrightarrow{\mathbf{a}} = \dot{\mathbf{J}} \cdot \overrightarrow{\mathbf{\omega}} + \mathbf{J} \cdot \dot{\overrightarrow{\mathbf{\omega}}}$$ Let's calculate this derivative: In [19]: accJ = J.diff(t)*ω + J*ω.diff(t) accJ Out[19]: $\displaystyle \left[\begin{matrix}- \ell \operatorname{sin}\left(\theta\right) \ddot{\theta} - \ell \operatorname{cos}\left(\theta\right) \dot{\theta}^{2}\\- \ell \operatorname{sin}\left(\theta\right) \dot{\theta}^{2} + \ell \operatorname{cos}\left(\theta\right) \ddot{\theta}\end{matrix}\right]$ These two expressions derived with the Jacobian are the same as the direct derivatives of the equation for the endpoint position. We now will look at the case of a planar kinematic chain with two links, as shown below. The deduction will be similar to the case with one link we just saw. 
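Before moving to the two-link chain, the one-link Jacobian relation above can be spot-checked numerically against a finite-difference derivative of the endpoint position (an illustrative sketch with an arbitrary configuration):

```python
import numpy as np

ell, theta, thetad = 1.0, np.deg2rad(60), 1.5          # arbitrary configuration
J = np.array([-ell*np.sin(theta), ell*np.cos(theta)])  # one-link Jacobian
v = J * thetad                                         # endpoint velocity J·θ̇

# compare with a finite-difference derivative of the endpoint position
dt = 1e-6
th_next = theta + thetad*dt
v_num = np.array([ell*np.cos(th_next) - ell*np.cos(theta),
                  ell*np.sin(th_next) - ell*np.sin(theta)]) / dt
print(np.allclose(v, v_num, atol=1e-5))  # True
```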
We need to define a Cartesian coordinate system and the symbolic variables $t,\:\ell_1,\:\ell_2,\:\theta_1,\:\theta_2$ (and make $\theta_1$ and $\theta_2$ functions of time): In [16]: G = CoordSys3D('') t = Symbol('t') l1, l2 = symbols('ell_1 ell_2', positive=True) θ1, θ2 = dynamicsymbols('theta1 theta2') The position of the endpoint in terms of the joint angles and link lengths is: In [20]: r2_p = (l1*cos(θ1) + l2*cos(θ1 + θ2))*G.i + (l1*sin(θ1) + l2*sin(θ1 + θ2))*G.j r2_p Out[20]: $\displaystyle (\ell_{1} \cos{\left(\theta_{1}{\left(t \right)} \right)} + \ell_{2} \cos{\left(\theta_{1}{\left(t \right)} + \theta_{2}{\left(t \right)} \right)})\mathbf{\hat{i}_{}} + (\ell_{1} \sin{\left(\theta_{1}{\left(t \right)} \right)} + \ell_{2} \sin{\left(\theta_{1}{\left(t \right)} + \theta_{2}{\left(t \right)} \right)})\mathbf{\hat{j}_{}}$ With the components: In [21]: r2_p.components Out[21]: $\displaystyle \left\{ \mathbf{\hat{i}_{}} : \ell_{1} \operatorname{cos}\left(\theta_{1}\right) + \ell_{2} \operatorname{cos}\left(\theta_{1} + \theta_{2}\right), \ \mathbf{\hat{j}_{}} : \ell_{1} \operatorname{sin}\left(\theta_{1}\right) + \ell_{2} \operatorname{sin}\left(\theta_{1} + \theta_{2}\right)\right\}$ And in matrix form: In [22]: r2 = Matrix((r2_p.dot(G.i), r2_p.dot(G.j))) r2 Out[22]: $\displaystyle \left[\begin{matrix}\ell_{1} \operatorname{cos}\left(\theta_{1}\right) + \ell_{2} \operatorname{cos}\left(\theta_{1} + \theta_{2}\right)\\\ell_{1} \operatorname{sin}\left(\theta_{1}\right) + \ell_{2} \operatorname{sin}\left(\theta_{1} + \theta_{2}\right)\end{matrix}\right]$ ### Joint and segment angles¶ Note that $\theta_2$ is a joint angle (measured in the joint space); the angle of segment 2 with respect to the horizontal is $\theta_1+\theta_2$ and is referred to as an angle in the segmental space. Joint and segment angles are also referred to as relative and absolute angles, respectively.
### Inverse kinematics¶ Using the cosine rule, in terms of the endpoint position, the angle $\theta_2$ is: $$x_P^2 + y_P^2 = \ell_1^2+\ell_2^2 - 2\ell_1 \ell_2 \cos(\pi-\theta_2)$$ $$\theta_2 = \arccos\left(\frac{x_P^2 + y_P^2 - \ell_1^2 - \ell_2^2}{2\ell_1 \ell_2}\right)$$ To find the angle $\theta_1$, if we now look at the triangle in red in the figure below, its angle $\phi$ is: $$\phi = \arctan\left(\frac{\ell_2 \sin(\theta_2)}{\ell_1 + \ell_2 \cos(\theta_2)}\right)$$ And the angle of its hypotenuse with the horizontal is: $$\theta_1 + \phi = \arctan\left(\frac{y_P}{x_P}\right)$$ Then, the angle $\theta_1$ is: $$\theta_1 = \arctan\left(\frac{y_P}{x_P}\right) - \arctan\left(\frac{\ell_2 \sin(\theta_2)}{\ell_1+\ell_2 \cos(\theta_2)}\right)$$ Note that there are two possible sets of $(\theta_1, \theta_2)$ angles for the same $(x_P, y_P)$ coordinate that satisfy the equations above. The figure below shows in orange another possible configuration of the kinematic chain with the same endpoint coordinate. The other solution is $\theta_2'=2\pi - \theta_2$, but $\sin(\theta_2')=-\sin(\theta_2)$ and then the $\arctan$ term in the last equation becomes negative. Even for a simple two-link chain we already have a problem of redundancy: there is more than one joint configuration for the same endpoint position; this will be much more problematic for chains with more links (more degrees of freedom).
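The inverse-kinematics formulas above can be checked with a quick numeric round trip (the link lengths and target point are arbitrary illustrative values; arctan2 is used for the second term to get the quadrant right):

```python
import numpy as np

L1, L2 = 0.5, 0.5        # link lengths [m] (arbitrary)
xp, yp = 0.3, 0.6        # a reachable endpoint (arbitrary)

# inverse kinematics from the cosine-rule derivation
th2 = np.arccos((xp**2 + yp**2 - L1**2 - L2**2) / (2*L1*L2))
th1 = np.arctan2(yp, xp) - np.arctan2(L2*np.sin(th2), L1 + L2*np.cos(th2))

# forward kinematics must recover the endpoint
x = L1*np.cos(th1) + L2*np.cos(th1 + th2)
y = L1*np.sin(th1) + L2*np.sin(th1 + th2)
print(np.allclose([x, y], [xp, yp]))  # True
```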
## Differential kinematics¶ The linear velocity of the endpoint is: In [23]: vel2 = r2.diff(t) vel2 Out[23]: $\displaystyle \left[\begin{matrix}- \ell_{1} \operatorname{sin}\left(\theta_{1}\right) \dot{\theta}_{1} - \ell_{2} \left(\dot{\theta}_{1} + \dot{\theta}_{2}\right) \operatorname{sin}\left(\theta_{1} + \theta_{2}\right)\\\ell_{1} \operatorname{cos}\left(\theta_{1}\right) \dot{\theta}_{1} + \ell_{2} \left(\dot{\theta}_{1} + \dot{\theta}_{2}\right) \operatorname{cos}\left(\theta_{1} + \theta_{2}\right)\end{matrix}\right]$ The linear velocity of the endpoint is the sum of the velocities at each joint, i.e., it is the velocity of the endpoint in relation to joint 2, for instance, $\ell_2\cos(\theta_1 + \theta_2)\dot{\theta}_1$, plus the velocity of joint 2 in relation to joint 1, for instance, $\ell_1\dot{\theta}_1 \cos(\theta_1)$, and this last term we already saw in the one-link example. In classical mechanics this is known as relative velocity, an example of a Galilean transformation.
The linear acceleration of the endpoint is: In [24]: acc2 = r2.diff(t, 2) acc2 Out[24]: $\displaystyle \left[\begin{matrix}- (\ell_{1} \operatorname{sin}\left(\theta_{1}\right) \ddot{\theta}_{1} + \ell_{1} \operatorname{cos}\left(\theta_{1}\right) \dot{\theta}_{1}^{2} + \ell_{2} \left(\dot{\theta}_{1} + \dot{\theta}_{2}\right)^{2} \operatorname{cos}\left(\theta_{1} + \theta_{2}\right) + \ell_{2} \left(\ddot{\theta}_{1} + \ddot{\theta}_{2}\right) \operatorname{sin}\left(\theta_{1} + \theta_{2}\right))\\- \ell_{1} \operatorname{sin}\left(\theta_{1}\right) \dot{\theta}_{1}^{2} + \ell_{1} \operatorname{cos}\left(\theta_{1}\right) \ddot{\theta}_{1} - \ell_{2} \left(\dot{\theta}_{1} + \dot{\theta}_{2}\right)^{2} \operatorname{sin}\left(\theta_{1} + \theta_{2}\right) + \ell_{2} \left(\ddot{\theta}_{1} + \ddot{\theta}_{2}\right) \operatorname{cos}\left(\theta_{1} + \theta_{2}\right)\end{matrix}\right]$ We can separate the equation above for the linear acceleration in three types of terms: proportional to $\ddot{\theta}$ and to $\dot{\theta}^2$, as we already saw for the one-link case, and a new term, proportional to $\dot{\theta}_1\dot{\theta}_2$: In [26]: acc2 = acc2.expand() A = θ1.diff(t, 2) B = θ2.diff(t, 2) tg = A*Matrix((acc2[0].coeff(A),acc2[1].coeff(A)))+B*Matrix((acc2[0].coeff(B),acc2[1].coeff(B))) A = θ1.diff(t)**2 B = θ2.diff(t)**2 ct = A*Matrix((acc2[0].coeff(A),acc2[1].coeff(A)))+B*Matrix((acc2[0].coeff(B),acc2[1].coeff(B))) A = θ1.diff(t)*θ2.diff(t) co = A*Matrix((acc2[0].coeff(A),acc2[1].coeff(A))) display(Math(mlatex(r'Tangential:\:') + mlatex(tg))) display(Math(mlatex(r'Centripetal:') + mlatex(ct))) display(Math(mlatex(r'Coriolis:\;\;\;\;\:') + mlatex(co))) $\displaystyle Tangential:\:\left[\begin{matrix}- \ell_{2} \operatorname{sin}\left(\theta_{1} + \theta_{2}\right) \ddot{\theta}_{2} + \left(- \ell_{1} \operatorname{sin}\left(\theta_{1}\right) - \ell_{2} \operatorname{sin}\left(\theta_{1} + \theta_{2}\right)\right) \ddot{\theta}_{1}\\\ell_{2} 
\operatorname{cos}\left(\theta_{1} + \theta_{2}\right) \ddot{\theta}_{2} + \left(\ell_{1} \operatorname{cos}\left(\theta_{1}\right) + \ell_{2} \operatorname{cos}\left(\theta_{1} + \theta_{2}\right)\right) \ddot{\theta}_{1}\end{matrix}\right]$ $\displaystyle Centripetal:\left[\begin{matrix}- \ell_{2} \operatorname{cos}\left(\theta_{1} + \theta_{2}\right) \dot{\theta}_{2}^{2} + \left(- \ell_{1} \operatorname{cos}\left(\theta_{1}\right) - \ell_{2} \operatorname{cos}\left(\theta_{1} + \theta_{2}\right)\right) \dot{\theta}_{1}^{2}\\- \ell_{2} \operatorname{sin}\left(\theta_{1} + \theta_{2}\right) \dot{\theta}_{2}^{2} + \left(- \ell_{1} \operatorname{sin}\left(\theta_{1}\right) - \ell_{2} \operatorname{sin}\left(\theta_{1} + \theta_{2}\right)\right) \dot{\theta}_{1}^{2}\end{matrix}\right]$ $\displaystyle Coriolis:\;\;\;\;\:\left[\begin{matrix}- 2 \ell_{2} \operatorname{cos}\left(\theta_{1} + \theta_{2}\right) \dot{\theta}_{1} \dot{\theta}_{2}\\- 2 \ell_{2} \operatorname{sin}\left(\theta_{1} + \theta_{2}\right) \dot{\theta}_{1} \dot{\theta}_{2}\end{matrix}\right]$ This new term is called the Coriolis acceleration; it is 'felt' by the endpoint when its distance to the instantaneous center of rotation varies, due to the links' constraints, and as a consequence the endpoint motion is deflected (its direction is perpendicular to the relative linear velocity of the endpoint with respect to the linear velocity at the second joint, $\mathbf{v} - \mathbf{v}_{joint2}$).
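The Coriolis term isolated above can be reproduced in a self-contained way (a sketch using plain SymPy Function symbols rather than the notebook's dynamicsymbols; the expected expression is the one derived above):

```python
import sympy as sp

t = sp.Symbol('t')
l1, l2 = sp.symbols('l1 l2', positive=True)
th1, th2 = sp.Function('th1')(t), sp.Function('th2')(t)

# endpoint position of the planar two-link chain
x = l1*sp.cos(th1) + l2*sp.cos(th1 + th2)
y = l1*sp.sin(th1) + l2*sp.sin(th1 + th2)
acc = sp.Matrix([x, y]).diff(t, 2).expand()

# the Coriolis term is the coefficient of the cross product θ̇1·θ̇2
cross = th1.diff(t)*th2.diff(t)
cor = sp.Matrix([acc[0].coeff(cross), acc[1].coeff(cross)])
expected = sp.Matrix([-2*l2*sp.cos(th1 + th2), -2*l2*sp.sin(th1 + th2)])
print(sp.simplify(cor - expected))  # a 2x1 zero matrix: the terms match
```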
Let's now deduce the Jacobian for this planar two-link chain: $$\mathbf{J} = \large \begin{bmatrix} \frac{\partial x_P}{\partial \theta_{1}} & \frac{\partial x_P}{\partial \theta_{2}} \\ \frac{\partial y_P}{\partial \theta_{1}} & \frac{\partial y_P}{\partial \theta_{2}} \\ \end{bmatrix}$$ We could manually run: J = Matrix([[r2[0].diff(θ1), r2[0].diff(θ2)], [r2[1].diff(θ1), r2[1].diff(θ2)]]) But it's shorter with the Jacobian function from Sympy: In [27]: J2 = r2.jacobian([θ1, θ2]) J2 Out[27]: $\displaystyle \left[\begin{matrix}- \ell_{1} \operatorname{sin}\left(\theta_{1}\right) - \ell_{2} \operatorname{sin}\left(\theta_{1} + \theta_{2}\right) & - \ell_{2} \operatorname{sin}\left(\theta_{1} + \theta_{2}\right)\\\ell_{1} \operatorname{cos}\left(\theta_{1}\right) + \ell_{2} \operatorname{cos}\left(\theta_{1} + \theta_{2}\right) & \ell_{2} \operatorname{cos}\left(\theta_{1} + \theta_{2}\right)\end{matrix}\right]$ Using the Jacobian, the linear velocity of the endpoint is: $$\mathbf{v_J} = \mathbf{J} \cdot \begin{bmatrix}\dot{\theta_1}\\ \dot{\theta_2}\\ \end{bmatrix}$$ Where: In [28]: ω2 = Matrix((θ1, θ2)).diff(t) ω2 Out[28]: $\displaystyle \left[\begin{matrix}\dot{\theta}_{1}\\\dot{\theta}_{2}\end{matrix}\right]$ Then: In [29]: vel2J = J2*ω2 vel2J Out[29]: $\displaystyle \left[\begin{matrix}- \ell_{2} \operatorname{sin}\left(\theta_{1} + \theta_{2}\right) \dot{\theta}_{2} + \left(- \ell_{1} \operatorname{sin}\left(\theta_{1}\right) - \ell_{2} \operatorname{sin}\left(\theta_{1} + \theta_{2}\right)\right) \dot{\theta}_{1}\\\ell_{2} \operatorname{cos}\left(\theta_{1} + \theta_{2}\right) \dot{\theta}_{2} + \left(\ell_{1} \operatorname{cos}\left(\theta_{1}\right) + \ell_{2} \operatorname{cos}\left(\theta_{1} + \theta_{2}\right)\right) \dot{\theta}_{1}\end{matrix}\right]$ This expression derived with the Jacobian is the same as the first-order derivative of the equation for the endpoint position.
We can show this equality by comparing the two expressions with Sympy: In [30]: vel2.expand() == vel2J.expand() Out[30]: True Once again, the linear acceleration of the endpoint is given by the derivative of the product between the Jacobian and the angular velocity: $$\mathbf{a} = \dot{\mathbf{J}} \cdot \mathbf{\omega} + \mathbf{J} \cdot \dot{\mathbf{\omega}}$$ Let's calculate this derivative: In [31]: acc2J = J2.diff(t)*ω2 + J2*ω2.diff(t) acc2J Out[31]: $\displaystyle \left[\begin{matrix}- \ell_{2} \left(\dot{\theta}_{1} + \dot{\theta}_{2}\right) \operatorname{cos}\left(\theta_{1} + \theta_{2}\right) \dot{\theta}_{2} - \ell_{2} \operatorname{sin}\left(\theta_{1} + \theta_{2}\right) \ddot{\theta}_{2} + \left(- \ell_{1} \operatorname{sin}\left(\theta_{1}\right) - \ell_{2} \operatorname{sin}\left(\theta_{1} + \theta_{2}\right)\right) \ddot{\theta}_{1} + \left(- \ell_{1} \operatorname{cos}\left(\theta_{1}\right) \dot{\theta}_{1} - \ell_{2} \left(\dot{\theta}_{1} + \dot{\theta}_{2}\right) \operatorname{cos}\left(\theta_{1} + \theta_{2}\right)\right) \dot{\theta}_{1}\\- \ell_{2} \left(\dot{\theta}_{1} + \dot{\theta}_{2}\right) \operatorname{sin}\left(\theta_{1} + \theta_{2}\right) \dot{\theta}_{2} + \ell_{2} \operatorname{cos}\left(\theta_{1} + \theta_{2}\right) \ddot{\theta}_{2} + \left(\ell_{1} \operatorname{cos}\left(\theta_{1}\right) + \ell_{2} \operatorname{cos}\left(\theta_{1} + \theta_{2}\right)\right) \ddot{\theta}_{1} + \left(- \ell_{1} \operatorname{sin}\left(\theta_{1}\right) \dot{\theta}_{1} - \ell_{2} \left(\dot{\theta}_{1} + \dot{\theta}_{2}\right) \operatorname{sin}\left(\theta_{1} + \theta_{2}\right)\right) \dot{\theta}_{1}\end{matrix}\right]$ Once again, the expression above is the same as the second-order derivative of the equation for the endpoint position: In [32]: acc2.expand() == acc2J.expand() Out[32]: True Let's plot some simulated data to have an idea of the two-link kinematics. 
Consider 1 s of movement duration, $\ell_1=\ell_2=0.5m, \theta_1(0)=\theta_2(0)=0$, $\theta_1(1)=\theta_2(1)=90^o$, and that the endpoint trajectory is a minimum-jerk movement. First, the simulated trajectories: In [25]: t, p0, pf, d = symbols('t p0 pf d') rx = dynamicsymbols('rx', real=True) # or Function('rx')(t) ry = dynamicsymbols('ry', real=True) # or Function('ry')(t) # minimum jerk kinematics mjt = p0 + (pf - p0)*(10*(t/d)**3 - 15*(t/d)**4 + 6*(t/d)**5) rfu = lambdify((t, p0, pf, d), mjt, 'numpy') vfu = lambdify((t, p0, pf, d), diff(mjt, t, 1), 'numpy') afu = lambdify((t, p0, pf, d), diff(mjt, t, 2), 'numpy') jfu = lambdify((t, p0, pf, d), diff(mjt, t, 3), 'numpy') # values d, L1, L2 = 1, .5, .5 #initial values: p0, pf = [-0.5, 0.5], [0, .5*np.sqrt(2)] ts = np.arange(0.01, 1.01, .01) # endpoint kinematics x = rfu(ts, p0[0], pf[0], d) y = rfu(ts, p0[1], pf[1], d) vx = vfu(ts, p0[0], pf[0], d) vy = vfu(ts, p0[1], pf[1], d) ax = afu(ts, p0[0], pf[0], d) ay = afu(ts, p0[1], pf[1], d) jx = jfu(ts, p0[0], pf[0], d) jy = jfu(ts, p0[1], pf[1], d) # inverse kinematics ang2b = np.arccos((x**2 + y**2 - L1**2 - L2**2)/(2*L1*L2)) ang1b = np.arctan2(y, x) - (np.arctan2(L2*np.sin(ang2b), (L1+L2*np.cos(ang2b)))) ang2 = acos((rx**2 + ry**2 - l1**2 - l2**2)/(2*l1*l2)) ang2fu = lambdify((rx ,ry, l1, l2), ang2, 'numpy'); ang2 = ang2fu(x, y, L1, L2) ang1 = atan2(ry, rx) - (atan(l2*sin(acos((rx**2 + ry**2 - l1**2 - l2**2)/(2*l1*l2)))/ \ (l1+l2*cos(acos((rx**2 + ry**2 - l1**2 - l2**2)/(2*l1*l2)))))) ang1fu = lambdify((rx, ry, l1, l2), ang1, 'numpy'); ang1 = ang1fu(x, y, L1, L2) ang2b = acos((rx**2 + ry**2 - l1**2 - l2**2)/(2*l1*l2)) ang1b = atan2(ry, rx) - (atan(l2*sin(acos((rx**2 + ry**2 - l1**2 - l2**2)/(2*l1*l2)))/ \ (l1 + l2*cos(acos((rx**2 + ry**2-l1**2 - l2**2)/(2*l1*l2)))))) X, Y, Xd, Yd, Xdd, Ydd, Xddd, Yddd = symbols('X Y Xd Yd Xdd Ydd Xddd Yddd') dicti = {rx:X, ry:Y, rx.diff(t, 1):Xd, ry.diff(t, 1):Yd, \ rx.diff(t, 2):Xdd, ry.diff(t, 2):Ydd, rx.diff(t, 3):Xddd, 
ry.diff(t, 3):Yddd, l1:L1, l2:L2} vang1 = diff(ang1b, t, 1) vang1 = vang1.subs(dicti) vang1fu = lambdify((X, Y, Xd, Yd, l1, l2), vang1, 'numpy') vang1 = vang1fu(x, y, vx, vy, L1, L2) vang2 = diff(ang2b, t, 1) vang2 = vang2.subs(dicti) vang2fu = lambdify((X, Y, Xd, Yd, l1, l2), vang2, 'numpy') vang2 = vang2fu(x, y, vx, vy, L1, L2) aang1 = diff(ang1b, t, 2) aang1 = aang1.subs(dicti) aang1fu = lambdify((X, Y, Xd, Yd, Xdd, Ydd, l1, l2), aang1, 'numpy') aang1 = aang1fu(x, y, vx, vy, ax, ay, L1, L2) aang2 = diff(ang2b, t, 2) aang2 = aang2.subs(dicti) aang2fu = lambdify((X, Y, Xd, Yd, Xdd, Ydd, l1, l2), aang2, 'numpy') aang2 = aang2fu(x, y, vx, vy, ax, ay, L1, L2) jang1 = diff(ang1b, t, 3) jang1 = jang1.subs(dicti) jang1fu = lambdify((X, Y, Xd, Yd, Xdd, Ydd, Xddd, Yddd, l1, l2), jang1, 'numpy') jang1 = jang1fu(x, y, vx, vy, ax, ay, jx, jy, L1, L2) jang2 = diff(ang2b, t, 3) jang2 = jang2.subs(dicti) jang2fu = lambdify((X, Y, Xd, Yd, Xdd, Ydd, Xddd, Yddd, l1, l2), jang2, 'numpy') jang2 = jang2fu(x, y, vx, vy, ax, ay, jx, jy, L1, L2) And the plots for the trajectories: In [26]: fig, hax = plt.subplots(2, 4, sharex = True, figsize=(14, 7)) hax[0, 0].plot(ts, x, 'r', linewidth=3, label = 'x') hax[0, 0].plot(ts, y, 'k', linewidth=3, label = 'y') hax[0, 0].set_title('Linear displacement [$m$]') hax[0, 0].legend(loc='best').get_frame().set_alpha(0.8) hax[0, 0].set_ylabel('Endpoint') hax[0, 1].plot(ts, vx, 'r', linewidth=3) hax[0, 1].plot(ts, vy, 'k', linewidth=3) hax[0, 1].set_title('Linear velocity [$m/s$]') hax[0, 2].plot(ts, ax, 'r', linewidth=3) hax[0, 2].plot(ts, ay, 'k', linewidth=3) hax[0, 2].set_title('Linear acceleration [$m/s^2$]') hax[0, 3].plot(ts, jx, 'r', linewidth=3) hax[0, 3].plot(ts, jy, 'k', linewidth=3) hax[0, 3].set_title('Linear jerk [$m/s^3$]') hax[1, 0].plot(ts, ang1*180/np.pi, 'b', linewidth=3, label = 'Ang1') hax[1, 0].plot(ts, ang2*180/np.pi, 'g', linewidth=3, label = 'Ang2') hax[1, 0].set_title('Angular displacement [ $^o$]') hax[1, 
0].legend(loc='best').get_frame().set_alpha(0.8) hax[1, 0].set_ylabel('Joint') hax[1, 1].plot(ts, vang1*180/np.pi, 'b', linewidth=3) hax[1, 1].plot(ts, vang2*180/np.pi, 'g', linewidth=3) hax[1, 1].set_title('Angular velocity [ $^o/s$]') hax[1, 2].plot(ts, aang1*180/np.pi, 'b', linewidth=3) hax[1, 2].plot(ts, aang2*180/np.pi, 'g', linewidth=3) hax[1, 2].set_title('Angular acceleration [ $^o/s^2$]') hax[1, 3].plot(ts, jang1*180/np.pi, 'b', linewidth=3) hax[1, 3].plot(ts, jang2*180/np.pi, 'g', linewidth=3) hax[1, 3].set_title('Angular jerk [ $^o/s^3$]') tit = fig.suptitle('Minimum jerk trajectory kinematics of a two-link chain', fontsize=20) for i, hax2 in enumerate(hax.flat): hax2.locator_params(nbins=5) hax2.grid(True) if i > 3: hax2.set_xlabel('Time [$s$]') fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(14, 4)) ax1.plot(x, y, 'r', linewidth=3) ax1.set_xlabel('Displacement in x [$m$]') ax1.set_ylabel('Displacement in y [$m$]') ax1.set_title('Endpoint space', fontsize=14) ax1.axis('equal') ax1.grid(True) ax2.plot(ang1*180/np.pi, ang2*180/np.pi, 'b', linewidth=3) ax2.set_xlabel('Displacement in joint 1 [ $^o$]') ax2.set_ylabel('Displacement in joint 2 [ $^o$]') ax2.set_title('Joint space', fontsize=14) ax2.axis('equal') ax2.grid(True) • Read pages 477-494 of the 10th chapter of Ruina and Rudra's book for a review of differential equations and kinematics. ## Problems¶ 1. For the numerical example of the two-link chain plotted above, calculate and plot the values for each type of acceleration (tangential, centripetal and Coriolis). See solution below. 2. For the two-link chain, calculate and interpret the Jacobian and the expressions for the position, velocity, and acceleration of the endpoint for the following cases: a) When the first joint (the joint at the base) is fixed at $0^o$. b) When the second joint is fixed at $0^o$. 3.
For the two-link chain, a special case of movement occurs when the endpoint moves along a line passing through the first joint (the joint at the base). A system with this behavior is known as a polar manipulator (Mussa-Ivaldi, 1986). For simplicity, consider that the lengths of the two links are equal to $\ell$. In this case, the two joint angles are related by: $2\theta_1+\theta_2=\pi$. a) Calculate the Jacobian for this polar manipulator and compare it with the Jacobian for the standard two-link chain. Note the difference between the off-diagonal terms. b) Calculate the expressions for the endpoint position, velocity, and acceleration. c) For the endpoint acceleration of the polar manipulator, identify the tangential, centrifugal, and Coriolis components and compare them with the expressions for the standard two-link chain. 4. Deduce the equations for the kinematics of a two-link pendulum with the angles in relation to the vertical. 5. Deduce the equations for the kinematics of a two-link system using segment angles and compare with the deduction employing joint angles. 6. 
Calculate the Jacobian matrix for the following function: $$f(x, y) = \begin{bmatrix} x^2 y \\ 5 x + \sin y \end{bmatrix}$$ In [35]: # tangential acceleration A1, A2, A1d, A2d, A1dd, A2dd = symbols('A1 A2 A1d A2d A1dd A2dd') dicti = {θ1:A1, θ2:A2, θ1.diff(t, 1):A1d, θ2.diff(t,1):A2d, \ θ1.diff(t, 2):A1dd, θ2.diff(t, 2):A2dd, l1:L1, l2:L2} tg2 = tg.subs(dicti) tg2fu = lambdify((A1, A2, A1dd, A2dd), tg2, 'numpy'); tg2n = tg2fu(ang1, ang2, aang1, aang2) tg2n = tg2n.reshape((2, 100)).T # centripetal acceleration ct2 = ct.subs(dicti) ct2fu = lambdify((A1, A2, A1d, A2d), ct2, 'numpy'); ct2n = ct2fu(ang1, ang2, vang1, vang2) ct2n = ct2n.reshape((2, 100)).T # coriolis acceleration co2 = co.subs(dicti) co2fu = lambdify((A1, A2, A1d, A2d), co2, 'numpy'); co2n = co2fu(ang1, ang2, vang1, vang2) co2n = co2n.reshape((2, 100)).T # total acceleration (it has to be the same calculated before) acc_tot = tg2n + ct2n + co2n #### And the corresponding plots¶ In [36]: fig, hax = plt.subplots(1, 3, sharex = True, sharey = True, figsize=(12, 5)) hax[0].plot(ts, acc_tot[:, 0], color=(1, 0, 0, .3), linewidth=5, label = 'x total') hax[0].plot(ts, acc_tot[:, 1], color=(0, 0, 0, .3), linewidth=5, label = 'y total') hax[0].plot(ts, tg2n[:, 0], 'r', linewidth=2, label = 'x') hax[0].plot(ts, tg2n[:, 1], 'k', linewidth=2, label = 'y') hax[0].set_title('Tangential') hax[0].set_ylabel('Endpoint acceleration [$m/s^2$]') hax[0].set_xlabel('Time [$s$]') hax[1].plot(ts, acc_tot[:, 0], color=(1,0,0,.3), linewidth=5, label = 'x total') hax[1].plot(ts, acc_tot[:, 1], color=(0,0,0,.3), linewidth=5, label = 'y total') hax[1].plot(ts, ct2n[:, 0], 'r', linewidth=2, label = 'x') hax[1].plot(ts, ct2n[:, 1], 'k', linewidth=2, label = 'y') hax[1].set_title('Centripetal') hax[1].set_xlabel('Time [$s$]') hax[1].legend(loc='best').get_frame().set_alpha(0.8) hax[2].plot(ts, acc_tot[:, 0], color=(1,0,0,.3), linewidth=5, label = 'x total') hax[2].plot(ts, acc_tot[:, 1], color=(0,0,0,.3), linewidth=5, label = 'y total') 
hax[2].plot(ts, co2n[:, 0], 'r', linewidth=2, label = 'x') hax[2].plot(ts, co2n[:, 1], 'k', linewidth=2, label = 'y') hax[2].set_title('Coriolis') hax[2].set_xlabel('Time [$s$]') tit = fig.suptitle('Acceleration terms for the minimum jerk trajectory of a two-link chain', fontsize=16) for hax2 in hax: hax2.locator_params(nbins=5) hax2.grid(True)
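For Problem 6, the same jacobian method used throughout the notebook applies directly; here is a sketch of a solution, analogous in spirit to the solution given above for Problem 1:

```python
import sympy as sp

x, y = sp.symbols('x y')
f = sp.Matrix([x**2 * y, 5*x + sp.sin(y)])  # the function from Problem 6
J = f.jacobian([x, y])
print(J)  # Matrix([[2*x*y, x**2], [5, cos(y)]])
```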
https://darrenjw.wordpress.com/2013/10/01/marginal-likelihood-from-tempered-bayesian-posteriors/
# Marginal likelihood from tempered Bayesian posteriors

## Introduction

In the previous post I showed that it is possible to couple parallel tempered MCMC chains in order to improve mixing. Such methods can be used when the target of interest is a Bayesian posterior distribution that is difficult to sample. There are (at least) a couple of obvious ways that one can temper a Bayesian posterior distribution. Perhaps the most obvious way is a simple flattening, so that if

$\pi(\theta|y) \propto \pi(\theta)\pi(y|\theta)$

is the posterior distribution, then for $t\in [0,1]$ we define

$\pi_t(\theta|y) \propto \pi(\theta|y)^t \propto [ \pi(\theta)\pi(y|\theta) ]^t.$

This corresponds with the tempering that is often used in statistical physics applications. We recover the posterior of interest for $t=1$ and tend to a flat distribution as $t\longrightarrow 0$. However, for Bayesian posterior distributions, there is a different way of tempering that is often more natural and useful, and that is to temper using the power posterior, defined by

$\pi_t(\theta|y) \propto \pi(\theta)\pi(y|\theta)^t.$

Here we again recover the posterior for $t=1$, but get the prior for $t=0$. Thus, the family of distributions forms a natural bridge or path from the prior to the posterior distributions. The power posterior is a special case of the more general concept of a geometric path from distribution $f(\theta)$ (at $t=0$) to $g(\theta)$ (at $t=1$) defined by

$h_t(\theta) \propto f(\theta)^{1-t}g(\theta)^t,$

where, in our case, $f(\cdot)$ is the prior and $g(\cdot)$ is the posterior. So, given a posterior distribution that is difficult to sample, choose a temperature schedule $0=t_0 < t_1 < \cdots < t_N=1$ and run a parallel tempering scheme as outlined in the previous post.
The idea is that for small values of $t$ mixing will be good, as prior-like distributions are usually well-behaved, and the mixing of these "high temperature" chains can help to improve the mixing of the "low temperature" chains that are more like the posterior (note that $t$ is really an inverse temperature parameter the way I've defined it here…).

## Marginal likelihood and normalising constants

The marginal likelihood of a Bayesian model is

$\pi(y) = \int_\Theta \pi(\theta)\pi(y|\theta)d\theta.$

This quantity is of interest for many reasons, including calculation of the Bayes factor between two competing models. Note that this quantity has several different names in different fields. In particular, it is often known as the evidence, due to its role in Bayes factors. It is also worth noting that it is the normalising constant of the Bayesian posterior distribution. Although it is very easy to describe and define, it is notoriously difficult to compute reliably for complex models.

The normalising constant is conceptually very easy to estimate. From the above integral representation, it is clear that

$\pi(y) = E_\pi [ \pi(y|\theta) ],$

where the expectation is taken with respect to the prior. So, given samples from the prior, $\theta_1,\theta_2,\ldots,\theta_n$, we can construct the Monte Carlo estimate

$\displaystyle \widehat{\pi}(y) = \frac{1}{n}\sum_{i=1}^n \pi(y|\theta_i),$

and this will be a consistent estimator of the true evidence under fairly mild regularity conditions. Unfortunately, in practice it is likely to be a very poor estimator if the posterior and prior are not very similar. Now, we could also use Bayes theorem to re-write the integral as an expectation with respect to the posterior, so we could then use samples from the posterior to estimate the evidence. This leads to the harmonic mean estimator of the evidence, which has been described as the worst Monte Carlo method ever!
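To make this concrete, here is a small Python sketch (my own toy example, not from the post) of the prior-sampling estimator for a conjugate normal model where the true evidence is known in closed form:

```python
import numpy as np

rng = np.random.default_rng(1)
y = 1.5  # a single observed data point (illustrative)

def norm_pdf(x, mu, sd):
    # density of N(mu, sd^2) evaluated at x
    return np.exp(-0.5 * ((x - mu) / sd) ** 2) / (sd * np.sqrt(2 * np.pi))

# Model: theta ~ N(0, 1), y | theta ~ N(theta, 1), so marginally y ~ N(0, 2)
exact = norm_pdf(y, 0.0, np.sqrt(2.0))

theta = rng.normal(0.0, 1.0, size=100_000)   # draws from the prior
est = norm_pdf(y, theta, 1.0).mean()         # Monte Carlo estimate of pi(y)
print(est, exact)                            # the two should be close
```

Here the prior and posterior are similar, so the estimator behaves well; with many observations the likelihood concentrates and most prior draws contribute essentially nothing, which is exactly what makes the estimator unstable in practice.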
Now it turns out that there are many different ways one can construct estimators of the evidence using samples from the prior and the posterior, some of which are considerably better than the two I've outlined. This is the subject of the bridge sampling paper of Meng and Wong. However, the reality is that no method will work well if the prior and posterior are very different.

If we have tempered chains, then we have a sequence of chains targeting distributions which, by construction, are not too different, and so we can use the output from tempered chains in order to construct estimators of the evidence that are more numerically stable. If we call the evidence of the $i$th chain $z_i$, so that $z_0=1$ and $z_N=\pi(y)$, then we can write the evidence in telescoping fashion as

$\displaystyle \pi(y)=z_N = \frac{z_N}{z_0} = \frac{z_1}{z_0}\times \frac{z_2}{z_1}\times \cdots \times \frac{z_N}{z_{N-1}}.$

Now the $i$th term in this product is $z_{i+1}/z_{i}$, which can be estimated using the output from the $i$th and/or $(i+1)$th chain(s). Again, this can be done in a variety of ways, using your favourite bridge sampling estimator, but the point is that the estimator should be reasonably good due to the fact that the $i$th and $(i+1)$th targets are very similar. For the power posterior, the simplest method is to write

$\displaystyle \frac{z_{i+1}}{z_i} = \frac{\displaystyle \int \pi(\theta)\pi(y|\theta)^{t_{i+1}}d\theta}{z_i} = \int \pi(y|\theta)^{t_{i+1}-t_i}\times \frac{\pi(y|\theta)^{t_i}\pi(\theta)}{z_i}d\theta = E_i\left[\pi(y|\theta)^{t_{i+1}-t_i}\right],$

where the expectation is with respect to the $i$th target, and hence can be estimated in the usual way using samples from the $i$th chain.
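The telescoping estimator can be checked end to end on a toy conjugate model (my own sketch, not the post's code), where each power posterior is a known normal distribution and the exact log-evidence is available:

```python
import numpy as np

rng = np.random.default_rng(0)
y = 1.5
ts = np.linspace(0.0, 1.0, 11)   # temperature ladder 0 = t_0 < ... < t_N = 1
n = 200_000

def loglik(theta):
    # log pi(y|theta) for y | theta ~ N(theta, 1)
    return -0.5 * np.log(2 * np.pi) - 0.5 * (y - theta) ** 2

log_z = 0.0
for t_lo, t_hi in zip(ts[:-1], ts[1:]):
    # With prior N(0,1), the power posterior is exactly N(t*y/(1+t), 1/(1+t)),
    # so we can draw from it directly (in a real problem these draws would
    # come from the tempered MCMC chains)
    theta = rng.normal(t_lo * y / (1 + t_lo), np.sqrt(1.0 / (1 + t_lo)), size=n)
    # z_{i+1}/z_i = E_i[ exp{(t_{i+1} - t_i) * log pi(y|theta)} ]
    log_z += np.log(np.mean(np.exp((t_hi - t_lo) * loglik(theta))))

exact = -0.5 * np.log(4 * np.pi) - y**2 / 4   # log N(y; 0, 2)
print(log_z, exact)
```

Because consecutive rungs of the ladder are close, each ratio estimate has low variance, and the accumulated log-evidence agrees closely with the exact value.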
For numerical stability, in practice we compute the log of the evidence as

$\displaystyle \log\pi(y) = \sum_{i=0}^{N-1} \log\frac{z_{i+1}}{z_i} = \sum_{i=0}^{N-1} \log E_i\left[\pi(y|\theta)^{t_{i+1}-t_i}\right]$

$\displaystyle \qquad = \sum_{i=0}^{N-1} \log E_i\left[\exp\{(t_{i+1}-t_i)\log\pi(y|\theta)\}\right].\qquad(\dagger)$

The above expression is exact, and is the obvious formula to use for computation. However, it is clear that if $t_i$ and $t_{i+1}$ are sufficiently close, it will be approximately OK to switch the expectation and exponential, giving

$\displaystyle \log\pi(y) \approx \sum_{i=0}^{N-1}(t_{i+1}-t_i)E_i\left[\log\pi(y|\theta)\right].$

In the continuous limit, this gives rise to the well-known path sampling identity,

$\displaystyle \log\pi(y) = \int_0^1 E_t\left[\log\pi(y|\theta)\right]dt.$

So, an alternative approach to computing the evidence is to use the samples to approximately numerically integrate the above integral, say, using the trapezium rule. However, it isn't completely clear (to me) that this is better than using $(\dagger)$ directly, since then there is no numerical integration error to worry about.

## Numerical illustration

We can illustrate these ideas using the simple double potential well example from the previous post. Now that example doesn't really correspond to a Bayesian posterior, and is tempered directly, rather than as a power posterior, but essentially the same ideas follow for general parallel tempered distributions. In general, we can use the sample to estimate the ratio of the last and first normalising constants, $z_N/z_0$. Here it isn't obvious why we'd want to know that, but we'll compute it anyway to illustrate the method.
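As an aside, before turning to the double well, the path sampling identity can itself be checked numerically on a toy conjugate model (my own sketch: prior $N(0,1)$, likelihood $N(y;\theta,1)$), since there $E_t[\log\pi(y|\theta)]$ is available in closed form under the power posterior $N(ty/(1+t),\, 1/(1+t))$:

```python
import numpy as np

y = 1.5
ts = np.linspace(0.0, 1.0, 101)

# E_t[log pi(y|theta)] with theta ~ N(m_t, v_t), m_t = t*y/(1+t), v_t = 1/(1+t):
# E[(y - theta)^2] = (y - m_t)^2 + v_t = (y/(1+t))^2 + 1/(1+t)
e_loglik = -0.5 * np.log(2 * np.pi) - 0.5 * ((y / (1 + ts)) ** 2 + 1.0 / (1 + ts))

# trapezium rule along the temperature path
approx = np.sum((e_loglik[1:] + e_loglik[:-1]) / 2 * np.diff(ts))
exact = -0.5 * np.log(4 * np.pi) - y**2 / 4   # log N(y; 0, 2)
print(approx, exact)
```

With 101 grid points the trapezium-rule error here is far below the Monte Carlo noise one would see in practice, which is why the choice between $(\dagger)$ and the path sampling approximation usually comes down to estimator variance rather than quadrature error.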
As before, we expand as a telescopic product, where the $i$th term is now

$\displaystyle \frac{z_{i+1}}{z_i} = E_i\left[\exp\{-(\gamma_{i+1}-\gamma_i)(x^2-1)^2\}\right].$

A Monte Carlo estimate of each of these terms is formed using the samples from the $i$th chain, and the logs of these are then summed to give $\log(z_N/z_0)$. A complete R script to run the Metropolis coupled sampler and compute the evidence is given below.

```r
U=function(gam,x) {
  gam*(x*x-1)*(x*x-1)
}

temps=2^(0:3)
iters=1e5

chains=function(pot=U, tune=0.1, init=1) {
  x=rep(init,length(temps))
  xmat=matrix(0,iters,length(temps))
  for (i in 1:iters) {
    # within-chain Metropolis update at each temperature
    can=x+rnorm(length(temps),0,tune)
    logA=unlist(Map(pot,temps,x))-unlist(Map(pot,temps,can))
    accept=(log(runif(length(temps)))<logA)
    x[accept]=can[accept]
    # propose a swap between a random pair of chains
    swap=sample(1:length(temps),2)
    logA=pot(temps[swap[1]],x[swap[1]])+pot(temps[swap[2]],x[swap[2]])-
      pot(temps[swap[1]],x[swap[2]])-pot(temps[swap[2]],x[swap[1]])
    if (log(runif(1))<logA) x[swap]=rev(x[swap])
    xmat[i,]=x
  }
  colnames(xmat)=paste("gamma=",temps,sep="")
  xmat
}

mat=chains()
mat=mat[,1:(length(temps)-1)]   # ratios use chains i = 0, ..., N-1
diffs=diff(temps)
mat=(mat*mat-1)^2
mat=-t(diffs*t(mat))            # -(gamma_{i+1}-gamma_i)*(x^2-1)^2 per sample
mat=exp(mat)
logEvidence=sum(log(colMeans(mat)))
message(paste("The log of the ratio of the last and first normalising constants is",
              logEvidence))
```

It turns out that these double well potential densities are tractable, and so the normalising constants can be computed exactly. So, with a little help from Wolfram Alpha, I compute the log of the ratio of the last and first normalising constants to be approximately −1.12. Hopefully the above script will output something a bit like that…
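The claimed value can be cross-checked by direct numerical integration of the two unnormalised densities (a Python sketch of essentially the same quadrature Wolfram Alpha performs):

```python
import numpy as np

x = np.linspace(-4.0, 4.0, 80001)   # the integrand is negligible beyond |x| = 4
dx = x[1] - x[0]

def log_z(gam):
    # log of the normalising constant of exp(-gam*(x^2 - 1)^2),
    # via the trapezium rule on a fine grid
    f = np.exp(-gam * (x * x - 1.0) ** 2)
    return np.log(np.sum((f[1:] + f[:-1]) / 2) * dx)

log_ratio = log_z(8.0) - log_z(1.0)   # temps = 2^0, ..., 2^3
print(log_ratio)                      # approximately -1.12
```

This agrees with the value quoted above, so a correctly mixing sampler should report something close to −1.12.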
http://repub.eur.nl/pub/32154
Dual distribution in franchising is addressed from an incomplete contracting perspective. We explicitly model cooperative (dual distribution) franchising as an organizational form, next to wholly-owned, wholly-franchised, and dual distribution franchise systems. Key conclusions of the model are: (1) dual distribution as an efficient governance mechanism does not depend on heterogeneous downstream outlets, and (2) whether dual distribution or some other organizational form is efficient depends on the size of the benefits to dual distribution relative to the parties' costs of investing.
https://www.the-cryosphere.net/14/1025/2020/
The Cryosphere, 14, 1025–1042, 2020
https://doi.org/10.5194/tc-14-1025-2020
Research article | 17 Mar 2020

# Quantifying iceberg calving fluxes with underwater noise

Oskar Glowacki1,2 and Grant B. Deane1

• 1Marine Physical Laboratory, Scripps Institution of Oceanography, La Jolla, California, USA
• 2Institute of Geophysics, Polish Academy of Sciences, Warsaw, Poland

Correspondence: Oskar Glowacki ([email protected])

Abstract

Accurate estimates of calving fluxes are essential in understanding small-scale glacier dynamics and quantifying the contribution of marine-terminating glaciers to both eustatic sea-level rise (SLR) and the freshwater budget of polar regions. Here we investigate the application of acoustical oceanography to measure calving flux using the underwater sounds of iceberg–water impact. A combination of time-lapse photography and passive acoustics is used to determine the relationship between the mass and impact noise of 169 icebergs generated by subaerial calving events from Hansbreen, Svalbard. The analysis includes three major factors affecting the observed noise: (1) time dependency of the thermohaline structure, (2) variability in the ocean depth along the waveguide and (3) reflection of impact noise from the glacier terminus. A correlation of 0.76 is found between the (log-transformed) kinetic energy of the falling iceberg and the corresponding measured acoustic energy corrected for these three factors. An error-in-variables linear regression is applied to estimate the coefficients of this relationship. Energy conversion coefficients for non-transformed variables are $8 \times 10^{-7}$ and 0.92, respectively, for the multiplication factor and exponent of the power law.
This simple model can be used to measure solid ice discharge from Hansbreen. Uncertainty in the estimate is a function of the number of calving events observed; 50 % uncertainty is expected for eight blocks, dropping to 20 % and 10 %, respectively, for 40 and 135 calving events. It may be possible to lower these errors if the influence of different calving styles on the received noise spectra can be determined.

1 Introduction

## 1.1 The role of iceberg calving in glacier retreat and sea-level rise

The contribution of glaciers and ice sheets to the eustatic sea-level rise (SLR) between 2003 and 2008 has been estimated to be 1.51±0.16 mm of sea-level equivalent per year (Gardner et al., 2013). Cryogenic freshwater sources were responsible for approximately 61±19 % of the total SLR observed in the same period. Iceberg calving, defined as mechanical loss of ice from the edges of glaciers and ice shelves (Benn et al., 2007), is thought to be one of the most important components of the total ice loss. For example, solid ice discharge accounts for around 32 % to 40 % of the mass loss from the Greenland ice sheet (Enderlin et al., 2014; van den Broeke et al., 2016), and iceberg calving in Patagonia dominates glacial retreat (Schaefer et al., 2015). On the other hand, several studies found that increased submarine melting is a major factor responsible for the observed rapid retreat of tidewater glaciers (e.g., Straneo and Heimbach, 2013; Luckman et al., 2015; Holmes et al., 2019). The exact partitioning between ice mass loss caused by calving fluxes, submarine melting and surface runoff changes geographically and needs to be measured separately at each location. Calving from tidewater glaciers is driven by different mechanisms, including buoyant instability, longitudinal stretching and terminus undercutting (van der Veen, 2002; Benn et al., 2007).
Terminus undercutting results from submarine melting and is often considered to be a major trigger of ice breakup at the glacier front (Bartholomaus et al., 2013; O'Leary and Christoffersen, 2013). In support of this idea, the solid ice discharge from tidewater glaciers was found to be highly correlated with ocean temperatures (Pętlicki et al., 2015; Luckman et al., 2015; Holmes et al., 2019), which are expected to increase significantly as a result of climate shifts (IPCC, 2013). Thus, accurate estimates of calving fluxes from marine-terminating glaciers are crucial to both understanding glacier dynamics and predicting their future contribution to SLR and the freshwater budget of the polar seas. Obtaining these estimates requires remote-sensing techniques, which enable the observation of dynamic glacial processes from a safe distance. Satellite imagery is an effective way to study large-scale, relatively slow changes at the ice–ocean interface, such as the disintegration of the 15 km long ice tongue from Jakobshavn Isbræ in 2003 in Greenland (Joughin et al., 2004). For fast-flowing ice masses, changes of terminus position caused by both calving and glacier flow must be clearly separated. Consequently, satellite imagery is more limited for observing calving events, which typically occur on sub-diurnal timescales and are often not greater than 1000 m3 in volume for most tidewater glaciers in Svalbard or Alaska (e.g., Chapuis and Tetzlaff, 2014). Moreover, a thick layer of clouds, fog or precipitation in the form of snow and rain often makes it difficult to track iceberg calving continuously using optical techniques, such as surface photography or terrestrial laser scanning. These difficulties provide the motivation for investigating the use of underwater noise to quantify calving fluxes.

## 1.2 Measuring ice discharge – tools and methods

Many different methods have been developed to measure ice discharge from marine-terminating glaciers.
Passive glacier seismology, also called “cryoseismology” (Podolskiy and Walter, 2016), is probably one of the most mature, widespread and useful tools; broadband seismometers have been widely installed in remote areas near calving glaciers since the early studies performed by Hatherton and Evison (1962) and Qamar and St. Lawrence (1983). Seismic signals associated with subaerial calving originate from two main mechanisms: (1) the free fall of ice blocks onto the sea surface (Bartholomaus et al., 2012) and (2) interactions between detaching icebergs and their glacier terminus (e.g., Ekström et al., 2003; Murray et al., 2015). The latter interactions, also known as “glacial earthquakes”, are caused by large, cubic-kilometer-scale icebergs of full-glacier height, and the resulting seismic magnitude is not related to the iceberg volume in a simple manner (Sergeant et al., 2016). Higher-frequency (>1 Hz) calving seismicity from iceberg–ocean interactions, constantly detected by distant seismic networks (e.g., O'Neel et al., 2010; Köhler et al., 2015), usually peaks between 1 and 10 Hz (Bartholomaus et al., 2015; Köhler et al., 2015). Both frequency content and amplitudes of high-frequency signatures are found to be independent of iceberg volumes (O'Neel and Pfeffer, 2007; Walter et al., 2012). Bartholomaus et al. (2015) applied generalized linear models to correlate various properties of seismic signals originating at Yahtse Glacier, Alaska, with estimates of iceberg sizes divided into seven classes. In line with previous findings by Qamar (1988), they identified ice quake duration as the most significant predictor of iceberg volume. Based on these studies, Köhler et al. (2016, 2019) successfully reconstructed a record of total frontal ablation at Kronebreen, Svalbard, using seismic data calibrated with satellite images and lidar volume measurements. Recently, Minowa et al. 
(2018, 2019) demonstrated the potential of using surface waves generated by falling icebergs to quantify calving flux. They found a strong correlation between calving volumes estimated from time-lapse camera images and the maximum amplitudes of the waves. Other methods for quantifying ice discharge from marine-terminating glaciers, including surface photography (e.g., How et al., 2019), terrestrial laser scanning (e.g., Pętlicki and Kinnard, 2016), ground-based radar imaging (e.g., Chapuis et al., 2010) or terrestrial radar interferometry (e.g., Walter et al., 2019), are usually used for short-term measurements.

## 1.3 Studying iceberg calving with underwater noise

The approach investigated here is an example of acoustical oceanography, which extracts environmental information from the underwater noise field (Clay and Medwin, 1977). Acoustical oceanography may offer some advantages over other, more well-developed methods for the study of the interactions between land-based ice and the ocean. Low-cost hydrophones are easily deployed in front of marine-terminating glaciers, and acoustic data can be gathered continuously for several months or longer with a high (>10 000 Hz) sampling rate and low maintenance. Measurements are insensitive to lighting conditions such as fog; cloud coverage; and the polar night, humidity and intensity of precipitation. Moreover, acoustic signals recorded in glacial bays and fjords also contain signatures of ice melt associated with impulsive bubble release events (Urick, 1971; Tegowski et al., 2011; Deane et al., 2014; Pettit et al., 2015; Glowacki et al., 2018). While currently no quantitative models exist to estimate melt rates from underwater noise, the potential idea to simultaneously measure submarine melting and calving, two major processes acting at the glacier–ocean interface, is worth mentioning. Quantifying iceberg calving by "listening to glaciers" was first proposed by Schulz et al.
(2008), who suggested long-term deployments of hydrophones (underwater microphones) and pressure gauges, in addition to more traditional measurements of water temperature and salinity, to study signals of ice discharge together with accompanying hydrographic and wave conditions. Following this novel idea, independent studies conducted in Svalbard (Tegowski et al., 2012) and Alaska (Pettit, 2012) showed the first waveforms and spectra of the sounds generated by impacting ice blocks. Pettit (2012) provided an explanation for individual components of the signal, including low-frequency onset, pre-calving activity, mid-frequency block impact, iceberg oscillations, and mini-tsunami and seiche action. Encouraged by these initial results, Glowacki et al. (2015) analyzed 10 subaerial and 2 submarine calving events identified in both acoustic recordings and time-lapse photography made in front of Hansbreen, Svalbard. A spectral analysis of three different calving types, called "typical subaerial", "sliding subaerial" and "submarine" (see supplementary videos in Glowacki et al., 2015), showed that they radiated underwater noise in distinct spectral and temporal patterns, but all with a spectral peak between 10 and 200 Hz. Most importantly, acoustic emission below 200 Hz was highly correlated with block impact energy in a simple model. The dimensionless coefficient converting impact energy to acoustic energy at the calving impact point was found to be $5.16 \times 10^{-10}$, and the power exponent was assumed to be 1. However, this earlier analysis was limited by the small number of subaerial calving events analyzed (10), lack of a full error analysis and the unrealistic assumption of simple cylindrical spreading of acoustic waves in the water column. To address these issues, we conducted a new study covering a total number of 169 subaerial calving events observed with time-lapse photography at Hansbreen, Svalbard.
Impact energies generated by falling icebergs are estimated with error bars and related to received acoustic signals. The total noise energy resulting from block–water impact is calculated using a standard sound propagation model, Bellhop (Porter, 1987, 2011), which requires bathymetry data and sound speed profiles as inputs. Variability in transmission losses associated with sound wave reflections from an idealized, flat glacier terminus is also accounted for. The analysis shows that impact energy is strongly correlated with acoustic emission below 100 Hz. We present a new energy conversion efficiency calculated with this more detailed physical model and demonstrate how cumulative values of kinetic energy and ice mass loss can be found by integrating impact noise over a specified number of subaerial calving events.

2 Study area

## 2.1 General setting

Hansbreen is a retreating, grounded, polythermal tidewater glacier terminating in Hornsund fjord, Svalbard (Fig. 1). It covers an area of around 54 km2 and is more than 15 km long (Błaszczyk et al., 2013). The glacier has a 1.5 km-wide active calving front with an average height of around 30 m (Błaszczyk et al., 2009). The mean thickness and total volume of Hansbreen are estimated to be 171 m and 9.6±0.1 km3, respectively (Grabiec et al., 2012). The surface flow of the glacier is dominated by basal motion in the ablation area (Vieli et al., 2004), and the mean annual flow velocity near the terminus and the calving flux are estimated to be 150 m yr−1 and 38.1×10⁶ m3 yr−1, respectively (Błaszczyk et al., 2009). The average retreat rate of the glacier during 2005–2010, 44 m yr−1, was more than twice the rate observed between 1900 and 2010 (Grabiec et al., 2012). These characteristics are representative of Svalbard's tidewater glaciers, making the bay of Hansbreen a good study site.

Figure 1. A map of the study site (a) and representative cropped time-lapse image taken by Cam 1 (b).
(a) Locations of time-lapse cameras, acoustic buoys, calving events and CTD casts are marked with white, black, yellow and red dots, respectively. Colored, dashed lines show transects of CTD surveys oriented perpendicular (red) and parallel (blue) to the glacier terminus. Black dashed lines show the spatial arrangement of bathymetry profiles, which we used to model noise transmission losses. Landsat 8 satellite data collected on 27 August 2016, courtesy of the US Geological Survey, Department of the Interior. Bathymetric data provided by the Norwegian Hydrographic Service under the permit no. 13/G722, issued by the Institute of Geophysics, Polish Academy of Sciences.

Both glacial behavior and the propagation of sound are sensitive to temporal variability in the thermohaline structure of water masses in the bay (Pętlicki et al., 2015; Glowacki et al., 2016). The calving activity of Hansbreen is largely controlled by melt-driven undercutting of the ice cliff (Pętlicki et al., 2015). The water temperature and salinity in the center of the bay ranged from −1.8 °C to more than 2.0 °C and from 30 PSU to almost 35 PSU during 2015 and 2016 (Moskalik et al., 2018). Significant wave height observed in the study site reached a maximum value of around 1.5 m over the period of August–November 2015 (Herman et al., 2019). A geomorphological map of the bay reveals complicated structures in the seabed created by dynamic glacial processes acting after the Little Ice Age, including terminal moraines, flat areas and iceberg-generated pits, to name a few (Ćwiąkała et al., 2018). The water depth along a transect parallel to the glacier terminus ranges from less than 20 to almost 90 m (see Fig. 5 in Moskalik et al., 2018).

## 2.2 Calving activity and sound propagation conditions

The main dataset consists of more than a thousand subaerial calving events observed between 30 July and 15 September 2016, with three time-lapse cameras and two acoustic buoys deployed in the glacial bay (Fig. 1).
At least 20 ice blocks calved each day. It was not always possible to unambiguously identify a calving event in both the image and acoustic datasets; the occurrence of more than one iceberg detachment between two consecutive images resulted in ambiguity in the acoustic data. Moreover, dense fog, rain or otherwise unfavorable lighting conditions would at times obscure the terminus. From the total calving inventory, a subset of N=169 events were unambiguously matched and analyzed (Figs. 1 and 2). The observer present in the field throughout the data collection phase reported that no anthropogenic sound sources were active during the occurrence of these calving events.

Figure 2. (a–d) Sound velocity profiles for CTD surveys oriented perpendicular (red) and parallel (blue) to the glacier terminus, together with (e) the corresponding frequency of calving occurrence. Locations of the CTD transects taken during the study period are shown in Fig. 1 with the same red and blue colors. Thick, dashed lines mark the dates of the CTD measurements. Blue numbers in the lower panel (e) provide the number of calving events assigned to each set of sound speed profiles.

Measurements of ocean temperature and salinity in the bay revealed upward-refracting sound speed profiles, with velocities changing from around 1440 m s−1 just below the surface to almost 1470 m s−1 close to the bottom (Fig. 2a–d). The sound speed gradient between the surface layer and deeper layers, which controls refraction and transmission loss, is driven by fresh meltwater and was clearly increasing during the study period. Moreover, significant differences in sound velocity profiles taken on the same day were also observed between different locations perpendicular and parallel to the glacier terminus, driven by a complex and three-dimensional distribution of the thermohaline field in the bay.
The ocean depth between the locations of calving events and the two acoustic buoys varied from 10 m on underwater sills to more than 80 m in the western part of the bay near the terminus (Fig. 3). The bathymetry profiles were very different for the two buoy locations, with a more variable depth observed in the case of the buoy deployed further from the glacier cliff.

Figure 3. Bathymetry profiles between the terminus of Hansbreen and the two acoustic buoys: A1 (a) and A2 (b). The spatial arrangement of the transects, which are numbered clockwise, is shown in Fig. 1. The horizontal axis is zeroed at the locations of the buoys, marked with black dots. Bathymetric data provided by the Norwegian Hydrographic Service under the permit no. 13/G722, issued by the Institute of Geophysics, Polish Academy of Sciences.

3 Methods and data analysis

The development of underwater acoustics as a new tool for quantifying calving fluxes requires thorough understanding of the causal relationship between the energy of the ice–water interaction and the resulting noise emission. In this section we discuss all steps that are necessary to complete this task. They are illustrated in Fig. 4 and described in detail in the following subsections. Firstly, a time-lapse camera is used to estimate iceberg dimensions and block impact energies (Sect. 3.1). Secondly, underwater noise from the iceberg–water impact is recorded at a safe distance from the glacier terminus and analyzed to find its amplitude–frequency characteristics (Sect. 3.2). Then, in order to calculate impact noise energy at the source, two factors have to be considered: (1) transmission loss in a waveguide, which depends on the distance to the buoy, sea bottom properties along the propagation path and variable thermohaline conditions (Sect. 3.3 and 3.4.1), and (2) the potential contribution of acoustic energy reflected from the underwater part of the glacier terminus to the received calving noise (Sect. 3.4.2).
Finally, a simple model relating impact noise energy to the kinetic energy of the falling ice block is proposed (Sect. 3.5). The parameters of this model are derived and investigated further in Sect. 4 to demonstrate a new method for quantifying calving fluxes from underwater noise recordings.

Figure 4. A scheme illustrating the application of passive underwater acoustics to measure iceberg calving fluxes. The study consists of (1) time-lapse observation of individual calving events; (2) estimation of ice mass loss and block–water impact energy based on the captured images; (3) recordings of underwater noise at a safe distance from the glacier terminus; and (4) calculation of impact noise energy for given thermohaline conditions, bathymetry along the transmission path and contribution of noise reflected from the ice cliff.

## 3.1 Photographic observation of calving events

Images of the Hansbreen terminus were taken every 15 min from three locations (“Cam 1–3” in Fig. 1) continuously between 30 July and 15 September 2016 using Canon EOS 1100D cameras (4272 pixel × 2848 pixel resolution and 18 mm focal length). The three cameras were not perfectly synchronized, which in fact enabled better separation of individual iceberg calving events occurring shortly after one another. Additionally, a GoPro Hero 3+ camera was placed closer to the terminus to take pictures of a narrow segment of the ice cliff (“GoPro” in Fig. 1). This camera took images at a much higher rate of 1 s−1 but was not always active during the deployment. Iceberg volume and drop height were estimated using images from Cam 1, which had the most perpendicular orientation to the glacier front of all the cameras. The irregular shape of the ice cliff provided registration features, which were identified in both Landsat 8 satellite images (with a resolution of 15 m) and the camera images, enabling precise localization of calving events. Following Minowa et al.
(2018), the volumes of the calved ice blocks are estimated from the area at the glacier terminus exposed by the calving event. Newly exposed areas are identified from differences between pairs of images taken by Cam 1 (see Sect. S1 in the Supplement for details). The newly exposed area in pixels squared, Aimg, is converted to its real value (in m2), Ac, using the formula

$$A_{\mathrm{c}}=\frac{A_{\mathrm{img}}\,d^{2}}{F^{2}}, \tag{1}$$

where d is the distance between the camera and the drop location and F is the camera focal length. The camera was oriented roughly perpendicular to the calving front (Fig. 1), but precise calculation of the exact angle was impossible due to the limited resolution of the satellite images and the large variability in the terminus shape over the study period. Nevertheless, this uncertainty was included in the error analysis (see Sect. 4.3 for details). Guided by previous reports on iceberg dimensions observed in Svalbard (Dowdeswell and Forsberg, 1992), we assumed that the thickness of a calved iceberg is proportional to the square root of the newly exposed area. The iceberg volume is then given by

$$V=C A_{\mathrm{c}}^{3/2}, \tag{2}$$

where C is a constant scaling factor, reported to be around 0.12 (Åström et al., 2014; Pętlicki and Kinnard, 2016). The drop height, h, is measured as the vertical distance between the sea surface and the midpoint of the falling ice block, converted from pixels to meters. Finally, the kinetic energy of the impacting ice block, Eimp, is given by

$$E_{\mathrm{imp}}=Mgh=V\rho_{\mathrm{i}}gh, \tag{3}$$

where ρi is the ice density, set to a constant 917 kg m−3, g = 9.81 m s−2 is the acceleration due to gravity and M is the iceberg mass. Equation (3) for Eimp is based on the assumption that no energy is dissipated during the free fall of the iceberg.
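Equations (1)–(3) chain together into a short computation; a minimal sketch (the numerical values in the example are illustrative, not taken from the calving inventory, and Aimg and F must be expressed in consistent sensor-plane units):

```python
RHO_ICE = 917.0   # ice density (kg m-3)
G = 9.81          # gravitational acceleration (m s-2)

def exposed_area_m2(a_img, d_m, f):
    """Eq. (1): convert exposed area to m^2.

    a_img and f must share sensor-plane units (e.g., both in pixels,
    with the focal length converted via the pixel pitch)."""
    return a_img * d_m**2 / f**2

def iceberg_volume_m3(a_c_m2, c_scale=0.12):
    """Eq. (2): volume from exposed area, thickness ~ sqrt(area)."""
    return c_scale * a_c_m2**1.5

def impact_energy_j(volume_m3, drop_height_m):
    """Eq. (3): kinetic energy of the block at the sea surface."""
    return volume_m3 * RHO_ICE * G * drop_height_m

# Illustrative event using the inventory means from Sect. 4.1:
# 1590 m^2 exposed area and an 18.3 m drop height.
v = iceberg_volume_m3(1590.0)
e = impact_energy_j(v, 18.3)
```

As in the text, the result is an upper bound: no dissipation during free fall is included.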
In reality, energy is dissipated through various physical mechanisms, such as friction between the ice block and the glacier terminus, momentum transfer at the early stage of water entry, drag during the immersion phase, and block disintegration, which can happen at different stages of calving. However, the details of these hydrodynamic processes lie beyond the scope of this work. Because they are not included, Eq. (3) provides an upper bound on the total amount of energy available for noise production during the block–water interaction.

## 3.2 Impact noise recordings and analysis

The acoustic data were recorded continuously between 30 July and 15 September 2016 using two HTI-96-MIN omnidirectional hydrophones deployed at depths of 40 and 22 m, respectively, in front of Hansbreen (“A1” and “A2” in Fig. 1). The hydrophones have a sensitivity of −164 dB re 1 V µPa−1 and were sampled at a rate of 32 kHz with a resolution of 16 bit. A single mooring system consisted of an anchor, a short line and an acoustic buoy with a hydrophone, powered by D-cell lithium batteries. Acoustic data were stored on SD cards. The moorings were recovered in their entirety by divers. The horizontal distance between the moorings and the locations of calving events ranged from 700 to 1500 m for the closer buoy and from 1800 to 2100 m for the more distant buoy. The sound produced by calving events was identified manually, based on timing determined from the time-lapse cameras and deviations from the median sound level at frequencies below 200 Hz (see Sect. S2 in the Supplement for details). Power spectral density estimates were calculated for each calving event using the Welch method with a 16 384-point fast Fourier transform, a Hamming window of the same size and a 50 % segment overlap to investigate the noise spectra (see Fig. 6).
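The Welch estimate averages windowed periodograms over overlapping segments. A minimal pure-Python sketch (using a naive DFT and a short segment length for illustration only; the study used a 16 384-point FFT on the 32 kHz records):

```python
import cmath
import math

def welch_psd(x, fs, nperseg=64):
    """Welch power spectral density: Hamming window, 50 % overlap.

    Naive O(n^2) DFT per segment, for illustration only.
    Returns the one-sided PSD for bins 0 .. nperseg/2, where
    bin k corresponds to frequency k * fs / nperseg.
    """
    win = [0.54 - 0.46 * math.cos(2 * math.pi * n / (nperseg - 1))
           for n in range(nperseg)]
    norm = fs * sum(w * w for w in win)  # window power normalization
    hop = nperseg // 2                   # 50 % segment overlap
    psd = [0.0] * (nperseg // 2 + 1)
    nseg = 0
    for start in range(0, len(x) - nperseg + 1, hop):
        seg = [s * w for s, w in zip(x[start:start + nperseg], win)]
        for k in range(nperseg // 2 + 1):
            X = sum(s * cmath.exp(-2j * math.pi * k * n / nperseg)
                    for n, s in enumerate(seg))
            one_sided = 1.0 if k in (0, nperseg // 2) else 2.0
            psd[k] += one_sided * abs(X) ** 2 / norm
        nseg += 1
    return [p / nseg for p in psd]
```

In practice a library routine such as scipy.signal.welch with nperseg=16384 and a Hamming window reproduces the processing described above far more efficiently.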
The acoustic energy of the block–water impact at the buoy, Eac,obs, was subsequently calculated by low-pass filtering the noise record at fc and then integrating the mean-square pressure, $p_{\mathrm{low}}^{2}$, over the event duration:

$$E_{\mathrm{ac},\mathrm{obs}}=\frac{4\pi}{\rho_{\mathrm{w}}c}\int_{t_{\mathrm{start}}}^{t_{\mathrm{end}}}p_{\mathrm{low}}^{2}\,\mathrm{d}t. \tag{4}$$

The sound speed, c, and water density, ρw, in Eq. (4) were set to 1450 m s−1 and 1025 kg m−3, respectively. The factor of 4π accounts for the surface area of a unit sphere, over which the noise signal must be integrated to obtain the total noise energy in joules. The selection of the cutoff frequency of the filter (fc = 100 Hz) is discussed in Sect. 4.2. The background noise energy, Eac,bckg, for each event was computed analogously using noise segments of the same length as the corresponding calving signal, recorded just before the ice block impact.

Figure 5. Histograms of (a) distances between Cam 1 and locations of calving events, (b) drop heights, (c) exposed areas of the glacier terminus and (d) estimated iceberg volumes. (e) Distribution of iceberg volumes divided into 10 bins, presented on a log–log scale. The black line shows the best-fit power-law (decay exponent κ) distribution model. (f) Relationship between iceberg drop height and volume. The Pearson correlation coefficient is 0.47 and 0.55 for log-transformed and non-transformed variables, respectively.

Figure 6. (a–b) Spectrograms of the acoustic signal generated by the calving event recorded at A1 (a, c, e) and A2 (b, d, f), (c–d) corresponding time-averaged spectra of background (red) and calving (blue) noise, and (e–f) normalized power spectral densities for the entire calving inventory.
The calving event for which spectrograms and spectra are shown in panels (a)–(d) started on 30 August 2016 at 08:11:08 UTC. A difference of 10, 20 and 40 dB in Pxx corresponds, respectively, to a factor of 10, 100 and 10 000 in acoustic energy. Noise spectra were normalized using the maximum values of the calving signal for each event. Solid lines in (e)–(f) show median normalized spectra.

## 3.3 Hydrographic and bathymetric data

An overview of the temperature and salinity structure in the study site and its influence on the propagation of sound throughout the bay has been provided by Glowacki et al. (2016). In this study, temperature and salinity profiles were taken on 1, 12 and 30 August and 9 September 2016 with a SAIV SD208 CTD (conductivity, temperature and depth) probe at 11 points, located on transects perpendicular and parallel to the glacier terminus (red and blue dashed lines, respectively, in Fig. 1). Sound velocity was calculated from the CTD data according to the Chen and Millero formulae adopted by UNESCO (Chen and Millero, 1977). Each calving event has its own unique set of hydrographic and bathymetric data used for modeling sound propagation, determined in the following way. Firstly, a median sound speed profile was calculated from each set of profiles measured on the same day. Then, the closest median profile in time was assigned to each calving event according to the time of its occurrence. As a result, the four consecutive median sound speed profiles were assigned to 32, 46, 61 and 30 calving events, respectively (see Fig. 2). Additional CTD casts were taken in 2017 after significant recession of Hansbreen. These profiles provided information on bottom depths at 11 additional positions located near the 2016 position of the glacier terminus, which are not covered by the bathymetry data (0.1 m resolution) collected during multibeam surveys (Fig. 1).
We selected five bathymetry profiles, separately for the two acoustic buoys, that lie along straight lines between the mooring location and the CTD stations belonging to the transect closest to the ice cliff. Ocean depths in these sections were then interpolated onto a 1 m grid using shape-preserving, piecewise cubic interpolation (Fritsch and Carlson, 1980; Fig. 3). Although a high level of variability in the thermohaline structure is expected and detailed bathymetry data are lacking close to the glacier terminus, the uniquely assigned sound speed profiles and interpolated bathymetry are the best available approximation of the real conditions prevailing during the study period.

## 3.4 Attenuation of the calving noise in a glacial bay

### 3.4.1 Noise transmission loss

The underwater sound of a calving event must travel through the water column before reception at an acoustic buoy, typically several tens of water depths in range or more. Along its path, the signal undergoes multiple reflections from the sea surface and the sea floor and refracts because of changes in sound speed caused by the spatial and temporal variability in the thermohaline structure. These processes result in a significant loss of total signal energy and change the frequency spectrum of the noise observed at the receiver. These effects must be carefully modeled before the calving signature can be quantified in terms of ice block impact energy. Here we used the standard ray propagation model Bellhop to compute transmission losses, TLprop (Porter, 1987, 2011). The number of beams was set to 2000, with launching angles ranging from −80° to 80° with respect to the sea surface. Guided by previous geomorphological studies (Görlich, 1986; Staszek and Moskalik, 2015), we assumed that the dominant sediment type in the study area is a clayey silt; density, sound speed and attenuation were taken to be 1.4 g cm−3, 1530 m s−1 and 0.1 dB m−1 kHz−1, respectively (Hamilton, 1970, 1976).
The absorption of sound in seawater is negligible for the low frequencies considered here (e.g., Ainslie and McColm, 1998). Smoothing bathymetry and sound velocity profiles is highly recommended when using Bellhop to predict acoustic energy levels (Porter, 1987, 2011). The bathymetry profile for a selected calving event was spatially smoothed with a moving boxcar filter with a window size of 20λ, where $\lambda = c f^{-1}$ is the wavelength of sound at the frequency of interest. The median sound speed profile calculated from the set of profiles measured closest in time to the event occurrence was also spatially smoothed with a moving average over 5 m. A baseline (most probable) transmission loss was computed using the environmental data described above, assuming a source frequency of 50 Hz, which corresponds to the peak in the source spectrum (see Fig. 6), and a realistic source depth of 5 mm. The longest dimension of the calving icebergs is comparable to or greater than a wavelength over the impact noise frequencies, and all points distributed along the ice edge and its close vicinity are considered here to be incoherent noise sources. Accordingly, the incoherent mode of propagation in Bellhop was used to compute TLprop. Finally, to investigate possible variability in TLprop, the simulations were repeated at 100 Hz with the bathymetry-smoothing window changed to 10λ, the ocean depth set to the median water depth and the sound speed profile taken to be each of the four median profiles in turn.

### 3.4.2 Contribution from terminus-reflected noise

The Bellhop model does not easily account for sound reflected from the underwater part of the glacier terminus, which is potentially an important component of the total acoustic energy received at the buoy. The effect of the glacier terminus on the observed calving noise, TLrefl, is considered here.
Figure S3 in the Supplement illustrates the direct reflection of sound by the terminus, which is one possible propagation path, but there are more, such as a surface or bottom reflection followed by reflection by the terminus, and so on. All possible paths can be enumerated using a series of image sequences (Deane and Buckingham, 1993) and could, in principle, be investigated. However, we have simplified the problem by considering only energy reflected directly by the terminus, as shown in Fig. S4 (Supplement). The reasoning behind this simplification is twofold. Firstly, the geometry of the problem constrains sound reflected by the bottom followed by the terminus, and sound reflected by the surface between the source and terminus tends to be scattered by the surface waves and bubbles created by the iceberg impact. Secondly, the glacier terminus is rough, resulting in angle-dependent focusing and scattering. Given these complications, which lie beyond the scope of this paper, we have elected to consider only the effect of energy reflected directly from the terminus in comparison with the direct path from source to receiver. As we will show, the greatest effect from this path over the direct path is a 3 dB increase in sound energy and a typical effect is less than 1 dB. These levels are significantly less than the overall effect of the waveguide or inherent scatter in the intensity of sound generated by individual icebergs (see Fig. S5 in Supplement). Moreover, these estimates probably represent an upper bound because the irregular shape of the terminus will tend to scatter incident sound and decrease its contribution when reflected. The magnitude of sound reflected from the terminus was calculated using a wavenumber integration technique (see Eq. 4.3.2 in Brekhovskikh and Lysanov, 1982). 
The terminus surface was assumed to be perfectly flat, and the angle-dependent reflection coefficient was estimated using standard formulas for a fluid–solid interface (e.g., see Eq. 1.61 in Jensen et al., 2011). The compressional and shear wave velocities for the ice were taken to be 3840 and 1830 m s−1, respectively, consistent with those reported by Vogt et al. (2008) for bubble-free ice (a review of the literature failed to reveal sound speed values for bubbly ice below 100 Hz). A range of absorption coefficient values were considered in the analysis: from 0.1 to 1.0 dB λ−1 for longitudinal waves and from 0.2 to 2.0 dB λ−1 for shear waves (Rajan et al., 1993; Hobæk and Sagen, 2016). Figure S4a in the Supplement illustrates the relationship between the angle of incidence of incoming calving noise and the resulting ice reflection loss. Three regions can be identified in this figure: (1) up to 20°, the loss is controlled by the ice–water sound speed ratio and typically reaches a value of approximately 7.5 dB; (2) between 20° and 55°, high attenuation of acoustic energy exceeding 15 dB results mainly from absorption in the ice; and (3) at larger angles the glacier terminus reflects most of the noise energy back into the water. The analysis demonstrates that the ice reflection loss of calving noise depends greatly on the location of a calving event relative to the glacier–ocean boundary and the position of the acoustic buoy. Further analysis was performed using receiver ranges of 700 and 1500 m, which correspond to the terminus–receiver ranges for the experiment. The source frequency was set to the middle of the analysis band (50 Hz), and the source position was varied along the terminus at a fixed distance of 10 m to the ice cliff (Fig. S4b in the Supplement). Total energy at the receiver was calculated from the incoherent addition of the direct and terminus-reflected paths and compared with the direct path only. The results of this analysis are shown in Fig. S4c (Supplement).
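The incoherent addition of the two paths is a simple energy sum; a sketch of the resulting gain over the direct path alone:

```python
import math

def incoherent_gain_db(e_direct, e_reflected):
    """Increase (dB) in received energy when a terminus-reflected
    path adds incoherently to the direct path."""
    return 10.0 * math.log10((e_direct + e_reflected) / e_direct)
```

When the reflected path carries as much energy as the direct one, the gain is 10 log10 2 ≈ 3 dB, the worst case noted in the text; a reflected path 10 dB weaker than the direct path adds only about 0.4 dB.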
At a range of 1500 m, the maximum contribution of the ice-reflected path is always smaller than 1 dB because of the steep angles of incidence. At a closer distance of 700 m, the range of possible angles is extended and a maximum increase in received calving noise of around 3 dB can be expected as a worst-case scenario. Based on these findings, we assumed a typical contribution from ice reflection of TLrefl = 1 dB and a corresponding ±1 dB variation around this level.

## 3.5 Impact energy model

The impact energy model requires an estimate of the total sound energy radiated by a calving event, which can be calculated from

$$E_{\mathrm{ac},\mathrm{imp}}=\left(E_{\mathrm{ac},\mathrm{obs}}-E_{\mathrm{ac},\mathrm{bckg}}\right)10^{-\mathrm{TL}_{\mathrm{tot}}/10}, \tag{5}$$

where $\mathrm{TL}_{\mathrm{tot}}=\mathrm{TL}_{\mathrm{prop}}+\mathrm{TL}_{\mathrm{refl}}$ is the total energy loss in decibels (Clay and Medwin, 1977), which includes both the propagation loss computed from the Bellhop model and the contribution from energy reflected from the glacier terminus. The subtraction of Eac,bckg from the observed impact noise at the hydrophone, Eac,obs, removes background noise energy from the measurement. The factor containing TLtot transforms the corrected, observed energy into source energy at the impact location. A total loss of −10 dB, for example, corresponds to a decrease of 1 order of magnitude in received energy. Based on visual inspection of the scatterplot between Eimp and Eac,imp, we used a log–log transformation to improve linearity in this relationship. The same type of transformation was indicated by an application of the Box–Cox algorithm, which is often used to normalize regression variables (Box and Cox, 1964).
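Equations (4) and (5) reduce to a few lines of code; a minimal sketch (a crude rectangle-rule integral stands in for the study's processing chain, and the low-pass filtering at fc = 100 Hz is assumed to have been applied already):

```python
import math

RHO_W = 1025.0  # seawater density (kg m-3)
C_W = 1450.0    # sound speed (m s-1)

def received_energy_j(p_low, fs):
    """Eq. (4): integrate the mean-square pressure over the event.

    p_low: low-pass-filtered pressure samples (Pa),
    fs: sampling rate (Hz). Rectangle-rule time integral.
    """
    return 4.0 * math.pi / (RHO_W * C_W) * sum(p * p for p in p_low) / fs

def source_energy_j(e_obs, e_bckg, tl_tot_db):
    """Eq. (5): background correction and transmission-loss scaling."""
    return (e_obs - e_bckg) * 10.0 ** (-tl_tot_db / 10.0)
```

With TLtot = −50 dB, in the middle of the −47 to −57 dB range reported in Sect. 4.3.2, the received energy is scaled up by a factor of 10^5 to obtain the energy at the source.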
The linear model of conversion between the log-transformed energies is given by

$$\ln \hat{E}_{\mathrm{ac},\mathrm{imp}}=a+b\,\ln E_{\mathrm{imp}}. \tag{6}$$

With $a=\ln \eta$ and noting that $b\,\ln E_{\mathrm{imp}}=\ln E_{\mathrm{imp}}^{b}$, the power-law relationship takes the final form

$$\hat{E}_{\mathrm{ac},\mathrm{imp}}=\eta E_{\mathrm{imp}}^{b}. \tag{7}$$

The coefficients a and b could easily be derived from an ordinary least-squares linear regression using the log-transformed energies as variables. However, both Eimp and Eac,imp have associated uncertainties, which should be accounted for in the analysis. Therefore, to address this issue, we used the unified equations for slope, intercept and associated standard errors proposed by York et al. (2004). This model belongs to the family of errors-in-variables regression models, which include all uncertainties and always give an answer that is symmetric with respect to the choice of dependent and independent variables. Finally, to exclude outliers from the analysis, we identified all points for which the uncertainty in acoustic energy calculated with Eq. (5) is not within 2 standard deviations of the modeled impact noise energy.

# 4 Results and discussion

This section integrates the acoustic and photographic observations of calving events into a power-law model that quantifies ice mass loss from the noise energy generated by iceberg impact onto the ocean. The model formulation begins with a discussion of the statistics of iceberg volume and drop height estimated from the time-lapse images, leading to estimates of the block impact kinetic energy (Sect. 4.1).
This is followed by an analysis of the acoustic emission from ice block impacts in terms of its amplitude–frequency characteristics, resulting in an estimate of the total underwater noise energy generated by a calving event (Sect. 4.2). The next section (Sect. 4.3) provides an error analysis of these key variables in terms of uncertainty in measurements of the environment, such as bathymetry and thermohaline structure. The power-law model relating Eimp and Eac,imp is presented and discussed in Sect. 4.4. Finally, based on this relationship, a new methodology is suggested for quantifying the calving flux from the underwater noise of iceberg–water impact (Sect. 4.5).

## 4.1 The statistics of iceberg volume and drop height

A total of 169 subaerial calving events were captured by the time-lapse cameras and unambiguously identified with acoustic events (see Sect. 2.2). Individual detachments were unevenly distributed along the active part of the Hansbreen terminus (Fig. 1). The distance to the camera, drop height, exposed terminus area and estimated block volume of the calving inventory are summarized in Fig. 5. The distance between Cam 1 and the locations of block–water impacts varies from 1700 to 2150 m, with an average of 1880 m (Fig. 5a). The drop height spans 8 to 32 m, with a mean value of $\overline{h} = 18.3$ m (Fig. 5b). The newly exposed area of the ice cliff surface ranges from 125 to 5850 m2, with an average of 1590 m2 (Fig. 5c). Iceberg volumes were estimated from Ac using Eq. (2) and vary from 0.2×103 to 53.7×103 m3. The volume distribution is weighted toward smaller calving events, and approximately 90 % of the ice blocks have a volume of less than 20×103 m3 (Fig. 5d). This observation is consistent with previous reports on the power-law distribution of iceberg sizes in Svalbard (Chapuis and Tetzlaff, 2014), Alaska (Neuhaus et al., 2019), Greenland (Sulak et al., 2017) and Antarctica (Tournadre et al., 2016).
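The decay exponent of such a size distribution can be estimated with a least-squares fit to log-transformed bin counts, as done for Fig. 5e; a minimal sketch (the input data here are synthetic and follow an exact power law, for illustration only):

```python
import math

def loglog_slope(x, y):
    """Ordinary least-squares slope of ln(y) against ln(x)."""
    lx = [math.log(v) for v in x]
    ly = [math.log(v) for v in y]
    n = len(lx)
    mx, my = sum(lx) / n, sum(ly) / n
    num = sum((a - mx) * (b - my) for a, b in zip(lx, ly))
    den = sum((a - mx) ** 2 for a in lx)
    return num / den

# Synthetic bin counts following an exact power law n(V) ~ V^-1.5
volumes = [200.0 * 2.0 ** k for k in range(10)]   # bin centers (m^3)
counts = [v ** -1.5 for v in volumes]
kappa = -loglog_slope(volumes, counts)            # decay exponent
```

For real bin counts the points scatter around the power law, so the fitted exponent carries an uncertainty that grows when some size ranges are underpopulated.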
A least-squares fit of the power-law distribution of iceberg volumes was made using log-transformed variables. The best-fit decay exponent of 1.48 (Fig. 5e) found for the present dataset lies between the exponent of 1.69 for Kronebreen, Svalbard, reported by Chapuis and Tetzlaff (2014), and 0.85 for Perito Moreno Glacier, Patagonia, reported by Minowa et al. (2018). However, we note that some size ranges can be under- or overrepresented due to the limited number of unambiguously matched calving events (169). Ice block volume versus drop height is shown in Fig. 5f. The highest iceberg volumes are observed for h within the range of 17 to 26 m, which corresponds well to the middle heights of the glacier terminus at the locations of calving events. Inspection of Fig. 5f shows that ice block volume is correlated with drop height; Pearson's correlation coefficient is found to be 0.47 and 0.55, respectively, for log-transformed and non-transformed variables. This is not altogether surprising because the largest blocks of ice cannot fall from the bottom of the terminus, whereas the smaller blocks of ice are not so constrained. The correlation between drop height and iceberg mass is a source of bias in the relationship between ice block volume and impact energy and must be accounted for when inverting acoustic recordings of impact noise for ice mass loss. This issue is discussed in detail in Sect. 4.5.

## 4.2 The generation of underwater sound by iceberg calving

Figure 6 shows a comparison between power spectral density estimates for underwater noise from calving and background noise recorded by buoys A1 and A2. Spectrograms of the noise generated by a randomly selected calving event are shown in Fig. 6a and b. The computed difference in time of arrival between the two receivers was subtracted from the more distant receiver for better juxtaposition. The two primary sources of sound in the spectrograms are ice melt noise and the underwater noise of calving.
The signal of ice melt, driven by impulsive bubble release (Urick, 1971), is most pronounced between 1 and 3 kHz and corresponds well to the spectral bands reported in previous studies (Deane et al., 2014; Pettit et al., 2015). This signal remains stable during the short observation period. The underwater noise of calving is a by-product of the interaction of the falling iceberg with the ocean. The noise is evident from 2 to 8 s in the recording at frequencies below 1 kHz. The acoustic intensity varies in both time and frequency. This variability is almost certainly driven by different noise production mechanisms active at different phases of the calving event (see the high variability in power level between 2 and 4 s, for example). As pointed out by Bartholomaus et al. (2012), low-frequency seismic signals from the impact of ice blocks on the sea surface are generated by three major mechanisms: (1) the transfer of momentum from the falling block to seawater, (2) iceberg deceleration due to buoyancy, and (3) the collapse of an underwater air cavity and subsequent emergence of Worthington jets (e.g., Gekle and Gordillo, 2010). The last mechanism is only possible during total submergence of the ice block, the occurrence of which depends mainly on iceberg dimensions and drop height. Therefore, some calving events may not result in the creation of an air cavity. Moreover, falling icebergs are often fragmented or impact the water at various angles, which certainly modifies all three mechanisms of noise production. The influence of calving style on sound emission lies beyond the scope of this work but is likely a significant factor in the variability in sound generation by blocks of similar mass and drop height, as discussed in Sect. 4.4. The unique patterns in the time and frequency distribution of calving noise potentially contain information about the details of the calving event. 
However, attention here is restricted to a single number, which is the time and frequency integrated energy in the sound field generated by the iceberg impact. Calculation of this number requires selection of the start and stop times of the impact noise and the frequency band over which the noise exceeds background sound levels. The significant increase in noise power accompanying calving allows easy identification of event start and stop times, and these have been selected manually for each event analyzed (see Sect. S2 in the Supplement). Figure 6c and d show a 6 s average of noise power spectral density for a calving signal (blue) and background noise recorded just before the event (red). There is a difference between the calving and background noise levels at frequencies up to 700 and 400 Hz for buoys A1 and A2, respectively. The maximum increase in received noise power from calving is approximately 40 dB for both buoys, which corresponds to a factor of 10 000 in acoustic power. The results in Fig. 6 show that the appropriate band of frequencies to consider for calving impact noise ends at around 1 kHz. However, an upper frequency limit of 100 Hz was applied in further analysis to yield the highest correlation between the impact energy and the received acoustic energy. The variability in calving noise power across the entire dataset is shown in Fig. 6e and f. The normalized power spectral densities of calving events and background noise are plotted as blue and red dots, respectively. A normalization factor is chosen for each calving event and taken to be the highest power level in decibels during the event. The same normalization factor is used for both calving and background noise. Calving signatures are clearly distinguishable from the background noise across the entire dataset. However, the calving noise power is noticeably more variable at receiver A2 than A1. There are two possible reasons for this discrepancy. 
Firstly, spatial dependency of the thermohaline structure is expected to be significant along the longer propagation path to A2. Secondly, the signal-to-noise ratio at receiver A2 is lower and more variable than at A1, as a result of the shallower depth of the hydrophone (22 m at A2 versus 40 m at A1) and greater exposure to noise coming from outside the bay (see Sect. S4c in the Supplement for more details). The increased scatter in calving noise observed at location A2 resulted in a decrease in correlation between total impact energy and impact noise (see Table S1 in the Supplement), and data from this buoy are not considered further.

## 4.3 Details of error analysis

There are two sources of uncertainty for the block–water impact energy and impact noise energy: measurement error and uncertainty in the state of the changeable environment, which is impossible to characterize completely. Estimates of these uncertainties can be made for the various stages of the analysis connecting impact noise to ice mass loss, and these are discussed below.

### 4.3.1 Uncertainty in block–water impact energy

Assumptions and approximations need to be made when determining the kinetic energy of the falling ice block from time-lapse images. Uncertainties in estimates of the block–water impact energy result mainly from the conversion of the exposed area at the glacier terminus into ice block volume (see Sect. 3.1). Moreover, additional errors are associated with the details of the image analysis, related to the spatial resolution of the time-lapse photography (∼80–100 pixels per terminus height) and imprecise determination of the locations of calving events. The total uncertainty in kinetic energy is difficult to estimate accurately due to several factors, including but not limited to (1) the irregular shapes of the icebergs, (2) poorly understood site-to-site variability in the scaling factor C, and (3) the space- and time-varying orientation of the glacier terminus with respect to the camera.
However, following Minowa et al. (2018), we assume that the errors in Aimg, d, C and h are not larger than 10 %, 5 %, 20 % and 5 %, respectively. The resulting uncertainty in Ac, computed with Eq. (1), is 14 %. Then, since the uncertainties in the estimates of ice volumes and drop heights are dependent, the total error bound in the kinetic energy of the impacting ice block is estimated to be approximately 33 %.

### 4.3.2 Errors in calving-generated acoustic energy

Uncertainties in estimates of the iceberg impact noise result from three major sources: (1) spatial and temporal variability in the thermohaline structure in the glacial bay (see Fig. 2a–d), (2) the complicated bathymetry along the propagation path, which depends on the location of the calving event (see Fig. 3), and (3) the angular and frequency dependence of sound reflection from the underwater part of the glacier terminus (see Fig. S4 in the Supplement). Considering both transmission and reflection losses, the total loss of acoustic energy generated by the block–water interaction, TLtot, ranges from −47 to −57 dB (see Fig. S5 in the Supplement), corresponding to factors of 10−5 and 10−6 in acoustic energy at the source across the entire inventory of calving events. We combined the variability in transmission and ice reflection losses for the entire calving inventory to estimate a representative uncertainty of 33 % in the acoustic energy at the source for each individual calving event.

## 4.4 Relationship between the block–water impact and acoustic energy

Estimating calving ice mass flux from calving noise is based on the idea that these two quantities are correlated. Figure 7 shows a scatterplot of impact noise, Eac,imp, against impact kinetic energy, Eimp, for the entire dataset. The dashed black line shows the result of a regression analysis of the power-law relationship shown in the figure legend. The acoustic energy generated by a calving event was calculated from the acoustic pressure time series using Eqs.
(4) and (5) with manual selection of the integration time (see Sect. S2 in the Supplement) and after low-pass filtering at a cutoff frequency of 100 Hz (see Sect. 4.2). The kinetic energies of the falling ice blocks were derived from Eq. (3) using their masses and drop heights estimated from the camera data (see Sect. 4.1).

Figure 7. Relationship between the block–water impact energy and underwater acoustic emission below 100 Hz. Uncertainties are marked with blue whiskers and were estimated to be 33 % for both variables. The remaining scatter in impact energy is most likely caused by different calving styles and an associated variability in source mechanisms. The results with inclusion of outliers are shown in Fig. S6 in the Supplement (see text for details).

The range of energy estimates is large, roughly 2.5 orders of magnitude for both quantities, and there is clearly a strong correlation between the energies across their entire range. A correlation coefficient of $r=0.76$ was found between the log-transformed variables ($p<0.0001$). If uncorrected calving noise energy or signal duration is used instead of $E_\mathrm{ac,imp}$, the correlation drops to 0.71 or 0.61, respectively (Table S1 in the Supplement). After removing two outliers and applying an error-in-variables linear regression (see Sect. 3.5 for details), the best functional relationship between acoustic energy and impact energy was found to be the power law given by Eq. (7), with $\eta = 8\times10^{-7} \pm 60\,\%$ and $b = 0.92 \pm 3\,\%$ for the multiplication factor and exponent of the power law, respectively. For completeness, this analysis was repeated including the two identified outliers, and the results are shown in Fig. S6 (Supplement). Glowacki et al.
(2015) previously reported $\eta = 5.16\times10^{-10}$ and $b=1$, which gives an impact energy that is 2.5 orders of magnitude higher than the results presented here (see Fig. S7 in the Supplement). This discrepancy is due to the overly simplified propagation geometry assumed in the earlier study – simple cylindrical spreading loss and no sound reflection from the ice terminus – which resulted in an underestimate of the impact noise energy. The multiplication factor $\eta$ can be thought of as a conversion efficiency from the kinetic energy of a falling iceberg to impact noise energy. The small value of $\eta$ shows that only a tiny fraction of the ice block energy is transformed into underwater sound, which then propagates from the point of impact to the acoustic receiver. A low conversion efficiency is consistent with observations reported for other physical mechanisms of underwater noise generation. For example, only $\sim 10^{-8}$ of the energy dissipated by a breaking surface wave on the ocean is radiated as sound (Loewen and Melville, 1991). Similarly, the conversion efficiency from the impact energy of a 1–5 mm scale raindrop falling on the sea surface to underwater impact noise is in the range $10^{-9}$ to $10^{-8}$ (see Eq. 4.6 in Guo and Ffowcs Williams, 1991, and Gunn and Kinzer, 1949). Despite the strong correlation between impact energy and impact noise, there is also significant scatter in impact noise energy (roughly a factor of 10) for a given value of kinetic energy. This spread in values can only be partly explained by errors in the energy estimates, which are indicated by blue whiskers in Fig. 7. The scatter is presumably caused by differences in noise generation between individual calving events. The consequence is that estimating the impact energy of an individual calving event from the total noise energy it radiates is accompanied by significant uncertainty.
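A minimal sketch of a power-law fit of this kind, using synthetic data in place of the observed inventory: the paper uses an error-in-variables regression, whereas ordinary least squares on the log-transformed energies is shown here for simplicity, and all numerical values are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-ins for the 169 observed events (illustrative values only):
E_imp = 10 ** rng.uniform(6.0, 8.5, 169)                    # impact kinetic energy [J]
E_ac = 8e-7 * E_imp**0.92 * 10 ** rng.normal(0, 0.3, 169)   # impact noise energy [J]

# Fit E_ac = eta * E_imp**b as a straight line in log-log space:
b, log_eta = np.polyfit(np.log10(E_imp), np.log10(E_ac), 1)
eta = 10 ** log_eta
r = np.corrcoef(np.log10(E_imp), np.log10(E_ac))[0, 1]
print(f"eta = {eta:.1e}, b = {b:.2f}, r = {r:.2f}")
```

On real data, where both energies carry comparable uncertainty, ordinary least squares biases the slope low, which is why the error-in-variables estimator of Sect. 3.5 is preferred.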
However, because of the overall strong correlation between noise and impact kinetic energy, it is possible to predict the total impact energy summed over a finite number of calving events, provided the inventory is large enough. The uncertainty in individual events tends to average out if enough events are considered, as discussed in Sect. 4.5.

## 4.5 Estimation of ice mass loss from the calving noise

Figure 7 and Eq. (7) show that the relationship between iceberg impact energy and calving noise can be modeled robustly with a power law, providing a means of estimating impact energy from calving noise. Although there is significant variability in doing this on an event-by-event basis, low-error estimates of cumulative impact energy can be made using Eq. (7) if enough events are added together. Once found, the cumulative impact energy can be converted into an estimate of iceberg calving flux as follows. The cumulative modeled ice mass loss from $N$ observed calving events is related to the cumulative impact energy, as inferred from the acoustic signal, by

$$g\sum_{j=1}^{N} h_j \hat{M}_j = \sum_{j=1}^{N} \hat{E}_{\mathrm{imp},j},\qquad(8)$$

where $g$ is the acceleration due to gravity, $h_j$ is the height of the center of mass of the $j$th iceberg before separation from the glacier terminus, $\hat{M}_j$ is the mass of the $j$th iceberg determined from its underwater impact noise and $\hat{E}_{\mathrm{imp},j}$ is the kinetic energy of impact of the $j$th iceberg. The cumulative ice mass lost through calving would be trivial to compute from Eq. (8) if the mean iceberg drop height were independent of the iceberg mass, but this is not the case (see Fig. 5f).
Icebergs that extend a significant fraction of the exposed terminus height have a minimum drop height that is larger than the minimum drop height possible for smaller icebergs. For this (and possibly other) reasons there is a correlation between iceberg drop height and iceberg mass, the consequence of which is that $h_j$ cannot be moved outside the sum on the left-hand side of Eq. (8). The correlation is dealt with by introducing the mass-weighted drop height:

$$\hat{h} = \sum_{j=1}^{N} h_j \hat{M}_j \Big/ \sum_{j=1}^{N} \hat{M}_j.\qquad(9)$$

It follows immediately from Eqs. (8) and (9) that the cumulative mass sum is given by

$$\sum_{j=1}^{N} \hat{M}_j = \frac{1}{g\hat{h}} \sum_{j=1}^{N} \hat{E}_{\mathrm{imp},j},\qquad(10)$$

which provides a means of computing the calving flux, since the kinetic energy of an iceberg impact can be estimated from its underwater noise using Eq. (7). We are left with the problem of computing $\hat{h}$. To address this issue, a new variable $\hat{\alpha}$ is defined by

$$\hat{\alpha} = \hat{h}\,\bar{h}^{-1} = \sum_{j=1}^{N} h_j \hat{M}_j \Big/ \left(\frac{1}{N} \sum_{j=1}^{N} h_j \sum_{j=1}^{N} \hat{M}_j\right),\qquad(11)$$

where $\bar{h}$ is the observed average drop height.
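The mass-weighted drop height and the resulting cumulative mass (Eqs. 9–11) can be sketched numerically, with synthetic masses and drop heights standing in for the observed inventory (all values illustrative):

```python
import numpy as np

rng = np.random.default_rng(1)
g = 9.81  # acceleration due to gravity [m s^-2]

# Synthetic inventory: larger icebergs tend to have larger drop heights,
# mimicking the correlation seen in Fig. 5f (illustrative values only):
M = 10 ** rng.uniform(3, 6, 169)                    # iceberg masses [kg]
h = 10 + 3 * np.log10(M) + rng.normal(0, 2, 169)    # drop heights [m]
E_imp = M * g * h                                    # impact energies [J]

h_hat = np.sum(h * M) / np.sum(M)    # mass-weighted drop height, Eq. (9)
alpha = h_hat / h.mean()             # correction factor, Eq. (11)
M_cum = E_imp.sum() / (g * h_hat)    # cumulative mass, Eq. (10)
```

Because Eq. (9) weights drop heights by mass, Eq. (10) recovers the true cumulative mass exactly when the impact energies are error-free; in practice the $\hat{E}_{\mathrm{imp},j}$ come from the noisy acoustic estimate of Eq. (7), and the positive height-mass correlation makes $\hat{\alpha}>1$.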
Let us now assume that, for sufficiently large $N$, $\hat{\alpha}$ can be approximated by

$$\hat{\alpha} \approx \sum_{j=1}^{N} h_j M_j \Big/ \left(\frac{1}{N} \sum_{j=1}^{N} h_j \sum_{j=1}^{N} M_j\right).\qquad(12)$$

The right-hand side of Eq. (12) is in terms of iceberg mass inferred from the camera observations, providing a means of computing the mass-weighted drop height, $\hat{h} = \hat{\alpha}\bar{h}$, on a glacier-by-glacier basis. The constant $\hat{\alpha}$ and the resulting mass-weighted average drop height are estimated to be 1.13 and 20.7 m for Hansbreen. Equation (10) for the cumulative calving mass flux contains significant uncertainty when $N$ is small because of the large scatter in the total underwater sound energy generated by calving events with similar impact energies (see Fig. 7), but the uncertainty decreases as $N$ increases. How large must $N$ be to achieve a desired degree of uncertainty? To answer this question, a Monte Carlo simulation of cumulative ice mass loss was performed using $n$ calving events randomly selected (with replacement) from the entire inventory of calving observations (for which $N=169$). This selection is repeated $\psi_{\max}$ times ($\psi = \{1,\dots,\psi_{\max}\}$) for each $n = \{1,\dots,n_{\max}\}$, noting that the total number of possible sets of calving events (and associated cumulative kinetic energies and masses) is given by

$$C = \binom{n+N-1}{n} = \frac{(n+N-1)!}{n!\,(N-1)!}.\qquad(13)$$

From Eq.
(10), the cumulative mass sum for a given number of randomly selected calving events $n$ and iteration $\psi$ is

$$\sum_{i=1}^{n} \hat{M}_i^{(\psi)} = \frac{1}{g\hat{h}} \sum_{i=1}^{n} \hat{E}_{\mathrm{imp},i}^{(\psi)},\qquad(14)$$

where $\hat{h}$ is calculated from Eqs. (11) and (12) using the $N=169$ observed calving events. The modeled mass $\hat{M}_i^{(\psi)}$ in Eq. (14) corresponds to $\hat{M}_j$ in Eq. (10), where $1 \le j \le 169$. The inferred cumulative ice mass normalized by the cumulative ice mass measured with the camera is then given by

$$\beta_n^{(\psi)} = \sum_{i=1}^{n} \hat{M}_i^{(\psi)} \Big/ \sum_{i=1}^{n} M_i^{(\psi)},\qquad(15)$$

where $\beta$, for a specified $n$ and averaged over $\psi_{\max}$ iterations, can be expressed as

$$\bar{\beta}_n = \frac{1}{\psi_{\max}} \sum_{\psi=1}^{\psi_{\max}} \left(\sum_{i=1}^{n} \hat{M}_i^{(\psi)} \Big/ \sum_{i=1}^{n} M_i^{(\psi)}\right).\qquad(16)$$

We set $n_{\max}$ to 1000 and $\psi_{\max}$ to 10 000 to determine the statistical properties of $\beta$ over a broad range of sample sizes. We note that the probability of randomly obtaining the same set of calving events is vanishingly small for the chosen $\psi_{\max}$ ($C \gg 10\,000$ for $n \ge 3$; see Eq. 13). Figure 8 shows the mean, $\bar{\beta}_n$, and standard deviation, $\beta_{n,\mathrm{std}}$, of the statistical distributions of $\beta_n^{(\psi)}$ computed from the Monte Carlo simulation.
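The resampling procedure of Eqs. (14)–(16) can be sketched as below, again with a synthetic inventory in place of the observations; the 0.4 dex event-level scatter is an assumption chosen to mimic the spread visible in Fig. 7.

```python
import numpy as np

rng = np.random.default_rng(2)
N = 169

# Synthetic inventory: observed masses and acoustically modeled masses that
# scatter around them by a lognormal factor (illustrative values only):
M_obs = 10 ** rng.uniform(3, 6, N)
M_mod = M_obs * 10 ** rng.normal(0, 0.4, N)

def beta_distribution(n, psi_max=10_000):
    """Draw n events with replacement, psi_max times, and return the ratio of
    modeled to observed cumulative mass for each draw (Eq. 15)."""
    idx = rng.integers(0, N, size=(psi_max, n))
    return M_mod[idx].sum(axis=1) / M_obs[idx].sum(axis=1)

# The spread of the mass ratio narrows as more events are summed:
std_small = beta_distribution(10).std()
std_large = beta_distribution(200).std()
```

This reproduces the qualitative behavior of Fig. 8: the standard deviation of the cumulative-mass ratio shrinks as the number of summed events grows, which is what sets the minimum sample size for a target uncertainty.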
The correct and unbiased estimate of the mean ice mass flux ratio is $\bar{\beta}_n = 1$, which is indeed the asymptotic value reached for large $n$. As expected, the temporal resolution of the acoustic technique increases with increasing calving activity. The estimated cumulative mass is within 20 % and 10 % of the expected value when integrating over 40 and 135 ice blocks, respectively (Fig. 8). The number of calving events required for a specified level of uncertainty translates into an observational timescale that depends on the calving rate. For example, at Hansbreen an uncertainty in ice mass flux of about 20 % is expected when integrating over 2 d of acoustic measurements, corresponding to a calving rate of 20 icebergs per day. The time interval required for a specified level of uncertainty will vary between glaciers and over time. For example, some glaciers calve more than 10 ice blocks hourly (e.g., How et al., 2019), leading to a relatively short required observation interval.

Figure 8. The ratio between modeled and observed cumulative ice mass loss computed using the Monte Carlo method, with $n$ events randomly selected (with repetition) from the entire calving inventory. The selection procedure was repeated $\psi_{\max} = 10\,000$ times for each $n$. See text for details of the modeled ice mass loss calculation. The thick and thin solid lines, respectively, denote 1- and 2-standard-deviation boundaries of the distributions.

# 5 Long-term acoustic monitoring of calving fluxes

This study demonstrates a new methodology for measuring iceberg calving fluxes from underwater noise recordings taken in a glacial bay. However, a number of factors have to be considered before the method is adopted for long-term monitoring of ice mass loss, including data retrieval and storage, power supply, instrument clock drift, automatic detection of calving events, and potential site-to-site variability in the model parameters.
These aspects are briefly discussed below.

## 5.1 Data collection and automatic detection of calving events

We collected the underwater noise data continuously with two light moorings powered by D-cell lithium batteries. The moorings were later recovered by divers with inflatable lift bags. The relatively shallow water at locations A1 and A2 (<45 m) made diving possible. Acoustic monitoring of calving fluxes at deeper study sites will require the use of acoustic releases in order to retrieve the recording equipment remotely from a boat or ship. In fact, an example of calving signals recorded at location A2 shows that very shallow water prevents effective transmission of the calving noise and decreases the signal-to-noise ratio at low frequencies (see Fig. 6 and Sect. S4c in the Supplement). Further issues for consideration are data storage and clock drift. Acoustic data can be stored on SD cards, which are power-efficient and capacious. The internal clock drift during the recording period is expected to be no greater than 30 s per month, given that data acquisition systems typically use quartz oscillators that hold a stability of at least ±10 ppm. Low-cost acoustic recorders can be added to existing monitoring programs in which heavy moorings are used to study processes taking place at the ice–ocean interface, including, for example, water circulation, heat exchange or sediment transport (e.g., Straneo et al., 2019). Moreover, data collection can be made real time with a cabled or wireless link to shore, but we have not yet tested this possibility. A remaining problem is the automatic extraction of calving signals from long-term, continuous acoustic recordings. Calving events are clearly distinguishable from the noise of ice melting in spectrograms of the acoustic record at frequencies below 1 kHz (see Fig. 6a), which is promising in terms of automatic event detection.
However, it should be borne in mind that other low-frequency sound sources are active in a glacial bay. For example, the calving or disintegration of larger icebergs could be mistaken for glacier calving events (see Richardson et al., 2010). Moreover, distinguishing between subaerial and submarine calving can be difficult, as found during passive seismic surveys (see Köhler et al., 2019). These issues await further investigation.

## 5.2 Variability in the model parameters

In addition to a minimum sample size for a specified error requirement (see Sect. 4.5), there are five other parameters that must be known to compute reliable estimates of ice mass flux from calving noise: the mass-weighted average iceberg drop height, $\hat{h}$; the conversion coefficient from the newly exposed area to block volume, $C$; the conversion efficiency from impact to acoustic energy, $\eta$; the power-law exponent, $b$; and the transmission loss from the glacier terminus to the hydrophone position, $TL_\mathrm{prop}$. Errors in these parameters are important because they affect the uncertainty and temporal resolution of the acoustic measurements of calving fluxes (see Figs. 7 and 8). The question of how site-specific these parameters are lies beyond the scope of this work, and similar studies should be performed for different tidewater glaciers to obtain quantitative answers. Nevertheless, we briefly discuss here some techniques for measuring or modeling these parameters, along with the environmental factors driving variability between sites. Noise energy loss is usually calculated using a standard propagation model, such as the Bellhop model used here. Propagation models require sound speed and bathymetry profiles as inputs, making hydrographic and CTD surveys an essential component of the acoustic measurements of calving fluxes.
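The link between a modeled transmission loss in decibels and the energy correction applied to the received noise can be sketched as follows; the −47 to −57 dB range is the one quoted in Sect. 4.3.2, and the function name is illustrative:

```python
def energy_at_source(received_energy, tl_total_db):
    """Undo a total propagation/reflection loss expressed in dB (a negative
    number) to recover the acoustic energy radiated at the source."""
    return received_energy / 10 ** (tl_total_db / 10)

# A -50 dB total loss corresponds to a factor of 1e-5 in energy, so 1 mJ
# received implies about 100 J radiated at the point of impact:
E_source = energy_at_source(1e-3, -50.0)
print(f"source energy: {E_source:.1f} J")
```

Since the correction spans roughly an order of magnitude across the event inventory, an accurate per-event transmission loss from the propagation model matters more for individual events than for cumulative flux estimates.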
Although the thermohaline structure of a glacial bay is complex and three-dimensional (e.g., Jackson et al., 2014), patterns of temperature and salinity that are sufficiently characteristic of prevailing conditions in the bay can be identified from limited field measurements and used as propagation model inputs (Glowacki et al., 2016). Moreover, it may be possible to use signal duration instead of the corrected noise energy to estimate calving fluxes (see Sect. S5 in the Supplement) when no information on bathymetry and/or sound speed profiles is available. We anticipate a high uncertainty associated with the conversion coefficient from the newly exposed area to block volume, which likely varies between glaciers characterized by different surface velocities, thermal regimes, hydrology, terminus heights, etc. This parameter can be determined more accurately for a specific glacier using short-term lidar measurements or image analysis, e.g., structure from motion or stereo photography. These techniques can also provide an estimate of the mass-weighted average drop height. The value of $\hat{h}$ is expected to be close to one-half of the average terminus height (in its active part) because the size of ice blocks breaking off from the top or bottom part of the ice cliff is limited (see Fig. 5f). We hypothesize that the remaining two parameters, the energy conversion efficiency and the power-law exponent, are likely stable between glaciers of similar geometry and flow dynamics.

# 6 Concluding remarks

This study presents a new methodology for quantifying the calving flux from the underwater noise of iceberg–water impact. A total of 169 subaerial calving events observed at the terminus of Hansbreen, Svalbard, have been analyzed.
The methodology is based on a robust ($r=0.76$) power-law relationship between the ice block–water impact energy and the resulting acoustic emission below 100 Hz, with an impact-to-noise energy conversion efficiency of $8\times10^{-7}$. The data show that there is significant variability in sound energy production between calving events of similar scale, but stable estimates of ice mass flux can be made if enough events are summed (40 events for a 20 % standard error at Hansbreen). The model analysis shows that there are five parameters that must be known, as discussed in Sect. 5.2. It remains to be seen how site-specific these parameters are, but the transmission loss through the bay and the relationship between the exposed area at the glacier terminus and block volume are expected to vary between glaciers and will likely require site-specific determination. We speculate that the energy conversion efficiency $\eta$ and power-law exponent $b$ are likely robust for tidewater glaciers of similar setting. An important characteristic of any measurement technique is its temporal resolution. While we expect that acoustic determination of ice mass flux will be possible for a broad class of glacier settings, the resolution of calving flux estimates will not be the same for each glacier. The temporal resolution of the acoustic technique for a specified accuracy depends on enough events being observed, so the observation interval is sensitive to calving activity at a particular location. For example, some tidewater glaciers produce a large number (>100) of small ice blocks daily, while others calve large icebergs ($>10^{8}$ m$^3$) no more frequently than every few days (Åström et al., 2014; Chapuis and Tetzlaff, 2014). For the latter, satellite methods are probably the most appropriate for quantifying calving fluxes. The large inter-event scatter in noise energy generated by ice blocks of similar volume may be reducible.
All the information available in the time and frequency structure of the impact noise (e.g., Fig. 6a) has been reduced to a single number: the total acoustic energy radiated across a selected frequency band. It is possible that some relevant and variable dynamics of the ice block impact, such as impact angle, block submergence and block integrity, leave an identifiable signature in the time-varying frequency structure of the impact noise. If so, then some of the scatter evident in Fig. 7 may be reducible with an improved understanding of the influence of different calving styles and associated source mechanisms on the received noise spectra. Similar conclusions also arise from seismic measurements (e.g., Bartholomaus et al., 2012). In situ studies of the hydrodynamics of iceberg calving are difficult to imagine in practical terms, but scale-model laboratory experiments may prove to be a valuable tool for identifying major features of block–water impact dynamics and exploiting their acoustic signatures to reduce uncertainty in the efficiency of noise generation.

Data availability. The Bellhop sound propagation model was downloaded from the online Ocean Acoustics Library (available at: https://oalib-acoustics.org/AcousticsToolbox/index_at.html, last access: 13 March 2020; Porter et al., 2020). For image analysis, we used ImageJ software, which can be downloaded free of charge (available at: https://imagej.nih.gov/ij/download.html, last access: 13 March 2020; Rasband, 2020). Bathymetry data were provided by the Institute of Geophysics, Polish Academy of Sciences, who obtained them from the Norwegian Hydrographic Service under permit number 13/G722. Satellite images were downloaded from the EarthExplorer website (available at: https://earthexplorer.usgs.gov/, last access: 13 March 2020; USGS, 2020), courtesy of the US Geological Survey, Department of the Interior.
All data collected under the monitoring program of the Polish Polar Station Hornsund can be accessed free of charge (available at: https://monitoring-hornsund.igf.edu.pl/index.php/login, last access: 13 March 2020; Polish Polar Station Hornsund, 2020). The acoustic data used in this study are available upon request from the corresponding author: [email protected].

Supplement.

Author contributions. OG conceived the study and analyzed the data. GBD supported model development. Both authors contributed to paper preparation.

Competing interests. The authors declare that they have no conflict of interest.

Acknowledgements. We would like to thank Mateusz Moskalik and Mariusz Czarnul for their significant efforts in maintaining oceanographic and photographic monitoring during the study period. We are also grateful to Aleksandra Stępień and Adam Słucki from the HańczaTech diving team for their underwater work together with Mateusz Moskalik during deployment and recovery of the acoustic buoys and to Kacper Wojtysiak for his work on the development of the time-lapse camera systems. We thank Andreas Köhler, the anonymous reviewer and handling editor Evgeny Podolskiy for their insightful comments.

Financial support. This research has been supported by the Ministry of Science and Higher Education of Poland (grant no. 1621/MOB/V/2017 and statutory activities no. 3841/E-41/S/2016), the US National Science Foundation (grant no. OPP-1748265), the Polish National Science Centre (grant no. 2013/11/N/ST10/01729), and the US Office of Naval Research (grant no. N00014-17-1-2633).

Review statement. This paper was edited by Evgeny A. Podolskiy and reviewed by Andreas Köhler and one anonymous referee.

References

Ainslie, M. A. and McColm, J. G.: A simplified formula for viscous and chemical absorption in sea water, J. Acoust. Soc. Am., 103, 1671–1672, 1998. Åström, J.
A., Vallot, D., Schäfer, M., Welty, E. Z., O'Neel, S., Bartholomaus, T., Liu, Y., Riikilä, T., Zwinger, T., Timonen, J., and Moore, J. C.: Termini of calving glaciers as self-organized critical systems, Nat. Geosci., 7, 874–878, https://doi.org/10.1038/ngeo2290, 2014. Bartholomaus, T. C., Larsen, C. F., O'Neel, S., and West, M. E.: Calving seismicity from iceberg-sea surface interactions, J. Geophys. Res., 117, F04029, https://doi.org/10.1029/2012JF002513, 2012. Bartholomaus, T. C., Larsen, C. F., and O'Neel, S.: Does calving matter? Evidence for significant submarine melt, Earth Planet. Sc. Lett., 380, 21–30, https://doi.org/10.1016/j.epsl.2013.08.014, 2013. Bartholomaus, T. C., Larsen, C. F., West, M. E., O'Neel, S., Pettit, E. C., and Truffer, M.: Tidal and seasonal variations in calving flux observed with passive seismology, J. Geophys. Res.-Earth, 120, 2318–2337, https://doi.org/10.1002/2015JF003641, 2015. Benn, D. I., Hulton, N. R. J., and Mottram, R. H.: “Calving laws”, “sliding laws” and the stability of tidewater glaciers, Ann. Glaciol., 46, 123–130, https://doi.org/10.3189/172756407782871161, 2007. Błaszczyk, M., Jania, J., and Hagen, J.: Tidewater glaciers of Svalbard: recent changes and estimates of calving fluxes, Pol. Polar Res., 30, 85–142, 2009. Błaszczyk, M., Jania, J. A., and Kolondra, L.: Fluctuations of tidewater glaciers in Hornsund Fjord (Southern Svalbard) since the beginning of the 20th century, Pol. Polar Res., 34, 327–352, https://doi.org/10.2478/popore-2013-0024, 2013. Box, G. E. P. and Cox, D. R.: An analysis of transformations, J. R. Stat. Soc. B Met., 26, 211–252, 1964. Brekhovskikh, L. and Lysanov, Y.: Fundamentals of Ocean Acoustics, Springer-Verlag, New York, https://doi.org/10.1007/978-3-662-02342-6, 1982. Chapuis, A. and Tetzlaff, T.: The variability of tidewater-glacier calving: Origin of event-size and interval distributions, J. Glaciol., 60, 622–634, https://doi.org/10.3189/2014JoG13J215, 2014. 
Chapuis, A., Rolstad, C., and Norland, R.: Interpretation of amplitude data from a ground-based radar in combination with terrestrial photogrammetry and visual observations for calving monitoring of Kronebreen, Svalbard, Ann. Glaciol., 53, 34–40, https://doi.org/10.3189/172756410791392781, 2010. Chen, C.-T. and Millero F. J.: Speed of sound in seawater at high pressures, J. Acoust. Soc. Am., 62, 1129–1135, https://doi.org/10.1121/1.381646, 1977. Clay, C. S. and Medwin, H.: Acoustical oceanography: principles and applications, Wiley, New York, USA, 1977. Ćwiąkała, J., Moskalik, M., Forwick, M., Wojtysiak, K., Giżejewski, J., and Szczuciński, W.: Submarine geomorphology at the front of the retreating Hansbreen tidewater glacier, Hornsund fjord, southwest Spitsbergen, J. Maps, 14, 123–134, https://doi.org/10.1080/17445647.2018.1441757, 2018. Deane, G. B. and Buckingham, M.: An analysis of the three-dimensional sound field in a penetrable wedge with a stratified fluid or elastic basement, J. Acoust. Soc. Am., 93, 1319–1328, https://doi.org/10.1121/1.405417, 1993. Deane, G. B., Glowacki, O., Tegowski, J., Moskalik, M., and Blondel, Ph.: Directionality of the ambient noise field in an Arctic, glacial bay, J. Acoust. Soc. Am., 136, EL350, https://doi.org/10.1121/1.4897354, 2014. Dowdeswell, J. A. and Forsberg, C. F.: The size and frequency of icebergs and bergy bits derived from tidewater glaciers in Kongsfjorden, northwest Spitsbergen, Polar Res., 11, 81–91, https://doi.org/10.3402/polar.v11i2.6719, 1992. Ekström, G., Nettles, M., and Abers, G.: Glacial Earthquakes, Science, 302, 622–624, https://doi.org/10.1126/science.1088057, 2003. Enderlin, E. M., Howat, I. M., Jeong, S., Noh, M.-J., van Angelen, J. H., and van den Broeke, M. R.: An improved mass budget for the Greenland ice sheet, Geophys. Res. Lett., 41, 866–872, https://doi.org/10.1002/2013GL059010, 2014. Fritsch, F. N. and Carlson, R. E.: Monotone Piecewise Cubic Interpolation, SIAM J. Numer. 
Anal., 17, 238–246, https://doi.org/10.1137/0717021, 1980. Gardner, A. S., Moholdt, G., Cogley, J. G., Wouters, B., Arendt, A. A., Wahr, J., Berthier, E., Hock, R., Pfeffer, W. T., Kaser, G., Ligtenberg, S. R. M., Bolch, T., Sharp, M. J., Hagen, J. O., vanden Broeke, M. R., and Paul, F.: A reconciled estimate of glacier contributions to sea level rise: 2003 to 2009, Science, 340, 852–857, https://doi.org/10.1126/science.1234532, 2013. Gekle, S. and Gordillo, J. M.: Generation and breakup of Worthington jets after cavity collapse, Part 1. Jet formation, J. Fluid Mech., 663, 293–330, https://doi.org/10.1017/S0022112010003526, 2010. Glowacki, O., Deane, G. B., Moskalik, M., Blondel, Ph., Tegowski, J., and Blaszczyk, M.: Underwater acoustic signatures of glacier calving, Geophys. Res. Lett., 42, 804–812, https://doi.org/10.1002/2014GL062859, 2015. Glowacki, O., Moskalik, M., and Deane, G. B.: The impact of glacier meltwater on the underwater noise field in a glacial bay, J. Geophys. Res.-Oceans, 121, 8455–8470, https://doi.org/10.1002/2016JC012355, 2016. Glowacki, O., Deane, G. B., and Moskalik, M.: The intensity, directionality, and statistics of underwater noise from melting icebergs, Geophys. Res. Lett., 45, 4105–4113, https://doi.org/10.1029/2018GL077632, 2018. Görlich, K.: Glacimarine sedimentation of muds in Hornsund Fjord, Spitsbergen, Ann. Soc. Geol. Pol., 56, 433–477, 1986. Grabiec, M., Jania, J., Puczko, D., and Kolondra, L.: Surface and bed morphology of Hansbreen, a tidewater glacier in Spitsbergen, Pol. Polar Res., 33, 111–138, https://doi.org/10.2478/v10183-012-0010-7, 2012. Gunn, R. and Kinzer, G. D.: The terminal velocity of fall for water droplets in stagnant air, J. Meteorol., 6, 243–248, https://doi.org/10.1175/1520-0469(1949)006<0243:TTVOFF>2.0.CO;2, 1949. Guo, Y. and Ffowcs Williams, J. E.: A theoretical study on drop impact sound and rain noise, J. Fluid Mech., 227, 345–355, https://doi.org/10.1017/S0022112091000149, 1991. Hamilton, E. 
http://math.stackexchange.com/questions/236906/proof-correctness-t-in-lv-v-t2-0-iff-tv-subset-nt/236925
# Proof Correctness: $T \in L(V,V), T^2 = 0 \iff T(v) \subset n(T)$

Prove: $T \in L(V,V)$, $T^2 = 0 \iff T(v) \subset n(T)$.

Is the following correct?

Proof:

$\rightarrow$ Let $T^2 = 0 \iff T(T(v)) = 0$. Suppose $x \in T(v)$; we must show that $x \in n(T) \iff T(x) = 0$. $x\in T(v)$ implies there exists $v\in V$ s.t. $T(v)=x$. Consider $T(T(v))$: $T(T(v)) = T(x) = 0$.

$\leftarrow$ Let $T(v) \subset n(T)$. Suppose $x\in T(v)$. Then we know $x\in n(T) \iff T(x) = 0$. We also know there exists $v\in V$ so that $T(v) = x$. Consider $T(T(v))$: $T(T(v)) = T(x) = 0$.

Comments:

- How should "Let $T^2 = 0 \iff T(T(v)) = 0$" be interpreted? Should it be "If $T(T(v)) = 0$ then redefine $T$ to be such that $T^2 = 0$, else leave $T$ as it was."? – Ricky Demer Nov 14 '12 at 3:00
- Also, the elements of $T(v)$ should be completely irrelevant. (On the other hand, the elements of $\operatorname{Range}(T)$ should be important.) – Ricky Demer Nov 14 '12 at 3:05

Answer:

Your proof is correct in spirit, but it lacks clarity. You need to take care to distinguish $T(V)$, which is a set (the image of $T$), from $T(\mathbf{v})$, which is a single vector (the image of $\mathbf{v}$ under $T$). I will rewrite your proof, following your exact logic. Hopefully you will find this version clearer.

Theorem: Let $T:\ V\rightarrow V$ be a linear mapping. Then $T^2 = 0 \iff T(V) \subseteq \operatorname{null}(T)$.

Proof: ($\Rightarrow$) Suppose that $T^2 = 0$. Then for all $\mathbf{v} \in V$ we have $$T^2(\mathbf{v}) = T(T(\mathbf{v})) = \mathbf{0}.$$ This shows that $T(\mathbf{v}) \in \operatorname{null}(T)$ for all $\mathbf{v} \in V$ and therefore $T(V) \subseteq \operatorname{null}(T)$.

($\Leftarrow$) Conversely, suppose that $T(V) \subseteq \operatorname{null}(T)$. This means that for all $\mathbf{v} \in V$ we have $$T(\mathbf{v}) \in \operatorname{null}(T) \implies T(T(\mathbf{v})) = \mathbf{0}.$$ Therefore $T^2 = 0$. $\square$
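A concrete numeric check of the theorem (my addition, not part of the original thread), using a particular nilpotent matrix on $\mathbb{R}^3$: if $T^2 = 0$, every image vector $T(\mathbf{v})$ must be annihilated by $T$.

```python
# A nilpotent map T on R^3 (T e2 = e1, T e1 = T e3 = 0), so T^2 = 0.
T = [[0, 1, 0],
     [0, 0, 0],
     [0, 0, 0]]

def apply(M, v):
    """Matrix-vector product."""
    return [sum(M[i][j] * v[j] for j in range(3)) for i in range(3)]

def is_zero(v):
    return all(x == 0 for x in v)

# T^2 = 0 on a basis implies T(T(v)) = 0 for every v, i.e. T(V) ⊆ null(T).
basis = [[1, 0, 0], [0, 1, 0], [0, 0, 1]]
assert all(is_zero(apply(T, apply(T, e))) for e in basis)

# A concrete vector: v = (3, -2, 5).
v = [3, -2, 5]
x = apply(T, v)              # x ∈ T(V); here x = [-2, 0, 0]
assert is_zero(apply(T, x))  # x ∈ null(T)
```
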
https://baylor-ir.tdl.org/baylor-ir/handle/2104/5502/browse?rpp=20&order=ASC&sort_by=1&etal=-1&type=title&starts_with=O
Now showing items 32-37 of 37

• #### Photophoresis on polydisperse basalt microparticles under microgravity (Journal of Aerosol Science, 2014-10)
Photophoresis is a force which can dominate the motion of illuminated aerosols in low-pressure environments of laboratory experiments, planetary atmospheres or protoplanetary disks. In drop tower experiments we quantified ...
• #### Photophoretic Force on Aggregate Grains (Monthly Notices of the Royal Astronomical Society, 2016-01-21)
The photophoretic force may impact planetary formation by selectively moving solid particles based on their composition and structure. This generates collision velocities between grains of different sizes and sorts the ...
• #### Physical interpretation of the spectral approach to delocalization in infinite disordered systems (Materials Research Express, 2016-12-05)
In this paper we introduce the spectral approach to delocalization in infinite disordered systems and provide a physical interpretation in context of the classical model of Edwards and Thouless. We argue that spectral ...
• #### Probing the sheath electric field using thermophoresis in dusty plasma (IEEE Transactions on Plasma Science, 2010-04)
A self-consistent dusty plasma fluid model has been extended to incorporate all the noble gases as the carrier gas. An analysis of void closure in complex plasma composed of these gases over a wide range of experimental ...
• #### Simulation of dust voids in complex plasmas (IOP Publishing, 2008-11-04)
In dusty radio-frequency (RF) discharges under micro-gravity conditions, often a void is observed, a dust-free region in the discharge center. This void is generated by the drag of the positive ions pulled out of the ...
• #### Vibrational Modes and Instabilities of a Dust Particle Pair in a Complex Plasma (IEEE Transactions on Plasma Science, 2010-04-10)
Vibrational modes and instabilities of a dust-particle pair in a terrestrial laboratory complex plasma are investigated by employing an analytical method whereby the plasma wakefield induced by an external electric field ...
http://www.ams.org/joursearch/servlet/DoSearch?f1=msc&v1=11E12&jrnl=one&onejrnl=proc
AMS eContent Search Results

Matches for: msc=(11E12) AND publication=(proc). Results: 1 to 16 of 16 found.

[1] Robert Harron. The shapes of pure cubic fields. Proc. Amer. Math. Soc. 145 (2017) 509-524.
[2] Wai Kiu Chan and Byeong-Kweon Oh. Almost universal ternary sums of triangular numbers. Proc. Amer. Math. Soc. 137 (2009) 3553-3562. MR 2529860.
[3] A. G. Earnest and Robert W. Fitzgerald. Represented value sets for integral binary quadratic forms and lattices. Proc. Amer. Math. Soc. 135 (2007) 3765-3770. MR 2341925.
[4] Wai Kiu Chan and R. Daniel Mauldin. Steinhaus tiling problem and integral quadratic forms. Proc. Amer. Math. Soc. 135 (2007) 337-342. MR 2255279.
[5] Wai Kiu Chan and Joshua Daniels. Definite regular quadratic forms over $\mathbb F_q[T]$. Proc. Amer. Math. Soc. 133 (2005) 3121-3131. MR 2160173.
[6] Wai Kiu Chan and Byeong-Kweon Oh. Positive ternary quadratic forms with finitely many exceptions. Proc. Amer. Math. Soc. 132 (2004) 1567-1573. MR 2051115.
[7] Byeong-Kweon Oh. Universal $\mathbb{Z}$-lattices of minimal rank. Proc. Amer. Math. Soc. 128 (2000) 683-689. MR 1654105.
[8] R. Baeza and M. I. Icaza. On Humbert-Minkowski's constant for a number field. Proc. Amer. Math. Soc. 125 (1997) 3195-3202. MR 1403112.
[9] John R. Swallow. On constructing fields corresponding to the $\tilde{A}_n$'s of Mestre for odd $n$. Proc. Amer. Math. Soc. 122 (1994) 85-89. MR 1204386.
[10] J. S. Hsia, M. Jöchner and Y. Y. Shao. A structure theorem for a pair of quadratic forms. Proc. Amer. Math. Soc. 119 (1993) 731-734. MR 1155599.
[11] Donald G. James. Quadratic forms with cube-free discriminant. Proc. Amer. Math. Soc. 110 (1990) 45-52. MR 1027095.
[12] Donald G. James. Even quadratic forms with cube-free discriminant. Proc. Amer. Math. Soc. 106 (1989) 73-79. MR 955998.
[13] Shmuel Friedland. Normal forms for definite integer unimodular quadratic forms. Proc. Amer. Math. Soc. 106 (1989) 917-921. MR 976366.
[14] Min King Eie. On the values at negative half-integers of the Dedekind zeta function of a real quadratic field. Proc. Amer. Math. Soc. 105 (1989) 273-280. MR 977923.
[15] Robert L. Snider. Solvable linear groups over division rings. Proc. Amer. Math. Soc. 91 (1984) 341-344. MR 744625.
[16] Dennis R. Estes and J. S. Hsia. Sums of three integer squares in complex quadratic fields. Proc. Amer. Math. Soc. 89 (1983) 211-214. MR 712624.
http://math.stackexchange.com/questions/84065/calculate-points-for-a-parallel-line
Calculate Points for a Parallel Line

Given a line running through p1:(x1,y1) and p2:(x2,y2), I need to calculate two points such that a new parallel line 20 pixels away from the given line runs through the two new points.

Edit: The new line can be 20 pixels in either direction (i.e., pick a direction; it does not matter).

- 20 pixels away - in which direction? – Gerry Myerson Nov 20 '11 at 23:30
- Gerry, it can be 20 pixels in either direction. – Mark P. Nov 20 '11 at 23:35
- There are infinitely many directions. Which two do you have in mind? – Gerry Myerson Nov 21 '11 at 0:56
- It's a 2D plane. If for example the given line was vertical, it could be 20 pixels to the left or 20 pixels to the right. – Mark P. Nov 21 '11 at 4:01
- Suppose the line makes a 30 degree angle with the horizontal. Do you want a line 20 pixels to the left? or a line 20 pixels above? or a line 20 pixels away, measured along a perpendicular to the first line? They are all different, and that's just three of the infinitely many different ways to measure the distance between two lines. – Gerry Myerson Nov 21 '11 at 4:55

Answer: The slope of your line is $m=\frac{y_2-y_1}{x_2-x_1}$, and the slope of the perpendicular is $\frac{-1}{m}=\frac{-(x_2-x_1)}{y_2-y_1}$. You want a segment along the perpendicular of length $20$. So if the $x$ offset is $\Delta x$ and the $y$ offset is $\Delta y$, we have $\Delta y=\frac{-(x_2-x_1)}{y_2-y_1}\Delta x$, and using the length $20=\Delta x \sqrt{1+\left(\frac{-(x_2-x_1)}{y_2-y_1}\right)^2}$, where you choose the sign of the square root to get the correct side of the line, you can find $\Delta x$.

Answer: The interpretation that you mean a perpendicular distance of 20 fits with there being only two possible locations for the translated line segment.
The vector $\langle x_2-x_1,y_2-y_1\rangle$ is along the given line; the vector $\langle y_2-y_1,-x_2+x_1\rangle$ is orthogonal (perpendicular) to the given line, in one particular direction ($\langle -y_2+y_1,x_2-x_1\rangle$ points in the other direction); and the vector \begin{align} \frac{20\langle y_2-y_1,-x_2+x_1\rangle}{||\langle y_2-y_1,-x_2+x_1\rangle||} &=\frac{20}{\sqrt{(y_2-y_1)^2+(-x_2+x_1)^2}}\langle y_2-y_1,-x_2+x_1\rangle\\ &=\left\langle\frac{20(y_2-y_1)}{\sqrt{(y_2-y_1)^2+(-x_2+x_1)^2}},\frac{20(-x_2+x_1)}{\sqrt{(y_2-y_1)^2+(-x_2+x_1)^2}}\right\rangle \end{align} is orthogonal to the given line and has length 20. Translate your two given points by this vector and you'll have two points on the line you want: $$(x_1,y_1)\to\left(x_1+\frac{20(y_2-y_1)}{\sqrt{(y_2-y_1)^2+(-x_2+x_1)^2}},y_1+\frac{20(-x_2+x_1)}{\sqrt{(y_2-y_1)^2+(-x_2+x_1)^2}}\right)$$ $$(x_2,y_2)\to\left(x_2+\frac{20(y_2-y_1)}{\sqrt{(y_2-y_1)^2+(-x_2+x_1)^2}},y_2+\frac{20(-x_2+x_1)}{\sqrt{(y_2-y_1)^2+(-x_2+x_1)^2}}\right)$$
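The offset-vector recipe above translates directly into code. A small sketch (my own addition, not from the thread; the function name and signature are illustrative):

```python
import math

def parallel_points(p1, p2, d=20.0):
    """Return p1 and p2 translated perpendicular to the line p1-p2 by distance d.

    Uses the orthogonal vector (y2 - y1, -(x2 - x1)) scaled to length d,
    as in the answer above; pass a negative d for the other side.
    """
    (x1, y1), (x2, y2) = p1, p2
    px, py = y2 - y1, -(x2 - x1)           # orthogonal to the segment
    norm = math.hypot(px, py)              # length of the orthogonal vector
    ox, oy = d * px / norm, d * py / norm  # offset of length |d|
    return (x1 + ox, y1 + oy), (x2 + ox, y2 + oy)

# Example: a horizontal line through (0, 0) and (10, 0), shifted 20 pixels.
q1, q2 = parallel_points((0.0, 0.0), (10.0, 0.0))
# q1 == (0.0, -20.0) and q2 == (10.0, -20.0)
```
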
http://jonismathnotes.blogspot.com/2014/09/a-refined-brun-sieve.html
## Saturday, September 27, 2014 ### A refined Brun sieve We improve the Brun sieve from the previous post to allow us to consider almost primes of various forms. We will prove for example the result that there are infinitely many $n$ such that $n$ and $n+2$ both have at most $7$ prime factors, and that every large enough even integer is the sum of two numbers both having at most $7$ prime factors. Choosing an auxiliary function We follow mostly Halberstam and Richert's book, using the notations of the previous post about Brun sieve. In that post, we found out that if the function $\sigma(n):=\sum_{d\mid n}\mu(d)\chi(d)$ approximating the characteristic function of $n=1$ satisfies $\sigma(n)\geq 0$ or $\sigma(n)\leq 0$ for all $n$, then $\begin{eqnarray}S(A,P,z)&\leq&xW(z)\left(1+\sum_{1<d\mid P(z)}\sigma(d)g(d)\right)+\sum_{d\mid P(z)}|\chi(d)||R_d|\quad (1)\end{eqnarray}$ or $\begin{eqnarray}S(A,P,z)\geq xW(z)\left(1+\sum_{1<d\mid P(z)}\sigma(d)g(d)\right)-\sum_{d\mid P(z)}|\chi(d)||R_d|,\quad (2)\end{eqnarray}$ respectively. We noticed that the very simple auxiliary  function $\chi(d)=1_{\{\nu(d)\leq r\}}$ gives rise to such a function $\sigma$, and from this we derived a combinatorial sieve with error term $O(z^{\log \log z})$. It turns out, however, that the choice was not optimal, and a more clever choice of $\chi(d)$ gives the desired error term of $O(z^K)$ for some $K$. Let us consider what would be a better auxiliary function $\chi$. In the basic Brun sieve, the first error term $\begin{eqnarray}\sum_{1<d\mid P(z)}\sigma(d)g(d)\end{eqnarray}$ was $o(1)$ as long as $r>C\log \log z$ for large $C$, owing to the fact that we had $\sigma(n)=0$ if $0<\nu(n)\leq r$. If this additional condition is fulfilled, the problematic error term is the second one, $\begin{eqnarray}\sum_{d\mid P(z)}|\chi(d)||R_d|.\end{eqnarray}$ Fortunately, this error can be estimated much more sharply if we choose a different function $\chi$ for which $\sigma(n)$ still has constant sign. 
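As a quick numeric sanity check (my addition, not in the original post), one can verify that the simple truncation $\chi(d)=1_{\{\nu(d)\leq r\}}$ really does give $\sigma(n)$ a constant sign: for squarefree $n$ the sum only depends on $\nu(n)$, and it equals $(-1)^r\binom{\nu(n)-1}{r}$.

```python
from math import comb

def sigma(nu, r):
    """sigma(n) = sum over d | n with nu(d) <= r of mu(d), for squarefree n
    with nu distinct prime factors; mu(d) = (-1)^{nu(d)}."""
    return sum((-1) ** m * comb(nu, m) for m in range(min(nu, r) + 1))

# sigma(1) = 1, and for n > 1 the sign is (-1)^r (so chi gives an upper-bound
# sieve for even r, a lower-bound sieve for odd r), matching the identity
# sum_{m <= r} (-1)^m C(nu, m) = (-1)^r C(nu - 1, r).
assert sigma(0, 5) == 1
for r in range(1, 8):
    for nu in range(1, 30):
        val = sigma(nu, r)
        assert val == (-1) ** r * comb(nu - 1, r)
        assert (val >= 0) if r % 2 == 0 else (val <= 0)
```
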
We assume $|R_d|\leq \omega(d)$ (this is usually satisfied, but there are also important exceptions), and it is also reasonable to assume $\chi(d)\in \{0,1\}$. Now if $\chi(d)=0$ for all $d\mid P(z)$ except for those having at most $m$ prime factors above $z'$, where $m\in \mathbb{Z}_+$ and $z'<z$ are parameters, we find $\begin{eqnarray}\sum_{d\mid P(z)}|\chi(d)|\omega(d)\leq \left(\sum_{p\leq z'}\omega(p)\right)^{M-m}\left(\sum_{p\leq z}\omega(p)\right)^m,\end{eqnarray}$ where $M$ is the maximal number of prime factors of $d$ such that $\chi(d)\neq 0$ can hold. We suppose as before that $\omega(p)\leq A$ for some constant $A$, so the error is at most $(Az')^{M-m}(Az)^m$. If for example $M=r$ (but $\chi$ is not supposed to be $1_{\nu(d)\leq r}$) and $m=1$, the error is at most $(Az')^{r-1}\cdot Az$. If $z'$ is a lot smaller than $z$, say $z'=\sqrt{z}$, then the error is at most $(A\sqrt{z})^{r-1}\cdot Az$. Our original error in the basic Brun sieve was $(Az)^r$, so we have basically cut the error to its square root (given that a function $\chi$ satisfying the required properties exists). The error is nevertheless too large, so we want to iterate the procedure. If we demand $d$ to have at most $k$ prime factors above $z^{\frac{1}{2^k}}$ for all $k$, and at most $r$ in total, the error is bounded by $\begin{eqnarray}\left(\sum_{p\leq z}\omega(p)\right)\left(\sum_{p\leq z^{\frac{1}{2}}}\omega(p)\right)\left(\sum_{p\leq z^{\frac{1}{4}}}\omega(p)\right)\cdots\leq (Az)^{1+\frac{1}{2}+\frac{1}{4}+\cdots}=(Az)^2,\end{eqnarray}$ so we indeed get an error term that is only a power of $z$. Our task now is to do the above argument rigorously, and with suitable parameters (we do not necessarily want to take square roots). We use the notation $\nu_{a}(d)$ for the number of prime factors of $d$ that are at least $a$.
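A toy illustration of the iterated truncation (my addition; I take the simplest case $\omega(p)=1$, so $A=1$, and the arbitrary choices $z=50$, $r=4$): counting the divisors $d\mid P(z)$ that survive the constraints shows the number of error terms stays well below the heuristic bound $(Az)^2$.

```python
from itertools import combinations

z, r = 50, 4
primes = [2, 3, 5, 7, 11, 13, 17, 19, 23, 29, 31, 37, 41, 43, 47]  # primes <= z

def admissible(ps):
    """d = prod(ps) survives the truncation: at most r prime factors in
    total, and at most k of them above z**(1/2**k) for each k = 1, ..., r."""
    if len(ps) > r:
        return False
    return all(sum(p > z ** (1.0 / 2 ** k) for p in ps) <= k
               for k in range(1, r + 1))

count = sum(admissible(ps)
            for m in range(len(primes) + 1)
            for ps in combinations(primes, m))
# With omega(p) <= A = 1, the heuristic of the post bounds the number of
# surviving divisors by (Az)**(1 + 1/2 + 1/4 + ...) = (Az)**2.
assert count <= z ** 2  # count is far below z**2 = 2500
```
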
We define two functions $\chi_1,\chi_2$ in terms of the sequence $z_r,z_{r-1},\ldots,z_1,z_0$, where $z_k=z^{e^{-\lambda (r-k)}}$ for a parameter $\lambda \in (0,1)$ and $z_r=z,z_0=2,z_{i+1}>z_i$. For $d\mid P(z)$ set $\begin{eqnarray}\chi_1(d)&=&1\quad \text{if}\quad \nu_{z_k}(d)\leq 2(r-k) \quad \text{for all}\quad 0\leq k\leq r-1\\\chi_2(d)&=&1\quad \text{if}\quad \nu_{z_k}(d)\leq 2(r-k)-1 \quad \text{for all}\quad 0\leq k\leq r-1\end{eqnarray}$ and $\chi_1(d)=0,\chi_2(d)=0$ in all the other cases. This is almost what we did heuristically above. Now we prove a crucial lemma. Lemma 1. If $\sigma_j(n)=\sum_{d\mid n}\mu(d)\chi_j(d)$ for $j=1,2$, then $\sigma_j(1)=1$ and $\sigma_1(n)\geq 0,\sigma_2(n)\leq 0$ for all square-free $n>1$. Proof. Clearly $\sigma_j(1)=1$. We give a simple sufficient criterion for $\sigma(n)$ to have constant sign, in terms of $\chi$. If $p$ is a prime divisor of $n$, then $\begin{eqnarray}\sigma(n)&=&\sum_{d\mid \frac{n}{p}}\mu(d)\chi(d)+\sum_{d\mid \frac{n}{p}}\mu(pd)\chi(pd)\\&=&\sum_{d\mid \frac{n}{p}}\mu(d)(\chi(d)-\chi(p d)).\end{eqnarray}$ In particular, we can choose $p=q(n)$, the smallest prime divisor of $n$. As $n$ is square-free, we have $q(d)>q(n)$ for every $d>1$ dividing $\frac{n}{q(n)}$, so it is enough to have $\begin{eqnarray}\mu(d)(\chi(d)-\chi(pd))\geq 0\quad \text{for all primes}\quad p<q(d)\end{eqnarray}$ (with the inequality reversed for a nonpositive sign). We check this criterion for the functions $\chi_1$ and $\chi_2$. Trivially $\chi_j(d)\geq \chi_j(pd)$, so it is enough to show that if $\chi_j(d)=1, \chi_j(pd)=0$, then $\mu(d)=(-1)^{j+1}$. Let $p\in [z_k,z_{k+1})$. As $p<q(d)$, we have $\nu(d)=\nu_{z_{k'}}(d)$ for all $k'\leq k$, and if $\chi_1(d)=1$ and $\chi_1(pd)=0$, then $\nu(d)$ must equal $2(r-k')$ for some $k'\leq k$. Therefore $\nu(d)$ is even and $\mu(d)=1$. Similarly, we see in the case of $\chi_2$ that $\mu(d)=-1$. Now the criterion is verified, so the lemma is proved. ■ Estimating the error terms Now we are allowed to use the formula $(1)$ or $(2)$, so we turn to error terms. Lemma 2. 
If $z_k$ are as before ($\chi_1$ and $\chi_2$ are determined by them), we have $\begin{eqnarray}\sum_{d\mid P(z)}\chi_1(d)|R_d|\leq (Az)^{\frac{2}{1-e^{-\lambda}}}\end{eqnarray}$ and $\begin{eqnarray}\sum_{d\mid P(z)}\chi_2(d)|R_d|\leq (Az)^{\frac{2}{1-e^{-\lambda}}-1}.\end{eqnarray}$

Proof. This follows just as in our heuristic. Indeed, $\begin{eqnarray}\sum_{d\mid P(z)}\chi_1(d)|R_d|&\leq& \left(1+\sum_{p\leq z}\omega(p)\right)^2\left(1+\sum_{p\leq z^{e^{-\lambda}}}\omega(p)\right)^2\left(1+\sum_{p\leq z^{e^{-2\lambda}}}\omega(p)\right)^2\cdots\\&\leq& (Az)^{2+2e^{-\lambda}+2e^{-2\lambda}+...}=(Az)^{\frac{2}{1-e^{-\lambda}}},\end{eqnarray}$ and the other estimate follows similarly. ■

Lemma 3. Assume $\omega(p)\leq A$, and let $\lambda>\frac{1}{Ae}$ be a number such that $\begin{eqnarray}\kappa_A:=(1+10^{-10})\frac{A}{2}\lambda e^{1+\frac{A}{2}\lambda}\in (0,1).\end{eqnarray}$ Then, for large values of $z$, it holds that $\begin{eqnarray}\sum_{1<d\mid P(z)}|\sigma_j(d)|g(d)\ll_A \kappa_A^{\frac{2\log \log z}{\lambda}}\quad (3).\end{eqnarray}$

Proof. Let $j=1$; the other case is similar. We estimate the terms in the sum on the left of $(3)$ depending on the size of $q(d)$. We do this because for those $d$ for which $\nu(d)$ is large (say $\geq r$), we can estimate essentially as we did in the previous post, but for small values of $\nu(d)$ we have to be careful so that our upper bound is not automatically $\gg 1$ (if $\nu(d)$ is small, we will see that $q(d)$ is large, so that $g(d)$ is small).
We find $\begin{eqnarray}\sum_{1<d\mid P(z)}|\sigma_1(d)|g(d)&=&\sum_{t=0}^{r-1}\sum_{d\mid P(z)\atop q(d)\in [z_t,z_{t+1})}|\sigma_1(d)|g(d)\\&=&\sum_{t=0}^{r-1}\sum_{d\mid P(z)\atop q(d)\in [z_t,z_{t+1})}\left|\sum_{k\mid \frac{d}{q(d)}}\mu(k)(\chi_1(k)-\chi_1(q(d)k))\right|g(d)\\&=&\sum_{t=0}^{r-1}\sum_{d\mid P(z)\atop q(d)\in [z_t,z_{t+1})}\left|\sum_{k\mid \frac{d}{q(d)}\atop \nu(k)=2(r-t)}\mu(k)\right|g(d) \quad \quad (4)\\&=&\sum_{t=0}^{r-1}\sum_{d\mid P(z)\atop q(d)\in [z_t,z_{t+1})}\binom{\nu(d)-1}{2(r-t)}g(d)\quad \quad \quad \quad (5)\\&\leq&\sum_{t=0}^{r-1}\sum_{d'\mid P(z)\atop q(d')>z_t}\binom{\nu(d')}{2(r-t)}g(d')\cdot\frac{(A+1)^2}{z_t}\quad (6),\end{eqnarray}$ where we obtained $(4)$ from the proof of Lemma 1, $(5)$ as in the previous post, and $(6)$ by writing $d=q(d)d'$ with $q(d')>q(d)$ and applying the multiplicativity of $g$. We estimate $(6)$ further by $\begin{eqnarray}&\leq& \sum_{t=0}^{r-1} \frac{(A+1)^2}{z_t}\sum_{m=2(r-t)}^{\infty} \binom{m}{2(r-t)}\frac{1}{m!}\left(\sum_{z_t\leq p\leq z}g(p)\right)^m\\&=&\sum_{t=0}^{r-1}\frac{(A+1)^2}{z_t}\left(\sum_{z_t\leq p\leq z} g(p)\right)^{2(r-t)}\exp\left(\sum_{z_t\leq p\leq z}g(p)\right)\cdot \frac{1}{(2(r-t))!}\\&\ll_A& \sum_{t=0}^{r-1}\frac{1}{z_t}\left(\sum_{z_t\leq p\leq z}\frac{A}{p}+c_A\right)^{2(r-t)}\exp\left(\sum_{z_t\leq p\leq z}\frac{A}{p}\right)\cdot \frac{1}{(2(r-t))!}\quad (7)\\&\ll_A& \sum_{t=0}^{r-1}\frac{1}{z_t} \left(\frac{Ae(r-t)\lambda+c_A}{2(r-t)}\right)^{2(r-t)}e^{A(r-t)\lambda}\quad\quad \quad \quad \quad \quad \quad \quad \quad (8)\\&\leq& \max_{0\leq t\leq r-1} \frac{1}{z_t}\left(\frac{Ae\lambda}{2}+\frac{c_A}{2(r-t)}\right)^{2(r-t)}e^{A(r-t)\lambda}\quad\quad \quad \quad \quad \quad \quad \quad \,\,\, \,(9).\end{eqnarray}$ The first intermediate steps are just as for the basic Brun sieve.
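The step from the binomial sum to the exponential closed form uses the identity $\sum_{m\geq k}\binom{m}{k}\frac{t^m}{m!}=\frac{t^k}{k!}e^t$, which is worth a quick numerical sanity check (the test values of $t$ and $k$ are arbitrary):

```python
from math import comb, exp, factorial, isclose

for t in (0.5, 1.7, 3.0):
    for k in (2, 4, 6):
        # sum_{m >= k} C(m, k) t^m / m!  =  (t^k / k!) e^t
        lhs = sum(comb(m, k) * t ** m / factorial(m) for m in range(k, 120))
        rhs = t ** k * exp(t) / factorial(k)
        assert isclose(lhs, rhs, rel_tol=1e-9)
```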
For $(7)$ we used $\begin{eqnarray}\sum_{z_t\leq p\leq z}g(p)=\sum_{z_t\leq p\leq z}\left(\frac{\omega(p)}{p}+\frac{\omega(p)^2}{p(p-\omega(p))}\right)\leq \sum_{z_t\leq p\leq z}\frac{\omega(p)}{p}+c_A.\end{eqnarray}$ For $(8)$ we used Mertens' estimates (see this post) and Stirling's approximation. When $r-t\leq 10^{10}\cdot\frac{c_A}{Ae\lambda}$, the expression $(9)$ is bounded by $\begin{eqnarray}\ll_A z^{-\exp(-\alpha_A \lambda)}\left(\frac{Ae\lambda}{2}+\frac{1}{2Ae}\right)^{2\beta_A}=o(z^{-\varepsilon_A}),\end{eqnarray}$ where $\alpha_A,\beta_A,\varepsilon_A>0$ are suitable constants. For other values of $r-t$, $(9)$ is at most $\begin{eqnarray}\frac{1}{z_t}\kappa_A^{2(r-t)},\end{eqnarray}$ where $\kappa_A$ is as in the statement of the lemma. Hence we want to maximize over $[0,r]$ the function $f(y)=-e^{-\lambda y}\log z+2y\log \kappa_A$ (the logarithm of $z_t^{-1}\kappa_A^{2(r-t)}$ with $y=r-t$), where $\kappa_A\in (0,1)$ by assumption. We have $f(0)=-\log z$ and $f(r)=-\log 2+2r\log \kappa_A$. Also $f'(y)=0$ if and only if $y=\frac{1}{\lambda}\log\frac{\lambda \log z}{2\log(1/\kappa_A)}=\frac{1}{\lambda}\log \log z+O_A(1)$. For this value of $y=r-t$, the expression $(9)$ becomes $\begin{eqnarray}\ll_A \kappa_A^{\frac{2\log \log z}{\lambda}}\quad \quad (10),\end{eqnarray}$ and since $2=z^{e^{-\lambda r}}$, we have $r=\frac{\log \log z}{\lambda}+O(1)$, so $(10)$ is the maximum of $(9)$ up to a constant factor. This completes the proof. ■

Combining the estimates above, we have the refined Brun sieve.

Theorem 4 (refined Brun sieve). Let $\omega(p)\leq A$, and let $\lambda\in (0,1)$ be such that $\kappa_A\in (0,1)$, where $\kappa_A$ is defined in Lemma 3. Then, for some $C_A>0,$ $\begin{eqnarray}S(A,P,z)&\leq& xW(z)(1+C_A\cdot \kappa_A^{\frac{2\log \log z}{\lambda}})+(Az)^{\frac{2}{1-e^{-\lambda}}},\\S(A,P,z)&\geq& xW(z)(1-C_A\cdot \kappa_A^{\frac{2\log \log z}{\lambda}})-(Az)^{\frac{2}{1-e^{-\lambda}}-1}.\end{eqnarray}$

Proof. This follows by combining Lemmas 1, 2 and 3.
■

Applications

As the error term in the sieve is a power of $z$, we may consider various problems about almost primes. We start by considering twin almost primes. We choose $A=\{n(n+2): n\leq x\}$, $P=\mathbb{P}\setminus\{2\}$, $\omega(p)=2$. Then the constant of Lemma 3 is $A=2$, and we may choose $\lambda=0.2784$ (we want $\lambda$ to be as large as possible while keeping $\kappa_A<1$). Then $\frac{2}{1-e^{-\lambda}}=8.2303...$, so $\begin{eqnarray}(1-o(1))xW(z)-O(z^{7.3})\leq S(A,P,z)\leq(1+o(1))xW(z)+O(z^{8.3}).\end{eqnarray}$ Choosing $z=2x^{\frac{1}{8}}$, we have $S(A,P,2x^{\frac{1}{8}})\gg\frac{x}{\log^2 x}$, and similarly $S(A,P,2x^{\frac{1}{9}})\ll \frac{x}{\log^2 x}$. The values of $n$ counted by $S(A,P,2x^{\frac{1}{8}})$ are such that $n$ and $n+2$ each have at most $7$ prime factors; otherwise $n$ or $n+2$ would have a prime factor below $x^{\frac{1}{8}}$. Furthermore, $S(A,P,2x^{\frac{1}{9}})$ counts the twin primes on $(x^{\frac{1}{9}},x]$ among some other numbers, so $\pi_2(x)\ll \frac{x}{\log^2 x}$, where $\pi_2$ is the counting function of twin primes. In conclusion, we have the following theorems.

Theorem 5. There are infinitely many integers $n$ such that $n$ and $n+2$ have at most $7$ prime factors, counted with multiplicities.

Theorem 6. We have $\pi_2(x)\ll \frac{x}{\log^2 x}$.

The latter estimate is believed to be optimal up to a constant factor (and it would be if we could have an error term of $o(z^2)$ in the computations above), but of course we do not even know whether $\pi_2(x)$ is unbounded. Next we consider Goldbach's problem for almost primes. Let $x$ be any large even integer, and set $A=\{n(x-n):n\leq x\}$, $P=\mathbb{P}$, $\omega(p)=2$ if $p\nmid x$, $\omega(p)=1$ if $p\mid x$. Then again the constant is $A=2$ and we may choose $\lambda=0.2784$.
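The claim that $\lambda=0.2784$ is admissible for $A=2$ amounts to $\kappa_2<1$, and the quoted exponents can be re-derived directly; a quick numerical check:

```python
from math import exp

A, lam = 2, 0.2784
# kappa_A = (1 + 10^-10) (A/2) lambda e^(1 + (A/2) lambda), as in Lemma 3
kappa = (1 + 1e-10) * (A / 2) * lam * exp(1 + (A / 2) * lam)
assert 0 < kappa < 1                     # kappa_2 is just below 1

expo = 2 / (1 - exp(-lam))               # exponent in the upper-bound error term
assert 8.2 < expo < 8.3                  # matches the 8.23... quoted in the text
assert expo - 1 < 7.3                    # exponent for the lower-bound error term
```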
We get, as before, $\begin{eqnarray}S(A,P,2x^{\frac{1}{8}})\gg\frac{x}{\log^2 x}, \quad S(A,P,2x^{\frac{1}{9}})\ll x\prod_{p\mid x\atop p\leq z}\left(1-\frac{1}{p}\right)\prod_{p\nmid x\atop p\leq z}\left(1-\frac{2}{p}\right),\quad z=2x^{\frac{1}{9}}.\end{eqnarray}$ The quantity $S(A,P,2x^{\frac{1}{8}})$ counts integers $n\in [1,x]$ such that $n$ and $x-n$ have at most $7$ prime factors. On the other hand, $S(A,P,2x^{\frac{1}{9}})$ is an upper bound for $N(x)$, the number of representations of $x$ as a sum of two primes. To summarize,

Theorem 7. Every large enough even integer is the sum of two numbers having at most $7$ prime factors.

Theorem 8. We have $N(x)\ll \frac{x}{\log^2 x}\prod_{p\mid x}\frac{p-1}{p-2}$, where $N(\cdot)$ is as before.

Next we consider primes represented by polynomials. Let $f(x)\in \mathbb{Z}[x]$ be a non-constant irreducible polynomial with positive leading coefficient, of degree $d$. In what follows, implicit constants may depend on the coefficients of $f$ but not on $d$. Let $s(f,p)$ be the number of solutions to $f(m)\equiv 0 \pmod p$, and suppose that $s(f,p)<p$ for all $p$; that is, $f$ has no fixed prime divisor. By Lagrange's theorem, $s(f,p)\leq d$. Taking $A=\{f(n): n\leq x\}$, $P=\mathbb{P}$ and $\omega(p)=s(f,p)$, we clearly have $|A_p|=\frac{s(f,p)}{p}x+R_p$ with $|R_p|\leq \omega(p)$, and by the Chinese remainder theorem the analogous formula holds for any $d\mid P(z)$. We can take the constant $A=d$ in Lemma 3, so we need to choose $\lambda$ such that $\begin{eqnarray}(1+10^{-10})\frac{d}{2}\lambda e^{1+\frac{d}{2}\lambda}\in(0,1).\end{eqnarray}$ If $u=\frac{d}{2}\lambda$, we may take $u$ to be the solution of $ue^{u}=\frac{1-10^{-10}}{e}$, which is $0.2784...$. Then $\lambda=\frac{0.5569...}{d}$ and $\frac{2}{1-e^{-\lambda}}\leq 4.31d$ for $d\geq 2$ (by applying $e^{-t}\leq 1-t+\frac{t^2}{2}$ for $t\geq 0$). Now the Brun sieve gives $\begin{eqnarray}S(A,P,x^{\frac{1}{4.31d-0.1}})\gg \frac{x}{\log^d x}, \quad S(A,P,x^{\frac{1}{4.31d+0.1}})\ll x\prod_{p\leq x\atop s(f,p)>0}\left(1-\frac{1}{p}\right)\end{eqnarray}$ (the upper bound is rather rough).
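The constants in the polynomial application can also be checked numerically; below we solve $ue^u=(1-10^{-10})/e$ by bisection and verify the bound $\frac{2}{1-e^{-\lambda}}\leq 4.31d$ for a range of degrees $d\geq 2$:

```python
from math import exp

# solve u e^u = (1 - 10^-10)/e by bisection; the text quotes u = 0.2784...
target = (1 - 1e-10) / exp(1)
lo, hi = 0.0, 1.0
for _ in range(60):
    mid = (lo + hi) / 2
    if mid * exp(mid) < target:
        lo = mid
    else:
        hi = mid
u = (lo + hi) / 2
assert abs(u - 0.2784) < 1e-3
assert abs(2 * u - 0.5569) < 2e-3        # lambda = 2u/d = 0.5569.../d

# check 2/(1 - e^(-lambda)) <= 4.31 d for degrees 2 <= d < 50
for d in range(2, 50):
    lam = 2 * u / d
    assert 2 / (1 - exp(-lam)) <= 4.31 * d
```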
By the Chebotarev density theorem, $s(f,p)>0$ happens for a proportion $\frac{G_0}{\#G}$ of the primes (in the sense of asymptotic density), where $G$ is the Galois group of the splitting field of $f$ over $\mathbb{Q}$, viewed as a transitive subgroup of the permutation group $S_d$ acting on the roots of $f$, and $G_0$ is the number of those permutations $\sigma \in G$ having at least one fixed point. By Burnside's lemma, the average number of fixed points of $\sigma\in G$ equals the number of orbits, which is $1$ by transitivity; since every $\sigma$ has at most $d$ fixed points, at least $\frac{\#G}{d}$ elements of $G$ have a fixed point. Hence $s(f,p)>0$ for a set of primes of density at least $\frac{1}{d}$. By partial summation, we see that $\begin{eqnarray}\sum_{p\leq x\atop s(f,p)>0}\frac{1}{p}\geq (1-o(1))\cdot \frac{1}{d}\log \log x.\end{eqnarray}$ Therefore $\begin{eqnarray}S(A,P,x^{\frac{1}{4.31d+0.1}})\ll \frac{x}{\log^{\frac{1}{d}} x}.\end{eqnarray}$ We obtained the following theorems.

Theorem 9. If $f$ is a polynomial as above, there are infinitely many positive integers $n$ such that $f(n)$ has at most $d(4.31d-0.1)$ prime divisors, counted with multiplicities. In particular, if $f$ is a quadratic polynomial satisfying the conditions, there are infinitely many positive integers $n$ with $f(n)$ having at most $7$ prime factors with multiplicities (since then we have $\omega(p)\leq 2$, so the error term is the same as in the twin prime problem).

Theorem 10. If $f$ is a polynomial as above, there are $\ll \frac{x}{\log^{\frac{1}{d}}x}$ primes of the form $f(n)$ with $n\in[1,x]$, where the implicit constant may depend on the coefficients of $f$ but not on $d$.
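The density bound $\frac{1}{d}$ rests on the fact that in a transitive permutation group the average number of fixed points is $1$, so at least a $\frac{1}{d}$ proportion of the elements fix a point. For the full symmetric group $S_d$ (just one instance of a transitive group) this is easy to check exhaustively:

```python
from itertools import permutations

# in the transitive action of S_d on d points, the average number of fixed
# points over the whole group is 1 (Burnside's lemma), so at least a 1/d
# fraction of elements have a fixed point
for d in range(2, 7):
    perms = list(permutations(range(d)))
    fixed = [sum(1 for i in range(d) if s[i] == i) for s in perms]
    assert sum(fixed) == len(perms)                      # average = 1
    frac = sum(1 for f in fixed if f > 0) / len(perms)
    assert frac >= 1 / d
```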
https://figshare.com/articles/Investigations_of_Highly_Concentrated_Emulsions_Incorporating_Multi-walled_Carbon_Nanotubes/6917414
## Investigations of Highly Concentrated Emulsions Incorporating Multi-walled Carbon Nanotubes

2018-08-03T05:38:57Z (GMT)

<p>The refining characteristics of highly concentrated water-in-oil (w/o) emulsions, wherein the dispersed phase constitutes greater than 90 wt% of the total emulsion, have been investigated. The dispersed phase of the emulsion comprises a supersaturated solution of inorganic salts, and the continuous phase consists of a mixture of an emulsifier in a blend of two oils. The development of microstructure at various stages of the emulsification process has been studied in detail and an empirical correlation between the characteristic droplet size and refining time has been proposed. As the refining time was increased, the Sauter mean diameter (d32) of the aqueous phase droplets decreased exponentially and the width of the droplet-size distribution reduced. The evolution of rheological characteristics of the emulsion during the refinement of the microstructure has also been investigated through different protocols of the dynamic and steady-state rheology. The increase in the refining time led to an increase in the elastic modulus, the yield stress and the viscosity of the emulsion. The network structure of the dispersed phase, the droplet-size distribution and the corresponding interdroplet interactions all govern the rheological characteristics of the final emulsion. The dependence of the elastic modulus and the yield stress on the characteristic droplet-size has also been discussed.<br>Multi-walled carbon nanotubes (MWCNTs) were incorporated into the oil phase of highly concentrated w/o emulsions with the aim of achieving ‘network-like’ structure of MWCNTs throughout the entire continuous phase of the emulsion, which can ultimately modify the emulsion characteristics. By keeping the same aqueous-to-oil phase ratio, the amount of MWCNTs in the oil phase was systematically adjusted to investigate their effects on the refining characteristics, microstructure and rheology of the emulsion. The concentration of the MWCNTs in the emulsions for the investigation has been varied from 0.5 to 4 wt% of the oil phase of the emulsion; the corresponding concentration of the MWCNTs in the emulsion varied from 0.0325 to 0.26 wt% of the total emulsion.
The refining characteristics of nanotube-incorporated emulsions have been investigated. The incorporation of MWCNTs led to a finer emulsion microstructure with reduced droplet size and narrowed droplet-size distribution. The decrease in droplet size with the addition of MWCNTs is mainly due to the increase in the viscosity of the oil phase which, in turn, results in an increased applied stress during emulsion refining. However, the state of dispersion of MWCNTs within the emulsion also plays a crucial role in determining the final microstructure of the nanotube-incorporated emulsions.<br>The state of dispersion of MWCNTs in the emulsions was investigated through cryo-FEG-SEM analysis; however, from the fractured surface morphology, it was hard to unequivocally conclude the selective dispersion of MWCNTs in the continuous phase of the emulsion. Rheological properties of nanotube-incorporated emulsions were characterised as a function of the emulsification time, as well as the MWCNT concentration. The rheological behaviour of the nanotube-incorporated emulsions was identical to that of the neat emulsions, and primarily governed by the droplet size and droplet-size distribution. However, the strain behaviour, especially the yield strain and crossover strain, is independent of the droplet size and the polydispersity of the emulsion. Emulsions that have smaller droplets exhibited higher storage modulus (G′), yield stress (τ_Y) and apparent viscosity (η). For all the studied refining times, nanotube-incorporated emulsions have higher G′, τ_Y, and η values when compared to the neat emulsion, and these values further increased with the MWCNT concentration. This is primarily due to the decrease in droplet size with the addition of MWCNTs.
Furthermore, our findings suggest that the incorporated MWCNTs did not induce any significant changes in the rheological behaviour of emulsions with identical droplet sizes, and it remained essentially unchanged with the MWCNT concentration. However, the nanotube-incorporated emulsions possessed solid-like behaviour up to a higher applied stress when compared to the neat emulsion of identical droplet size.<br>Two tetra-alkylated pyrenes have been designed and synthesized for the noncovalent surface modification of MWCNTs, namely, 1,3,6,8-tetra(oct-1-yn-1-yl)pyrene (TOPy) and 1,3,6,8-tetra(dodec-1-yn-1-yl)pyrene (TDPy). The modifier molecules were designed in such a way that they could facilitate better dispersion of individualised MWCNTs in the continuous phase of the emulsion. Moreover, the adsorbed modifiers facilitate the MWCNTs, which are incorporated in the emulsions, to be localised in the continuous phase of the emulsion through the interaction between the oil and the alkyl chains of the modifiers. Scanning electron microscopic and transmission electron microscopic analyses suggested that the modifier molecules have been adsorbed on the MWCNT surface, which subsequently resulted in the ‘debundling’ of MWCNT ‘agglomerates’. The red-shift in the C‒H wagging vibrational bands in FTIR spectroscopy and the G-band shift in Raman spectroscopic analysis for the modified MWCNTs, and the fluorescence quenching of the alkylated pyrene molecule in the presence of the MWCNTs, have confirmed the π–π interaction between the modifier molecules and MWCNTs.<br>The modified MWCNTs were then incorporated into highly concentrated water-in-oil emulsions, and the effect of the noncovalent surface modification on the emulsion morphology was investigated. The concentration of modified MWCNTs was varied between 0.25 ‒ 2 wt% of the oil phase of the emulsion while maintaining the identical droplet size.
In the modified MWCNT-incorporated emulsion, there was a significant reduction in the average agglomerate size and the area ratio of the remaining MWCNT agglomerates in the emulsion matrix when compared to the corresponding emulsions that comprise unmodified MWCNTs.<br>The dispersion and localisation of modified and unmodified MWCNTs in the oil phase was assessed through electrical conductivity measurements. For the MWCNT‒oil blend dispersions, there was a significant improvement in the electrical conductivity (an increase of the order of ~10^6 in the DC electrical conductivity with 1 wt% MWCNTs). Emulsions with 1 wt% and 2 wt% MWCNTs exhibited a low DC electrical conductivity as opposed to the purely insulating behaviour of the neat emulsion. This change could be an indication of the change in emulsion morphology due to the presence of incorporated MWCNTs. However, the enhancement in the electrical conductivity of the emulsions was very low when compared to the enhancement in the oil blend with the addition of MWCNTs. The electrical conductivity measurements of the emulsions did not suggest the formation of a complete and effective percolation network up to an MWCNT content of 2 wt% of the oil phase.<br>In the present study, the first of its kind, an attempt has been made to investigate the effect of incorporation of MWCNTs into highly concentrated w/o emulsions. A significant level of understanding has been gleaned about the effect of MWCNT incorporation on the morphology and rheology of the HCEs through different microscopic techniques, rheological analysis and various spectroscopic analyses.</p>
https://www.cheenta.com/discover-the-covariance-isi-mstat-2016-problem-6/
# Discover the Covariance | ISI MStat 2016 Problem 6

This problem from ISI MStat 2016 is an application of the ideas of indicator and independent variables and the covariance of two summative random variables.

## Problem

Let $X_{1}, \ldots, X_{n} \sim X$ be i.i.d. random variables from a continuous distribution whose density is symmetric around 0. Suppose $E\left(\left|X\right|\right)=2$. Define $Y=\sum_{i=1}^{n} X_{i} \quad \text{and} \quad Z=\sum_{i=1}^{n} 1\left(X_{i}>0\right)$. Calculate the covariance between $Y$ and $Z$. This problem is from ISI MStat 2016 (Problem #6).

### Prerequisites

1. $X$ has a distribution symmetric around 0 $\Rightarrow E(X) = 0$.
2. $|X| = X\cdot 1( X > 0 ) - X\cdot 1( X \leq 0 ) = 2X\cdot 1( X > 0 ) - X$ for any random variable $X$.
3. $X_i$ and $X_j$ are independent $\Rightarrow$ $g(X_i)$ and $f(X_j)$ are independent.
4. $A$ and $B$ are independent $\Rightarrow Cov(A,B) = 0$.

## Solution

$2 = E(|X|) = E(X\cdot 1(X >0)) - E(X\cdot 1(X \leq 0)) = E(2X\cdot 1( X > 0 )) - E(X) = 2E(X\cdot 1( X > 0 ))$

$\Rightarrow E(X\cdot 1( X > 0 )) = 1 \overset{E(X) = 0}{\Rightarrow} Cov(X, 1( X > 0 )) = 1$.

Let's calculate the covariance of $Y$ and $Z$:

$Cov(Y, Z) = \sum_{i,j = 1}^{n} Cov( X_i, 1(X_{j}>0))$

$= \sum_{i = 1}^{n} Cov( X_i, 1(X_{i}>0)) + \sum_{i,j = 1, i \neq j}^{n} Cov( X_i, 1(X_{j}>0))$

$\overset{X_i \text{ and } X_j \text{ are independent}}{=} \sum_{i = 1}^{n} Cov( X_i, 1(X_{i}>0)) = \sum_{i = 1}^{n} 1 = n$.
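The answer $Cov(Y,Z)=n$ is easy to confirm by simulation. Below we use the Laplace distribution with scale $2$, which is one particular symmetric density with $E|X|=2$ (the distribution choice is ours; the problem only fixes these two properties):

```python
import random

random.seed(1)
n, trials = 10, 100000

def sample():
    # Laplace(0, b) = (random sign) * Exponential(mean b); E|X| = b, take b = 2
    y = z = 0
    for _ in range(n):
        x = random.expovariate(1 / 2) * random.choice((-1, 1))
        y += x
        z += x > 0
    return y, z

pairs = [sample() for _ in range(trials)]
my = sum(y for y, _ in pairs) / trials
mz = sum(z for _, z in pairs) / trials
cov = sum((y - my) * (z - mz) for y, z in pairs) / trials
assert abs(cov - n) < 0.5   # theory: Cov(Y, Z) = n = 10
```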
http://en.wikipedia.org/wiki/Seven-dimensional_cross_product
# Seven-dimensional cross product

In mathematics, the seven-dimensional cross product is a bilinear operation on vectors in seven-dimensional Euclidean space. It assigns to any two vectors a, b in ℝ7 a vector a × b also in ℝ7.[1] Like the cross product in three dimensions, the seven-dimensional product is anticommutative and a × b is orthogonal to both a and b. Unlike in three dimensions, it does not satisfy the Jacobi identity. And while the three-dimensional cross product is unique up to a change in sign, there are many seven-dimensional cross products. The seven-dimensional cross product has the same relationship to octonions as the three-dimensional product does to quaternions. The seven-dimensional cross product is one way of generalising the cross product to dimensions other than three, and it turns out to be the only other non-trivial bilinear product of two vectors that is vector-valued, anticommutative and orthogonal.[2] In other dimensions there are vector-valued products of three or more vectors that satisfy these conditions, and binary products with bivector results.

## Example

The postulates underlying construction of the seven-dimensional cross product are presented in the section Definition. As context for that discussion, the historically first example of the cross product is tabulated below using e1 to e7 as basis vectors.[3][4] This table is one of 480 independent multiplication tables fitting the pattern that each unit vector appears once in each column and once in each row.[5] Thus, each unit vector appears as a product in the table six times, three times with a positive sign and three with a negative sign because of antisymmetry about the diagonal of zero entries. For example, e1 = e2 × e3 = e4 × e5 = e7 × e6 and the negative entries are the reversed cross products.
| Number | 1 | 2 | 3 | 4 | 5 | 6 | 7 |
|---|---|---|---|---|---|---|---|
| Letter | i | j | k | l | il | jl | kl |
| Alternate | i | j | k | l | m | n | o |

Cayley's sample multiplication table:

| × | e1 | e2 | e3 | e4 | e5 | e6 | e7 |
|---|---|---|---|---|---|---|---|
| e1 | 0 | e3 | −e2 | e5 | −e4 | −e7 | e6 |
| e2 | −e3 | 0 | e1 | e6 | e7 | −e4 | −e5 |
| e3 | e2 | −e1 | 0 | e7 | −e6 | e5 | −e4 |
| e4 | −e5 | −e6 | −e7 | 0 | e1 | e2 | e3 |
| e5 | e4 | −e7 | e6 | −e1 | 0 | −e3 | e2 |
| e6 | e7 | e4 | −e5 | −e2 | e3 | 0 | −e1 |
| e7 | −e6 | e5 | e4 | −e3 | −e2 | e1 | 0 |

Entries in the interior give the product of the corresponding vectors on the left and the top in that order (the product is anti-commutative). The table can be summarized by the relation[4] $\mathbf{e}_i \mathbf{\times} \mathbf{e}_j = \varepsilon _{ijk} \mathbf{e}_k \ ,$ where $\varepsilon _{ijk}$ is a completely antisymmetric tensor with a positive value +1 when ijk = 123, 145, 176, 246, 257, 347, 365. By picking out the factors leading to the unit vector e1, for example, one finds the formula for the e1 component of x × y. Namely $\left( \mathbf{ x \times y}\right)_1 = x_2y_3 - x_3y_2 +x_4y_5-x_5y_4 + x_7y_6-x_6y_7 = -\left( \mathbf{ y \times x}\right)_1 \ .$ The top left 3 × 3 corner of the table is the same as the cross product in three dimensions. It also may be noticed that orthogonality of the cross product to its constituents x and y is a requirement upon the entries in this table. However, because of the many possible multiplication tables, general results for the cross product are best developed using a basis-independent formulation, as introduced next.

## Definition

We can define a cross product on a Euclidean space V as a bilinear map from V × V to V mapping vectors x and y in V to another vector x × y also in V, where x × y has the properties[1][6] $\mathbf{x} \cdot (\mathbf{x} \times \mathbf{y}) = (\mathbf{x} \times \mathbf{y}) \cdot \mathbf{y}=0$, $|\mathbf{x} \times \mathbf{y}|^2 = |\mathbf{x}|^2 |\mathbf{y}|^2 - (\mathbf{x} \cdot \mathbf{y})^2$ where (x·y) is the Euclidean dot product and |x| is the vector norm.
The first property states that the cross product is perpendicular to its arguments, while the second property gives the magnitude of the cross product. An equivalent expression in terms of the angle θ between the vectors[7] is[8] $|\mathbf{x} \times \mathbf{y}| = |\mathbf{x}| |\mathbf{y}| \sin \theta,$ or the area of the parallelogram in the plane of x and y with the two vectors as sides.[9] As a third alternative the following can be shown to be equivalent to either expression for the magnitude:[10] $|\mathbf{x} \times \mathbf{y}| = |\mathbf{x}| |\mathbf{y}|~\mbox{if} \ \left( \mathbf{x} \cdot \mathbf{y} \right)= 0.$ ## Consequences of the defining properties Given the three basic properties of (i) bilinearity, (ii) orthogonality and (iii) magnitude discussed in the section on definition, a nontrivial cross product exists only in three and seven dimensions.[2][8][10] This restriction upon dimensionality can be shown by postulating the properties required for the cross product, then deducing an equation which is only satisfied when the dimension is 0, 1, 3 or 7. In zero dimensions there is only the zero vector, while in one dimension all vectors are parallel, so in both these cases a cross product must be identically zero. The restriction to 0, 1, 3 and 7 dimensions is related to Hurwitz's theorem, that normed division algebras are only possible in 1, 2, 4 and 8 dimensions. The cross product is derived from the product of the algebra by considering the product restricted to the 0, 1, 3, or 7 imaginary dimensions of the algebra. Again discarding trivial products the product can only be defined this way in three and seven dimensions.[11] In contrast with three dimensions where the cross product is unique (apart from sign), there are many possible binary cross products in seven dimensions. 
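Both defining properties can be verified numerically for the sample table of the Example section; the sketch below encodes the positive components of $\varepsilon_{ijk}$ listed there and tests orthogonality and the magnitude identity on random vectors:

```python
import random

# positive triples of the epsilon tensor from the example table:
# epsilon_ijk = +1 for ijk = 123, 145, 176, 246, 257, 347, 365
TRIPLES = [(1,2,3), (1,4,5), (1,7,6), (2,4,6), (2,5,7), (3,4,7), (3,6,5)]

def cross7(x, y):
    out = [0.0] * 7
    for a, b, c in TRIPLES:
        for i, j, k in ((a,b,c), (b,c,a), (c,a,b)):   # even permutations
            out[k-1] += x[i-1]*y[j-1] - x[j-1]*y[i-1]
    return out

def dot(x, y): return sum(a*b for a, b in zip(x, y))

random.seed(0)
for _ in range(100):
    x = [random.uniform(-1, 1) for _ in range(7)]
    y = [random.uniform(-1, 1) for _ in range(7)]
    v = cross7(x, y)
    assert abs(dot(x, v)) < 1e-9 and abs(dot(v, y)) < 1e-9      # orthogonality
    # |x × y|^2 = |x|^2 |y|^2 - (x·y)^2
    assert abs(dot(v, v) - (dot(x, x)*dot(y, y) - dot(x, y)**2)) < 1e-9
```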
One way to see this is to note that given any pair of vectors x and y ∈ ℝ7 and any vector v of magnitude |v| = |x||y| sinθ in the five dimensional space perpendicular to the plane spanned by x and y, it is possible to find a cross product with a multiplication table (and an associated set of basis vectors) such that x × y = v. That leaves open the question of just how many vector pairs like x and y can be matched to specified directions like v before the limitations of any particular table intervene. Another difference between the three dimensional cross product and a seven dimensional cross product is:[8]

> …for the cross product x × y in ℝ7 there are also other planes than the linear span of x and y giving the same direction as x × y…
>
> — Pertti Lounesto, *Clifford algebras and spinors*, p. 97

This statement is exemplified by every multiplication table, because any specific unit vector selected as a product occurs as a mapping from three different pairs of unit vectors, each pair contributing it once with a plus sign and once with a minus sign. Each of these different pairs, of course, corresponds to another plane being mapped into the same direction. Further properties follow from the definition, including the following identities:

1. Anticommutativity: $\mathbf{x} \times \mathbf{y} = -\mathbf{y} \times \mathbf{x}$
2. Scalar triple product: $\mathbf{x} \cdot (\mathbf{y} \times \mathbf{z}) = \mathbf{y} \cdot (\mathbf{z} \times \mathbf{x}) = \mathbf{z} \cdot (\mathbf{x} \times \mathbf{y})$
3. Malcev identity:[8] $(\mathbf{x} \times \mathbf{y}) \times (\mathbf{x} \times \mathbf{z}) = ((\mathbf{x} \times \mathbf{y}) \times \mathbf{z}) \times \mathbf{x} + ((\mathbf{y} \times \mathbf{z}) \times \mathbf{x}) \times \mathbf{x} + ((\mathbf{z} \times \mathbf{x}) \times \mathbf{x}) \times \mathbf{y}$
4. $\mathbf{x} \times (\mathbf{x} \times \mathbf{y}) = -|\mathbf{x}|^2 \mathbf{y} + (\mathbf{x} \cdot \mathbf{y}) \mathbf{x}$

Other properties follow only in the three dimensional case, and are not satisfied by the seven dimensional cross product, notably,

1. Vector triple product: $\mathbf{x} \times (\mathbf{y} \times \mathbf{z}) = (\mathbf{x} \cdot \mathbf{z}) \mathbf{y} - (\mathbf{x} \cdot \mathbf{y}) \mathbf{z}$
2. Jacobi identity:[8] $\mathbf{x} \times (\mathbf{y} \times \mathbf{z}) + \mathbf{y} \times (\mathbf{z} \times \mathbf{x}) + \mathbf{z} \times (\mathbf{x} \times \mathbf{y}) = 0$

## Coordinate expressions

To define a particular cross product, an orthonormal basis {ej} may be selected and a multiplication table provided that determines all the products {ei × ej}. One possible multiplication table is described in the Example section, but it is not unique.[5] Unlike three dimensions, there are many tables because every pair of unit vectors is perpendicular to five other unit vectors, allowing many choices for each cross product. Once we have established a multiplication table, it is then applied to general vectors x and y by expressing x and y in terms of the basis and expanding x × y through bilinearity.
Lounesto's multiplication table:

| × | e1 | e2 | e3 | e4 | e5 | e6 | e7 |
|---|----|----|----|----|----|----|----|
| **e1** | 0 | e4 | e7 | −e2 | e6 | −e5 | −e3 |
| **e2** | −e4 | 0 | e5 | e1 | −e3 | e7 | −e6 |
| **e3** | −e7 | −e5 | 0 | e6 | e2 | −e4 | e1 |
| **e4** | e2 | −e1 | −e6 | 0 | e7 | e3 | −e5 |
| **e5** | −e6 | e3 | −e2 | −e7 | 0 | e1 | e4 |
| **e6** | e5 | −e7 | e4 | −e3 | −e1 | 0 | e2 |
| **e7** | e3 | e6 | −e1 | e5 | −e4 | −e2 | 0 |

Using e1 to e7 for the basis vectors, a multiplication table different from the one in the Introduction, leading to a different cross product, is given with anticommutativity by[8] $\mathbf{e}_1 \times \mathbf{e}_2 = \mathbf{e}_4, \quad \mathbf{e}_2 \times \mathbf{e}_4 = \mathbf{e}_1, \quad \mathbf{e}_4 \times \mathbf{e}_1 = \mathbf{e}_2,$ $\mathbf{e}_2 \times \mathbf{e}_3 = \mathbf{e}_5, \quad \mathbf{e}_3 \times \mathbf{e}_5 = \mathbf{e}_2, \quad \mathbf{e}_5 \times \mathbf{e}_2 = \mathbf{e}_3,$ $\mathbf{e}_3 \times \mathbf{e}_4 = \mathbf{e}_6, \quad \mathbf{e}_4 \times \mathbf{e}_6 = \mathbf{e}_3, \quad \mathbf{e}_6 \times \mathbf{e}_3 = \mathbf{e}_4,$ $\mathbf{e}_4 \times \mathbf{e}_5 = \mathbf{e}_7, \quad \mathbf{e}_5 \times \mathbf{e}_7 = \mathbf{e}_4, \quad \mathbf{e}_7 \times \mathbf{e}_4 = \mathbf{e}_5,$ $\mathbf{e}_5 \times \mathbf{e}_6 = \mathbf{e}_1, \quad \mathbf{e}_6 \times \mathbf{e}_1 = \mathbf{e}_5, \quad \mathbf{e}_1 \times \mathbf{e}_5 = \mathbf{e}_6,$ $\mathbf{e}_6 \times \mathbf{e}_7 = \mathbf{e}_2, \quad \mathbf{e}_7 \times \mathbf{e}_2 = \mathbf{e}_6, \quad \mathbf{e}_2 \times \mathbf{e}_6 = \mathbf{e}_7,$ $\mathbf{e}_7 \times \mathbf{e}_1 = \mathbf{e}_3, \quad \mathbf{e}_1 \times \mathbf{e}_3 = \mathbf{e}_7, \quad \mathbf{e}_3 \times \mathbf{e}_7 = \mathbf{e}_1.$ More compactly this rule can be written as $\mathbf{e}_i \times \mathbf{e}_{i+1} = \mathbf{e}_{i+3}$ with i = 1...7 modulo 7 and the indices i, i + 1 and i + 3 allowed to permute evenly. Together with anticommutativity this generates the product. This rule directly produces the two diagonals immediately adjacent to the diagonal of zeros in the table.
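The compact rule lends itself to a short computational check. The sketch below (an illustration added here, not part of the original article) builds the complete signed table from the rule $\mathbf{e}_i \times \mathbf{e}_{i+1} = \mathbf{e}_{i+3}$ (indices mod 7), its cyclic permutations, and anticommutativity:

```python
# Build Lounesto's multiplication table from the compact rule
# e_i x e_{i+1} = e_{i+3} (1-based indices, mod 7) plus even
# (cyclic) permutations of each index triple and anticommutativity.

def build_table():
    table = {}  # (i, j) -> (sign, k), meaning e_i x e_j = sign * e_k
    for i in range(1, 8):
        a, b, c = i, (i % 7) + 1, ((i + 2) % 7) + 1  # i, i+1, i+3 mod 7
        # Even (cyclic) permutations of (a, b, c) give the positive products.
        for x, y, z in [(a, b, c), (b, c, a), (c, a, b)]:
            table[(x, y)] = (1, z)
            table[(y, x)] = (-1, z)  # anticommutativity
    for i in range(1, 8):
        table[(i, i)] = (0, 0)  # e_i x e_i = 0
    return table

table = build_table()
```

Each of the seven index triples corresponds to one line of the Fano plane, so every ordered pair of distinct basis vectors receives exactly one signed product, reproducing the listing above (e.g. `table[(1, 2)] == (1, 4)` encodes e1 × e2 = e4).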
Also, from an identity in the subsection on consequences, $\mathbf{e}_i \times \left( \mathbf{e}_i \times \mathbf{e}_{i+1}\right) =-\mathbf{e}_{i+1} = \mathbf{e}_i \times \mathbf{e}_{i+3} \ ,$ which produces diagonals further out, and so on. The ej component of the cross product x × y is given by selecting all occurrences of ej in the table and collecting the corresponding components of x from the left column and of y from the top row. The result is: \begin{align}\mathbf{x} \times \mathbf{y} = (x_2y_4 - x_4y_2 + x_3y_7 - x_7y_3 + x_5y_6 - x_6y_5)\,&\mathbf{e}_1 \\ {}+ (x_3y_5 - x_5y_3 + x_4y_1 - x_1y_4 + x_6y_7 - x_7y_6)\,&\mathbf{e}_2 \\ {}+ (x_4y_6 - x_6y_4 + x_5y_2 - x_2y_5 + x_7y_1 - x_1y_7)\,&\mathbf{e}_3 \\ {}+ (x_5y_7 - x_7y_5 + x_6y_3 - x_3y_6 + x_1y_2 - x_2y_1)\,&\mathbf{e}_4 \\ {}+ (x_6y_1 - x_1y_6 + x_7y_4 - x_4y_7 + x_2y_3 - x_3y_2)\,&\mathbf{e}_5 \\ {}+ (x_7y_2 - x_2y_7 + x_1y_5 - x_5y_1 + x_3y_4 - x_4y_3)\,&\mathbf{e}_6 \\ {}+ (x_1y_3 - x_3y_1 + x_2y_6 - x_6y_2 + x_4y_5 - x_5y_4)\,&\mathbf{e}_7. \\ \end{align} As the cross product is bilinear the operator x×– can be written as a matrix, which takes the form $T_{\mathbf x} = \begin{bmatrix} 0 & -x_4 & -x_7 & x_2 & -x_6 & x_5 & x_3 \\ x_4 & 0 & -x_5 & -x_1 & x_3 & -x_7 & x_6 \\ x_7 & x_5 & 0 & -x_6 & -x_2 & x_4 & -x_1 \\ -x_2 & x_1 & x_6 & 0 & -x_7 & -x_3 & x_5 \\ x_6 & -x_3 & x_2 & x_7 & 0 & -x_1 & -x_4 \\ -x_5 & x_7 & -x_4 & x_3 & x_1 & 0 & -x_2 \\ -x_3 & -x_6 & x_1 & -x_5 & x_4 & x_2 & 0 \end{bmatrix}.$ The cross product is then given by $\mathbf{x} \times \mathbf{y} = T_{\mathbf{x}}(\mathbf{y}).$ ### Different multiplication tables Fano planes for the two multiplication tables used here.
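The coordinate expression can be verified numerically. The following sketch (added here for illustration; plain Python, no external libraries) implements the component formula and checks the two defining properties, anticommutativity, and the identity $\mathbf{x} \times (\mathbf{x} \times \mathbf{y}) = -|\mathbf{x}|^2 \mathbf{y} + (\mathbf{x} \cdot \mathbf{y})\mathbf{x}$ on random vectors:

```python
import random

def cross7(x, y):
    # Component formula for the cross product with Lounesto's table.
    x1, x2, x3, x4, x5, x6, x7 = x
    y1, y2, y3, y4, y5, y6, y7 = y
    return [
        x2*y4 - x4*y2 + x3*y7 - x7*y3 + x5*y6 - x6*y5,  # e1
        x3*y5 - x5*y3 + x4*y1 - x1*y4 + x6*y7 - x7*y6,  # e2
        x4*y6 - x6*y4 + x5*y2 - x2*y5 + x7*y1 - x1*y7,  # e3
        x5*y7 - x7*y5 + x6*y3 - x3*y6 + x1*y2 - x2*y1,  # e4
        x6*y1 - x1*y6 + x7*y4 - x4*y7 + x2*y3 - x3*y2,  # e5
        x7*y2 - x2*y7 + x1*y5 - x5*y1 + x3*y4 - x4*y3,  # e6
        x1*y3 - x3*y1 + x2*y6 - x6*y2 + x4*y5 - x5*y4,  # e7
    ]

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

random.seed(1)
x = [random.uniform(-1, 1) for _ in range(7)]
y = [random.uniform(-1, 1) for _ in range(7)]
xy = cross7(x, y)
```

On basis vectors the formula reproduces the multiplication table; for instance `cross7` of e1 and e2 returns e4.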
Two different multiplication tables have been used in this article, and there are more.[5][12] These multiplication tables are characterized by the Fano plane,[13][14] and these are shown in the figure for the two tables used here: at top, the one described by Sabinin, Sbitneva, and Shestakov, and at bottom that described by Lounesto. The numbers under the Fano diagrams (the set of lines in the diagram) indicate a set of indices for seven independent products in each case, interpreted as ijk → ei × ej = ek. The multiplication table is recovered from the Fano diagram by following either the straight line connecting any three points, or the circle in the center, with a sign as given by the arrows. For example, the first row of multiplications resulting in e1 in the above listing is obtained by following the three paths connected to e1 in the lower Fano diagram: the circular path e2 × e4, the diagonal path e3 × e7, and the edge path e6 × e1 = e5 rearranged using one of the above identities as: $\mathbf{e_6 \times} \left( \mathbf{e_6 \times e_1} \right) = -\mathbf{e_1} = \mathbf {e_6 \times e_5} ,$ or $\mathbf {e_5 \times e_6} =\mathbf{e_1} ,$ also obtained directly from the diagram with the rule that any two unit vectors on a straight line are connected by multiplication to the third unit vector on that straight line with signs according to the arrows (sign of the permutation that orders the unit vectors). It can be seen that both multiplication rules follow from the same Fano diagram by simply renaming the unit vectors, and changing the sense of the center unit vector. The question arises: how many multiplication tables are there?[14] “The question of possible multiplication tables arises, for example, when one reads another article on octonions, which uses a different one from the one given by [Cayley, say].
Usually it is remarked that all 480 possible ones are equivalent, that is, given an octonionic algebra with a multiplication table and any other valid multiplication table, one can choose a basis such that the multiplication follows the new table in this basis. One may also take the point of view, that there exist different octonionic algebras, that is, algebras with different multiplication tables. With this interpretation...all these octonionic algebras are isomorphic.” —Jörg Schray, Corinne A Manogue, Octonionic representations of Clifford algebras and triality (1994) ### Using geometric algebra The product can also be calculated using geometric algebra. The product starts with the exterior product, a bivector valued product of two vectors: $\mathbf{B} = \mathbf{x} \wedge \mathbf{y} = \frac{1}{2}(\mathbf{xy} - \mathbf{yx}).$ This is bilinear, alternate, has the desired magnitude, but is not vector valued. The vector, and so the cross product, comes from the product of this bivector with a trivector. In three dimensions up to a scale factor there is only one trivector, the pseudoscalar of the space, and a product of the above bivector and one of the two unit trivectors gives the vector result, the dual of the bivector. A similar calculation is done in seven dimensions, except as trivectors form a 35-dimensional space there are many trivectors that could be used, though not just any trivector will do.
The trivector that gives the same product as the above coordinate transform is $\mathbf{v} = \mathbf{e}_{124} + \mathbf{e}_{235} + \mathbf{e}_{346} + \mathbf{e}_{457} + \mathbf{e}_{561} + \mathbf{e}_{672} + \mathbf{e}_{713}.$ This is combined with the exterior product to give the cross product $\mathbf{x} \times \mathbf{y} = -(\mathbf{x} \wedge \mathbf{y}) ~\lrcorner~ \mathbf{v}$ where $\lrcorner$ is the left contraction operator from geometric algebra.[8][15] ## Relation to the octonions Just as the 3-dimensional cross product can be expressed in terms of the quaternions, the 7-dimensional cross product can be expressed in terms of the octonions. After identifying ℝ7 with the imaginary octonions (the orthogonal complement of the real line in O), the cross product is given in terms of octonion multiplication by $\mathbf x \times \mathbf y = \mathrm{Im}(\mathbf{xy}) = \frac{1}{2}(\mathbf{xy}-\mathbf{yx}).$ Conversely, suppose V is a 7-dimensional Euclidean space with a given cross product. Then one can define a bilinear multiplication on ℝ⊕V as follows: $(a,\mathbf{x})(b,\mathbf{y}) = (ab - \mathbf{x}\cdot\mathbf{y}, a\mathbf y + b\mathbf x + \mathbf{x}\times\mathbf{y}).$ The space ℝ⊕V with this multiplication is then isomorphic to the octonions.[16] The cross product only exists in three and seven dimensions as one can always define a multiplication on a space of one higher dimension as above, and this space can be shown to be a normed division algebra. By Hurwitz's theorem such algebras only exist in one, two, four, and eight dimensions, so the cross product must be in zero, one, three or seven dimensions. The products in zero and one dimensions are trivial, so non-trivial cross products only exist in three and seven dimensions.[17][18] The failure of the 7-dimensional cross product to satisfy the Jacobi identity is due to the nonassociativity of the octonions.
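This construction is easy to exercise numerically. The sketch below (added here as an illustration, not part of the original text) builds the product on ℝ⊕ℝ7 from a 7D cross product and checks that the resulting algebra is normed, |pq| = |p||q|, while the Jacobi sum of the cross product itself is nonzero:

```python
import math
import random

def cross7(x, y):
    # Same component formula as in the Coordinate expressions section.
    x1, x2, x3, x4, x5, x6, x7 = x
    y1, y2, y3, y4, y5, y6, y7 = y
    return [
        x2*y4 - x4*y2 + x3*y7 - x7*y3 + x5*y6 - x6*y5,
        x3*y5 - x5*y3 + x4*y1 - x1*y4 + x6*y7 - x7*y6,
        x4*y6 - x6*y4 + x5*y2 - x2*y5 + x7*y1 - x1*y7,
        x5*y7 - x7*y5 + x6*y3 - x3*y6 + x1*y2 - x2*y1,
        x6*y1 - x1*y6 + x7*y4 - x4*y7 + x2*y3 - x3*y2,
        x7*y2 - x2*y7 + x1*y5 - x5*y1 + x3*y4 - x4*y3,
        x1*y3 - x3*y1 + x2*y6 - x6*y2 + x4*y5 - x5*y4,
    ]

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def norm(u):
    return math.sqrt(dot(u, u))

def octonion_mul(p, q):
    # (a, x)(b, y) = (ab - x.y, a y + b x + x cross y) on R + R^7
    a, x = p[0], p[1:]
    b, y = q[0], q[1:]
    cross = cross7(x, y)
    return [a * b - dot(x, y)] + [a * yi + b * xi + ci
                                  for xi, yi, ci in zip(x, y, cross)]

random.seed(2)
p = [random.uniform(-1, 1) for _ in range(8)]
q = [random.uniform(-1, 1) for _ in range(8)]
```

For purely imaginary arguments (real part 0), the imaginary part of `octonion_mul` reduces to `cross7`, matching the Im(xy) formula above.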
In fact, $\mathbf{x}\times(\mathbf{y}\times\mathbf{z}) + \mathbf{y}\times(\mathbf{z}\times\mathbf{x}) + \mathbf{z}\times(\mathbf{x}\times\mathbf{y}) = -\frac{3}{2}[\mathbf x, \mathbf y, \mathbf z]$ where [x, y, z] is the associator. ## Rotations In three dimensions the cross product is invariant under the action of the rotation group SO(3), so the cross product of x and y after they are rotated is the image of x × y under the rotation. But this invariance is not true in seven dimensions; that is, the cross product is not invariant under the group of rotations in seven dimensions, SO(7). Instead it is invariant under the exceptional Lie group G2, a subgroup of SO(7).[8][16] ## Generalizations Non-trivial binary cross products exist only in three and seven dimensions. But if the restriction that the product is binary is lifted, so products of more than two vectors are allowed, then more products are possible.[19][20] As with the product of two vectors, the product must be vector valued, linear, and anti-commutative in any two of the vectors in the product. The product should satisfy orthogonality, so it is orthogonal to all of its arguments. This means no more than n − 1 vectors can be used in n dimensions. The magnitude of the product should equal the volume of the parallelotope with the vectors as edges, which can be calculated using the Gram determinant.
So the conditions are • orthogonality: $\left( \mathbf{a_1} \times \ \cdots \ \times \mathbf{a_k}\right) \cdot \mathbf{a_j} = 0$ • the Gram determinant: $|\mathbf{a_1} \times \ \cdots \ \times \mathbf{a_k} |^2 = \det (\mathbf{a_i \cdot a_j}) = \begin{vmatrix} \mathbf {a_1 \cdot a_1} & \mathbf {a_1 \cdot a_2} & \dots & \mathbf {a_1 \cdot a_k}\\ \mathbf {a_2 \cdot a_1} & \mathbf {a_2 \cdot a_2} & \dots & \mathbf {a_2 \cdot a_k}\\ \dots & \dots & \dots & \dots\\ \mathbf {a_k \cdot a_1} & \mathbf {a_k \cdot a_2} & \dots & \mathbf {a_k \cdot a_k}\\ \end{vmatrix}$ The Gram determinant is the squared volume of the parallelotope with a1, ..., ak as edges. If there are just two vectors x and y it simplifies to the condition for the binary cross product given above, that is $|\mathbf{x} \times \mathbf{y}|^2 = \begin{vmatrix} \mathbf {x \cdot x} & \mathbf {x \cdot y}\\ \mathbf {y \cdot x} & \mathbf {y \cdot y}\\ \end{vmatrix} = |\mathbf{x}|^2 |\mathbf{y}|^2 - (\mathbf{x} \cdot \mathbf{y})^2 .$ With these conditions a non-trivial cross product only exists: • as a binary product in three and seven dimensions • as a product of n − 1 vectors in n > 3 dimensions • as a product of three vectors in eight dimensions The product of n − 1 vectors in n dimensions is the Hodge dual of the exterior product of n − 1 vectors. One version of the product of three vectors in eight dimensions is given by $\mathbf{a} \times \mathbf{b} \times \mathbf{c} = (\mathbf{a} \wedge \mathbf{b} \wedge \mathbf{c}) ~\lrcorner~ (\mathbf{w} - \mathbf{ve}_8)$ where v is the same trivector as used in seven dimensions, $\lrcorner$ is again the left contraction, and w = −ve12...7 is a 4-vector. ## Notes 1. WS Massey (1983). "Cross products of vectors in higher dimensional Euclidean spaces". The American Mathematical Monthly (Mathematical Association of America) 90 (10): 697–701. doi:10.2307/2323537. JSTOR 2323537. 2. WS Massey (1983).
"Cross products of vectors in higher dimensional Euclidean spaces". The American Mathematical Monthly 90 (10): 697–701. doi:10.2307/2323537. JSTOR 2323537. "If one requires only three basic properties of the cross product ... it turns out that a cross product of vectors exists only in 3-dimensional and 7-dimensional Euclidean space." 3. This table is due to Arthur Cayley (1845) and John T. Graves (1843). See G Gentili, C Stoppato, DC Struppa and F Vlacci (2009). "Recent developments for regular functions of a hypercomplex variable". In Irene Sabadini, M Shapiro, F Sommen. Hypercomplex analysis (Conference on quaternionic and Clifford analysis; proceedings ed.). Birkhäuser. p. 168. ISBN 978-3-7643-9892-7. 4. Lev Vasilʹevitch Sabinin, Larissa Sbitneva, I. P. Shestakov (2006). "§17.2 Octonion algebra and its regular bimodule representation". Non-associative algebra and its applications. CRC Press. p. 235. ISBN 0-8247-2669-3. 5. Rafał Abłamowicz, Pertti Lounesto, Josep M. Parra (1996). "§ Four octonionic basis numberings". Clifford algebras with numeric and symbolic computations. Birkhäuser. p. 202. ISBN 0-8176-3907-1. 6. Mappings are restricted to be bilinear by (Massey 1993) and Robert B Brown and Alfred Gray (1967). "Vector cross products". Commentarii Mathematici Helvetici (Birkhäuser Basel) 42 (1/December): 222–236. doi:10.1007/BF02564418. 7. The definition of angle in n-dimensions ordinarily is given in terms of the dot product as: $(\mathbf{x \cdot y}) = |\mathbf x ||\mathbf y | \ \cos \theta \ , \quad 0 \le \theta \le \pi \ ,$ where θ is the angle between the vectors. Consequently, this property of the cross product provides its magnitude as: $|\mathbf{ x \times y} |^2 =|\mathbf x |^2 |\mathbf y |^2 \left(1 - \cos^2 \theta \right) \ .$ From the Pythagorean trigonometric identity this magnitude equals $|\mathbf{x} \times \mathbf{y}| = |\mathbf{x}| |\mathbf{y}| \sin \theta$. See Francis Begnaud Hildebrand (1992).
Methods of applied mathematics (Reprint of Prentice-Hall 1965 2nd ed.). Courier Dover Publications. p. 24. ISBN 0-486-67002-3. 8. Lounesto, pp. 96–97. 9. Kendall, M. G. (2004). A Course in the Geometry of N Dimensions. Courier Dover Publications. p. 19. ISBN 0-486-43927-5. 10. Z.K. Silagadze (2002). "Multi-dimensional vector product". Journal of Physics A: Mathematical and General 35 (23): 4949. arXiv:math.RA/0204357. doi:10.1088/0305-4470/35/23/310. 11. Nathan Jacobson (2009). Basic algebra I (Reprint of Freeman 1974 2nd ed.). Dover Publications. pp. 417–427. ISBN 0-486-47189-6. 12. Further discussion of the tables and the connection of the Fano plane to these tables is found in: Tony Smith. "Octonion products and lattices". Retrieved 2010-07-11. 13. Rafał Abłamowicz, Bertfried Fauser (2000). Clifford Algebras and Their Applications in Mathematical Physics: Algebra and physics. Springer. p. 26. ISBN 0-8176-4182-3. 14. Jörg Schray, Corinne A. Manogue (1996). "Octonionic representations of Clifford algebras and triality". Foundations of Physics (Springer) 26 (1/January): 17–70. doi:10.1007/BF02058887. Also available as an arXiv preprint. 15. Bertfried Fauser (2004). "§18.4.2 Contractions". In Pertti Lounesto, Rafał Abłamowicz. Clifford algebras: applications to mathematics, physics, and engineering. Birkhäuser. pp. 292 ff. ISBN 0-8176-3525-4. 16. John C. Baez (2001). "The Octonions". Bull. Amer. Math. Soc. 39: 38. 17. Elduque, Alberto (2004). Vector cross products. 18. Darpö, Erik (2009). "Vector product algebras". Bulletin of the London Mathematical Society 41 (5): 898–902. doi:10.1112/blms/bdp066. See also: Real vector product algebras. CiteSeerX: 10.1.1.66.4. 19. Lounesto, §7.5: Cross products of k vectors in ℝn, p. 98. 20. Jean H. Gallier (2001). "Problem 7.10 (2)". Geometric methods and applications: for computer science and engineering. Springer. p. 244. ISBN 0-387-95044-3.
https://brilliant.org/problems/chess-and-probability/
# Chess and probability

Three unit squares are chosen at random from a chessboard. The probability that they form the letter 'L' (in any orientation) can be written as $$\displaystyle \frac{A}{\binom{64}{3}}$$. Find the value of A.
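A brute-force check (this sketch is an addition, not part of the problem) rests on the observation that three cells form an L-tromino, in any orientation, exactly when their bounding box is a 2 × 2 block:

```python
from itertools import combinations

cells = [(r, c) for r in range(8) for c in range(8)]

def is_L(tri):
    rows = [r for r, _ in tri]
    cols = [c for _, c in tri]
    # Three distinct cells whose bounding box is exactly 2x2 are
    # three corners of a 2x2 square, i.e. an L-tromino.
    return max(rows) - min(rows) == 1 and max(cols) - min(cols) == 1

A = sum(1 for tri in combinations(cells, 3) if is_L(tri))
```

Equivalently, each of the 7 × 7 positions of a 2 × 2 block contributes 4 L-shapes (one per omitted corner), so the brute force agrees with the count 7 · 7 · 4.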
https://stackabuse.com/calculating-variance-and-standard-deviation-in-python/
## Calculating Variance and Standard Deviation in Python ### Introduction Two closely related statistical measures will allow us to get an idea of the spread or dispersion of our data. The first measure is the variance, which measures how far from their mean the individual observations in our data are. The second is the standard deviation, which is the square root of the variance and measures the amount of variation or dispersion of a dataset. In this tutorial, we'll learn how to calculate the variance and the standard deviation in Python. We'll first code a Python function for each measure and later, we'll learn how to use the Python statistics module to accomplish the same task quickly. With this knowledge, we'll be able to take a first look at our datasets and get a quick idea of the general dispersion of our data. ### Calculating the Variance In statistics, the variance is a measure of how far individual (numeric) values in a dataset are from the mean or average value. The variance is often used to quantify spread or dispersion. Spread is a characteristic of a sample or population that describes how much variability there is in it. A high variance tells us that the values in our dataset are far from their mean. So, our data will have high levels of variability. On the other hand, a low variance tells us that the values are quite close to the mean. In this case, the data will have low levels of variability. To calculate the variance in a dataset, we first need to find the difference between each individual value and the mean. The variance is the average of the squares of those differences. We can express the variance with the following math expression: $$\sigma^2 = \frac{1}{n}{\sum_{i=0}^{n-1}{(x_i - \mu)^2}}$$ In this equation, xi stands for individual values or observations in a dataset. μ stands for the mean or average of those values. n is the number of values in the dataset. The term xi - μ is called the deviation from the mean. 
So, the variance is the mean of square deviations. That's why we denoted it as σ2. Say we have a dataset [3, 5, 2, 7, 1, 3]. To find its variance, we need to calculate the mean which is: $$(3 + 5 + 2 + 7 + 1 + 3) / 6 = 3.5$$ Then, we need to calculate the sum of the square deviation from the mean of all the observations. Here's how: $$(3 - 3.5)^2 + (5 - 3.5)^2 + (2 - 3.5)^2 + (7 - 3.5)^2 + (1 - 3.5)^2 + (3 - 3.5)^2 = 23.5$$ To find the variance, we just need to divide this result by the number of observations like this: $$23.5 / 6 = 3.916666667$$ That's all. The variance of our data is 3.916666667. The variance can be difficult to interpret, mainly because its units are the square of the units of the original data. For example, if the observations in our dataset are measured in pounds, then the variance will be measured in square pounds. So, we can say that the observations are, on average, 3.916666667 square pounds away from the mean 3.5. Fortunately, the standard deviation fixes this problem, as we'll see in a later section. If we apply the concept of variance to a dataset, then we can distinguish between the sample variance and the population variance. The population variance is the variance that we saw before and we can calculate it using the data from the full population and the expression for σ2. The sample variance is denoted as S2 and we can calculate it using a sample from a given population and the following expression: $$S^2 = \frac{1}{n}{\sum_{i=0}^{n-1}{(x_i - X)^2}}$$ This expression is quite similar to the expression for calculating σ2 but in this case, xi represents individual observations in the sample and X is the mean of the sample. S2 is commonly used to estimate the variance of a population (σ2) using a sample of data. However, S2 systematically underestimates the population variance. For that reason, it's referred to as a biased estimator of the population variance. When we have a large sample, S2 can be an adequate estimator of σ2.
For small samples, it tends to be too low. Fortunately, there is another simple statistic that we can use to better estimate σ2. Here's its equation: $$S^2_{n-1} = \frac{1}{n-1}{\sum_{i=0}^{n-1}{(x_i - X)^2}}$$ This looks quite similar to the previous expression. It is still the average of the squared deviations from the mean, but in this case, we divide by n - 1 instead of by n. This is called Bessel's correction. With Bessel's correction, S2n-1 becomes an unbiased estimator of the population variance. So, in practice, we'll use this equation to estimate the variance of a population using a sample of data. Note that S2n-1 is also known as the variance with n - 1 degrees of freedom. Now that we've learned how to calculate the variance using its math expression, it's time to get into action and calculate the variance using Python. #### Coding a variance() Function in Python To calculate the variance, we're going to code a Python function called variance(). This function will take some data and return its variance. Inside variance(), we're going to calculate the mean of the data and the square deviations from the mean. Finally, we're going to calculate the variance by finding the average of the deviations. Here's a possible implementation for variance(): >>> def variance(data): ... # Number of observations ... n = len(data) ... # Mean of the data ... mean = sum(data) / n ... # Square deviations ... deviations = [(x - mean) ** 2 for x in data] ... # Variance ... variance = sum(deviations) / n ... return variance ... >>> variance([4, 8, 6, 5, 3, 2, 8, 9, 2, 5]) 5.76 We first calculate the number of observations (n) in our data using the built-in function len(). Then, we calculate the mean of the data, dividing the total sum of the observations by the number of observations. The next step is to calculate the square deviations from the mean.
To do that, we use a list comprehension that creates a list of square deviations using the expression (x - mean) ** 2 where x stands for every observation in our data. Finally, we calculate the variance by summing the deviations and dividing them by the number of observations n. In this case, variance() will calculate the population variance because we're using n instead of n - 1 to calculate the mean of the deviations. If we're working with a sample and we want to estimate the variance of the population, then we'll need to update the expression variance = sum(deviations) / n to variance = sum(deviations) / (n - 1). We can refactor our function to make it more concise and efficient. Here's an example: >>> def variance(data, ddof=0): ... n = len(data) ... mean = sum(data) / n ... return sum((x - mean) ** 2 for x in data) / (n - ddof) ... >>> variance([4, 8, 6, 5, 3, 2, 8, 9, 2, 5]) 5.76 >>> variance([4, 8, 6, 5, 3, 2, 8, 9, 2, 5], ddof=1) 6.4 In this case, we remove some intermediate steps and temporary variables like deviations and variance. We also turn the list comprehension into a generator expression, which is much more efficient in terms of memory consumption. Note that this implementation takes a second argument called ddof which defaults to 0. This argument allows us to set the degrees of freedom that we want to use when calculating the variance. For example, ddof=0 will allow us to calculate the variance of a population. Meanwhile, ddof=1 will allow us to estimate the population variance using a sample of data. #### Using Python's pvariance() and variance() Python includes a standard module called statistics that provides some functions for calculating basic statistics of data. In this case, the statistics.pvariance() and statistics.variance() are the functions that we can use to calculate the variance of a population and of a sample respectively. 
Here's how Python's pvariance() works: >>> import statistics >>> statistics.pvariance([4, 8, 6, 5, 3, 2, 8, 9, 2, 5]) 5.760000000000001 We just need to import the statistics module and then call pvariance() with our data as an argument. That will return the variance of the population. On the other hand, we can use Python's variance() to calculate the variance of a sample and use it to estimate the variance of the entire population. That's because variance() uses n - 1 instead of n to calculate the variance. Here's how it works: >>> import statistics >>> statistics.variance([4, 8, 6, 5, 3, 2, 8, 9, 2, 5]) 6.4 This is the sample variance S2. So, the result of using Python's variance() should be an unbiased estimate of the population variance σ2, provided that the observations are representative of the entire population. ### Calculating the Standard Deviation The standard deviation measures the amount of variation or dispersion of a set of numeric values. Standard deviation is the square root of variance σ2 and is denoted as σ. So, if we want to calculate the standard deviation, then all we just have to do is to take the square root of the variance as follows: $$\sigma = \sqrt{\sigma^2}$$ Again, we need to distinguish between the population standard deviation, which is the square root of the population variance (σ2) and the sample standard deviation, which is the square root of the sample variance (S2). We'll denote the sample standard deviation as S: $$S = \sqrt{S^2}$$ Low values of standard deviation tell us that individual values are closer to the mean. High values, on the other hand, tell us that individual observations are far away from the mean of the data. Values that are within one standard deviation of the mean can be thought of as fairly typical, whereas values that are three or more standard deviations away from the mean can be considered much more atypical. They're also known as outliers. 
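As a quick illustration of that rule of thumb (the dataset below is made up for the example), we can flag values more than three standard deviations from the mean:

```python
import statistics

data = list(range(1, 30)) + [500]  # 500 is an artificial extreme value

mean = statistics.mean(data)
sigma = statistics.pstdev(data)

# Values more than three standard deviations from the mean
outliers = [x for x in data if abs(x - mean) > 3 * sigma]
print(outliers)  # prints [500]
```

Note that the extreme value itself inflates the standard deviation, so for very small datasets a single outlier may never exceed the three-sigma threshold.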
Unlike variance, the standard deviation will be expressed in the same units as the original observations. Therefore, the standard deviation is a more meaningful and easier to understand statistic. Retaking our example, if the observations are expressed in pounds, then the standard deviation will be expressed in pounds as well. If we're trying to estimate the standard deviation of the population using a sample of data, then we'll be better served using n - 1 degrees of freedom. Here's a math expression that we typically use to estimate the population standard deviation: $$\sigma_x = \sqrt\frac{\sum_{i=0}^{n-1}{(x_i - \bar{x})^2}}{n-1}$$ Note that this is the square root of the sample variance with n - 1 degrees of freedom. This is equivalent to saying: $$S_{n-1} = \sqrt{S^2_{n-1}}$$ Once we know how to calculate the standard deviation using its math expression, we can take a look at how we can calculate this statistic using Python. #### Coding a stdev() Function in Python To calculate the standard deviation of a dataset, we're going to rely on our variance() function. We're also going to use the sqrt() function from the math module of the Python standard library. Here's a function called stdev() that takes the data from a population and returns its standard deviation: >>> import math >>> # We rely on our previous implementation for the variance >>> def variance(data, ddof=0): ... n = len(data) ... mean = sum(data) / n ... return sum((x - mean) ** 2 for x in data) / (n - ddof) ... >>> def stdev(data): ... var = variance(data) ... std_dev = math.sqrt(var) ... return std_dev >>> stdev([4, 8, 6, 5, 3, 2, 8, 9, 2, 5]) 2.4 Our stdev() function takes some data and returns the population standard deviation. To do that, we rely on our previous variance() function to calculate the variance and then we use math.sqrt() to take the square root of the variance.
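As a sanity check (added here; not part of the original tutorial), the hand-rolled functions can be compared against the standard library on the same data:

```python
import math
import statistics

def variance(data, ddof=0):
    n = len(data)
    mean = sum(data) / n
    return sum((x - mean) ** 2 for x in data) / (n - ddof)

def stdev(data):
    # Population standard deviation: square root of the population variance.
    return math.sqrt(variance(data))

data = [4, 8, 6, 5, 3, 2, 8, 9, 2, 5]

# Our function and statistics.pstdev() should agree up to float rounding.
assert math.isclose(stdev(data), statistics.pstdev(data))
```

Exact equality may fail because the two implementations accumulate floating-point error differently, which is why `math.isclose()` is used instead of `==`.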
If we want to use stdev() to estimate the population standard deviation using a sample of data, then we just need to calculate the variance with n - 1 degrees of freedom as we saw before. Here's a more generic stdev() that allows us to pass in degrees of freedom as well: >>> def stdev(data, ddof=0): ... return math.sqrt(variance(data, ddof)) >>> stdev([4, 8, 6, 5, 3, 2, 8, 9, 2, 5]) 2.4 >>> stdev([4, 8, 6, 5, 3, 2, 8, 9, 2, 5], ddof=1) 2.5298221281347035 With this new implementation, we can use ddof=0 to calculate the standard deviation of a population, or we can use ddof=1 to estimate the standard deviation of a population using a sample of data. #### Using Python's pstdev() and stdev() The Python statistics module also provides functions to calculate the standard deviation. We can find pstdev() and stdev(). The first function takes the data of an entire population and returns its standard deviation. The second function takes data from a sample and returns an estimation of the population standard deviation. Here's how these functions work: >>> import statistics >>> statistics.pstdev([4, 8, 6, 5, 3, 2, 8, 9, 2, 5]) 2.4000000000000004 >>> statistics.stdev([4, 8, 6, 5, 3, 2, 8, 9, 2, 5]) 2.5298221281347035 We first need to import the statistics module. Then, we can call statistics.pstdev() with data from a population to get its standard deviation. If we don't have the data for the entire population, which is a common scenario, then we can use a sample of data and use statistics.stdev() to estimate the population standard deviation. ### Conclusion The variance and the standard deviation are commonly used to measure the variability or dispersion of a dataset. These statistic measures complement the use of the mean, the median, and the mode when we're describing our data. In this tutorial, we've learned how to calculate the variance and the standard deviation of a dataset using Python. 
We first learned, step by step, how to create our own functions to compute them, and later we learned how to use the Python statistics module as a quick way to approach their calculation.

Leodanis is an industrial engineer from Holguín, Cuba, who loves Python and software development. He is a self-taught Python programmer with 5+ years of experience building desktop applications with PyQt.
https://socratic.org/questions/what-is-the-average-speed-of-an-object-that-is-still-at-t-0-and-accelerates-at-a-26
# What is the average speed of an object that is still at t=0 and accelerates at a rate of a(t) = 3t-4 from t in [2, 3]?

Jul 16, 2017

The average velocity is $-0.5\ \mathrm{m\,s^{-1}}$. (The question says "speed", but the computed quantity is the signed average of $v$, i.e. the average velocity.)

#### Explanation:

The velocity is the integral of the acceleration:

$a(t) = 3t - 4$

$v(t) = \int (3t - 4)\, \mathrm{dt} = \frac{3}{2} t^2 - 4t + C$

Plugging in the initial condition at $t = 0$:

$v(0) = 0 - 0 + C = 0 \implies C = 0$

The average velocity is

$(3 - 2)\,\overline{v} = \int_2^3 \left(\frac{3}{2} t^2 - 4t\right) \mathrm{dt} = \left[\frac{1}{2} t^3 - 2 t^2\right]_2^3 = \left(\frac{27}{2} - 18\right) - \left(4 - 8\right)$

$\overline{v} = -0.5\ \mathrm{m\,s^{-1}}$
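The integral above is easy to check numerically. Here's a small sketch (standard library only, not part of the original answer) that averages $v(t)$ over $[2, 3]$ with the midpoint rule:

```python
# v(t) obtained by integrating a(t) = 3t - 4 with v(0) = 0, so C = 0.
def v(t):
    return 1.5 * t ** 2 - 4 * t

# Average value of v over [2, 3] via the midpoint rule.
N = 10_000
h = (3 - 2) / N
avg_v = sum(v(2 + (k + 0.5) * h) for k in range(N)) * h / (3 - 2)
print(round(avg_v, 6))  # → -0.5
```

The midpoint rule is exact up to O(h^2) error here, so with 10,000 subintervals the numeric average matches the analytic value -0.5 to many decimal places.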
https://math.stackexchange.com/questions/1397533/multiplicative-inverse-in-the-modulo-of-the-larger-number-what-does-that-mean
# "multiplicative inverse in the modulo of the larger number" — what does that mean?

While I was reading this article, I came across the following paragraph:

The interesting thing is that if two numbers have a $\gcd$ of $1$, then the smaller of the two numbers has a multiplicative inverse in the modulo of the larger number. It is expressed in the following equation:

and he gives the following example:

Let's work in the set $\mathbb{Z_9}$, then $4\in\mathbb{Z_9}$ and $\gcd(4,9)=1$. Therefore $4$ has a multiplicative inverse (written $4^{−1}$) in $\bmod9$, which is $7$. And indeed, $4\cdot7=28\equiv1\pmod9$. But not all numbers have inverses. For instance, $3\in\mathbb{Z_9}$ but $3^{−1}$ does not exist! This is because $\gcd(3,9)=3\neq1$.

But what I do not understand is what he means by:

then the smaller of the two numbers has a multiplicative inverse in the modulo of the larger number.

and how he got the $7$.

• I would never say "in the modulo of"; rather I would just use "modulo" as a preposition: "the multiplicative inverse modulo the larger number". All mathematicians understand that. The former locution is not standard. ${}\qquad{}$ – Michael Hardy Aug 14 '15 at 23:02
• Speculation on the author's motivation: English prepositions are a closed class - new ones aren't invented often. If you're a native English speaker, you learn them all pretty early. Adding a new one to your vocabulary isn't like learning a new noun, adjective, or verb. It's weird. It takes some mental effort to find the right way to parse a sentence with an unfamiliar preposition. Avoiding prepositional "modulo" may just be a way of going easy on the target audience. – user141452 Aug 15 '15 at 1:16

The two numbers in his example are $4$ and $9$. The statement is that $4$ has a multiplicative inverse in the integers modulo $9$, or in other words, there is an integer $n$ such that $4 \cdot n \equiv 1 \mod 9$.
The $7$ can be obtained by some trial and error (you only need to check the integers $1$ through $9$). He then gives an example of an integer that does not have a multiplicative inverse modulo $9$, namely $3$.

• it was a perfect answer, thanks a lot, but I am wondering why they call it a multiplicative inverse. It is a little confusing because the multiplicative inverse of a number is the number which you multiply by that number to get 1; for example, if we multiply 9 * 1/9 this will give one, so 1/9 is the multiplicative inverse of 9 – Mohamad Aug 15 '15 at 1:30
• @MohammadHaidar It is the same thing, except you have modulo $9$ afterward; as I stated above, the multiplicative inverse of $4$ (working modulo $9$) is the number $n$ such that $4 \cdot n$ equals $1$ (modulo $9$). – angryavian Aug 15 '15 at 1:33

How familiar are you with modular arithmetic? What the author means is that if $\gcd(n,m)=1$ and $m<n$, then we can find a number $k\in\{1,2,...,n-1\}$ such that $mk\equiv 1 \pmod n$. One way to find the multiplicative inverse is to use the Extended Euclidean Algorithm, but for something small like $4$ and $9$, it is pretty fast to just multiply $4$ by everything in the set $\{1,2,...,8\}$, and see what comes out to be congruent to $1$ modulo $9$. It is a fact from group theory that only one of these numbers should be the inverse.

• Just a comment (+1): I find (when just going by hand, for small numbers) that it's easier for me personally to look at multiples of the modulus. It only takes up to $9 \cdot 3 = 27$ to find that $9 \cdot 3 + 1 = 28$ is a multiple of $4$. – pjs36 Aug 14 '15 at 23:13

Concretely, what he means is this: if $a<b$ are positive integers, and are coprime (that is, their gcd is 1), then there is some integer $c$ such that $ac$ leaves a remainder of 1 when divided by $b$ - that is, $ac-1$ is a multiple of $b$.

A more abstract way of putting this: consider the set $\{0, 1, . . . , b-1\}$.
There is a binary operation $\otimes$ on this set, "inspired" by multiplication on the integers, which is defined as follows: $x\otimes y$ is the remainder left when $xy$ is divided by $b$. So, for instance, if $b=7$ then $4\otimes 3=5$. We can similarly extend addition in this way: $x\oplus y$ is the remainder left when $x+y$ is divided by $b$. The set $\{0, 1, . . . , b-1\}$ equipped with these operations is called the integers modulo $b$.

In the normal universe, a multiplicative inverse of a number $x$ is a number $y$ such that $xy=1$. In the integers, there are no (interesting) numbers with multiplicative inverses; in the integers modulo $b$, however, we get lots of multiplicative inverses! For example, if $b=7$, then 2 is the multiplicative inverse of 4 in the integers modulo 7.

(NOTE: In general, any operation on the integers which respects remainders has an analogue on the set $\{0, 1, . . . , b-1\}$, but we're usually most interested in plus and times.)

Notation-wise, what I've written above is non-standard. We write "$a\equiv c \pmod b$" to mean that $a$ leaves a remainder of $c$ when divided by $b$ (or that $a$ and $c$ leave the same remainder); so we usually write "$x+y\equiv z \pmod b$" rather than "$x\oplus y=z$ in mod $b$".

Let $n$ be a positive integer and consider the positive integers, $x$, that are less than $n$ and for which $\gcd(x,n)=1$. Then $x$ is invertible modulo $n$. Also, if $\gcd(x,n) > 1$, then $x$ is not invertible modulo $n$. For example, $\gcd(3,14)=1$ and $3 \cdot 5 \equiv 15 \equiv 1 \pmod{14}$. However, $\gcd(6,14)=2$ and there is no integer $y$ such that $6y \equiv 1 \pmod{14}$.

A multiplicative inverse of a number $a\in R$ for a ring $R$ is a number $a^{-1}$ such that $aa^{-1}=1$.
What he means by "modulo the bigger number", which we can denote $b$, is a multiplicative inverse of $a$ (the smaller number) in the ring $\mathbb{Z}_b$, which is the ring of the integers under the identification $\forall c,d \in \mathbb{Z}: \ \bar{c}=\bar{d}\in\mathbb{Z}_b \iff c\equiv d \pmod b$. Note this is interesting because in general $\mathbb{Z}_b$ is a ring in which we do not know whether we have multiplicative inverses. $\mathbb{Z}_b$ is a field (a commutative ring in which every nonzero element has a multiplicative inverse) $\iff b$ is a prime number.
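The Extended Euclidean Algorithm mentioned in one of the answers above can be sketched in a few lines of Python (an editor's illustration, not part of the original thread):

```python
# Extended Euclidean algorithm: returns (g, x, y) with a*x + b*y = g = gcd(a, b).
def extended_gcd(a, b):
    if b == 0:
        return a, 1, 0
    g, x, y = extended_gcd(b, a % b)
    return g, y, x - (a // b) * y

def mod_inverse(a, n):
    # a is invertible mod n exactly when gcd(a, n) == 1
    g, x, _ = extended_gcd(a, n)
    if g != 1:
        raise ValueError(f"{a} has no inverse mod {n} (gcd = {g})")
    return x % n

print(mod_inverse(4, 9))  # → 7, since 4 * 7 = 28 ≡ 1 (mod 9)
```

Running mod_inverse(3, 9) raises a ValueError, matching the article's remark that $3^{-1}$ does not exist in $\mathbb{Z}_9$ because $\gcd(3,9)=3\neq1$.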
http://mathhelpforum.com/differential-equations/128969-sturm-liouville-problem.html
## Sturm Liouville Problem

Problem: find the eigenvalues and eigenfunctions of the following Sturm-Liouville problem:

$ \frac{d}{dx} \left(x^4 \frac{dy}{dx} \right) + \lambda yx^2 = 0, \qquad 1 \leq x \leq 2, \qquad y(1)=y(2) = 0 $

Since $ x \neq 0 $ I can rearrange to

$ \displaystyle x^4 \frac{d^2 y}{dx^2} + 4x^3 \frac{dy}{dx} + \lambda yx^2 = 0 $

When substituting $ y = x^k $ I am having difficulty simplifying to get $k$ on its own. Am I doing this correctly?
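For reference, carrying the substitution $y = x^k$ through gives an indicial equation; this is a sketch of the standard Euler-equation step, not the thread's own work:

```latex
x^4\,k(k-1)\,x^{k-2} + 4x^3\,k\,x^{k-1} + \lambda\,x^{k+2}
  = \left(k^2 + 3k + \lambda\right)x^{k+2} = 0,
\qquad\text{so}\qquad
k = \frac{-3 \pm \sqrt{9 - 4\lambda}}{2}.
```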
http://math.gatech.edu/seminar-and-colloquia-schedule/2018-W17
Seminars and Colloquia Schedule

Monday, April 23, 2018 - 14:00, Location: Skiles 006, Hong Van Le, Institute of Mathematics CAS, Praha, Czech Republic, Organizer: Thang Le

Novikov homology was introduced by Novikov in the early 1980s, motivated by problems in hydrodynamics. The Novikov inequalities in the Novikov homology theory give lower bounds for the number of critical points of a Morse closed 1-form on a compact differentiable manifold M. In the first part of my talk I shall survey the Novikov homology theory in the finite dimensional setting and its further developments in the infinite dimensional setting, with applications in the theory of symplectic fixed points and Lagrangian intersection/embedding problems. In the second part of my talk I shall report on my recent joint work with Jean-Francois Barraud and Agnes Gadbled on the construction of the Novikov fundamental group associated to a cohomology class of a closed 1-form on M, and its application to obtaining new lower bounds for the number of critical points of a Morse 1-form.

Monday, April 23, 2018 - 15:00, Location: Skiles 006, Georgia Tech/Ben-Gurion University, Organizer: Amnon Besser

The talk reports on joint work with Wayne Raskind and concerns the conjectural definition of a new type of regulator map into a quotient of an algebraic torus by a discrete subgroup, which should fit into "refined" Beilinson-type conjectures, extending special cases considered by Gross and Mazur-Tate. The construction applies to a smooth complete variety over a p-adic field K which has totally degenerate reduction, a technical term roughly saying that cycles account for the entire etale cohomology of each component of the special fiber. The regulator is constructed out of the l-adic regulators for all primes l simultaneously.
I will explain the construction, the special case of the Tate elliptic curve where the regulator on cycles is the identity map, and the case of K_2 of Mumford curves, where the regulator turns out to be a map constructed by Pal. Time permitting, I will also say something about the relation with syntomic regulators.

Series: PDE Seminar. Tuesday, April 24, 2018 - 15:00, Location: Skiles 006, SISSA, Organizer: Yao Yao

We prove an abstract theorem giving a $t^\epsilon$ bound for any $\epsilon > 0$ on the growth of the Sobolev norms in some abstract linear Schrödinger equations. The abstract theorem is applied to nonresonant harmonic oscillators in R^d. The proof is obtained by conjugating the system to some normal form in which the perturbation is a smoothing operator. Finally, time permitting, we will show how to construct a perturbation of the harmonic oscillator which provokes growth of Sobolev norms.

Wednesday, April 25, 2018 - 01:55, Location: Skiles 005, March Boedihardjo, UCLA, Organizer: Shahaf Nitzan

Abstract: I will state a version of Voiculescu's noncommutative Weyl-von Neumann theorem for operators on l^p that I obtained. This allows certain classical results concerning unitary equivalence of operators on l^2 to be generalized to operators on l^p if we relax unitary equivalence to similarity. For example, the unilateral shift on l^p, 1 …

Friday, April 27, 2018 - 15:00, Location: Skiles 005, Cornell University, Organizer: Lutz Warnke

Given a collection of finite sets, Kneser-type problems aim to partition this collection into parts with well-understood intersection pattern, such as in each part any two sets intersect. Since Lovász' solution of Kneser's conjecture, concerning intersections of all k-subsets of an n-set, topological methods have been a central tool in understanding intersection patterns of finite sets.
We will develop a method that, in addition to using topological machinery, takes the topology of the collection of finite sets into account via a translation to a problem in Euclidean geometry. This leads to simple proofs of old and new results.

Friday, April 27, 2018 - 15:00, Location: Skiles 202, Brian Kennedy, School of Physics, Georgia Tech, Organizer: Michael Loss

Electrons possess both spin and charge. In one dimension, quantum theory predicts that systems of interacting electrons may behave as though their charge and spin are transported at different speeds. We discuss examples of how such many-particle effects may be simulated using neutral atoms and radiation fields. Joint work with Xiao-Feng Shi.

Friday, April 27, 2018 - 15:05, Location: Skiles 271, Bhanu Kumar, GTMath, Organizer: Jiaqi Yang

This talk follows Chapter 4 of the well-known text by Guckenheimer and Holmes. It is intended to present the theorems on averaging for systems with periodic perturbation, but slow evolution of the solution. Also, a discussion of Melnikov's method for finding persistence of homoclinic orbits and periodic orbits will also be given. Time permitting, an application to the circular restricted three body problem may also be included.
http://mathoverflow.net/questions/42653/number-of-n-th-roots-of-elements-in-a-finite-group-and-higher-frobenius-schur-in
# Number of n-th roots of elements in a finite group and higher Frobenius-Schur indicators This is the second follow-up to this question on square roots of elements in symmetric groups and is concerned with generalisations to $n$-th roots. Let $G$ be a finite group and let $r_n(g)$ be the number of elements $h\in G$ such that $h^n = g$. In other words, $$r_n(g) = \sum_{h\in G}\delta_{h^n,g},$$ where $\delta$ is the usual Kronecker delta. In a comment to my answer to the above mentioned question, Richard Stanley notes that if $G=S_m$, then $r_n(g)$ attains its maximum at the identity element of $G$. My question is: how far does this generalise and what exactly does it tell us about $G$? This should be primarily a question about higher Frobenius-Schur indicators. Let me elaborate a bit. The function $r_n$ is clearly a class function on $G$ and, upon taking its inner product with all irreducible characters of $G$, one finds that $$r_n(g) = \sum_\chi s_n(\chi)\chi(g),$$ where the sum runs over all irreducible complex characters of $G$ and $s_n(\chi)$ is the $n$-th Frobenius-Schur indicator of $\chi$, defined as $$s_n(\chi) = \frac{1}{|G|}\sum_{h\in G}\chi(h^n).$$ When $n=2$, the Frobenius-Schur indicator is equal to 0,1 or -1 and carries explicit information about the field of definition of the representation associated with $\chi$. What do higher Frobenius-Schur indicators tell us about the representations and, by extension, about the group? What do we know about their values? Have higher Frobenius-Schur indicators been studied in any detail? Given $n\in \mathbb{N}$, for what groups $G$ do we have $\max_g \; r_n(g) = r_n(1)$? For what groups does this hold for all $n$? As noted by Richard Stanley, the latter is true for all symmetric groups. It is also easy to see that the set of groups with this property is closed under direct products, and that all finite abelian groups possess this property. - Thanks, Denis, for the edit! Well spotted. –  Alex B. 
Oct 18 '10 at 16:02

Here are some things you probably know. For a representation $W$ of $G$, let $\text{Inv}(W)$ denote the subspace of $G$-invariants. For an irreducible representation $V$ with character $\chi$, the F-S indicator $s_2(\chi)$ naturally appears in the formulas $$\dim \text{Inv}(S^2(V)) = \frac{1}{|G|} \sum_{g \in G} \frac{\chi(g)^2 + \chi(g^2)}{2}$$ and $$\dim \text{Inv}(\Lambda^2(V)) = \frac{1}{|G|} \sum_{g \in G} \frac{\chi(g)^2 - \chi(g^2)}{2}.$$ More precisely the F-S indicator is their difference, while their sum is $1$ if $V$ is self-dual and $0$ otherwise.

The corresponding formulas involving $s_3(\chi)$ are $$\dim \text{Inv}(S^3(V)) = \frac{1}{|G|} \sum_{g \in G} \frac{\chi(g)^3 + 3 \chi(g^2) \chi(g) + 2 \chi(g^3)}{6}$$ and $$\dim \text{Inv}(\Lambda^3(V)) = \frac{1}{|G|} \sum_{g \in G} \frac{\chi(g)^3 - 3 \chi(g^2) \chi(g) + 2 \chi(g^3)}{6}.$$ Here the F-S indicator $s_3(\chi)$ naturally appears in the sum, not the difference, of these two dimensions.

Of course $T^3(V)$ decomposes into three pieces, and the third piece $S^{(2,1)}(V)$ satisfies $$\dim \text{Inv}(S^{(2,1)}(V)) = \frac{1}{|G|} \sum_{g \in G} \frac{4 \chi(g)^3 - 4 \chi(g^3)}{6}.$$ So $s_3(\chi)$ constrains the dimensions of these spaces in some more mysterious way than $s_2(\chi)$ does.

The sum of these dimensions $$\dim \text{Inv}(T^3(V)) = \frac{1}{|G|} \sum_{g \in G} \chi(g)^3$$ tells us whether $V$ admits a "self-triality," and this dimension is an upper bound on $s_3(\chi)$. If $V$ is self-dual, this is equivalent to asking whether there is an equivariant bilinear map $V \times V \to V$, which might be of interest to somebody. If this dimension is nonzero then $s_3(\chi)$ gives us information about how a triality behaves under permutation.
The situation for higher values of $3$ is worse in the sense that the bulk of the corresponding formulas are not completely in terms of F-S indicators but in terms of inner products of F-S indicators, and their interpretation will only get more confusing. Already I don't know of many applications of triality (in fact I know exactly one: http://math.ucr.edu/home/baez/octonions/node7.html).

- Dear Qiaochu, thank you! I didn't know about triality, very interesting! I will have to think about this a bit. I like "higher values of 3". – Alex B. Feb 9 '11 at 3:53

If $n > 2$, there is NO absolute upper bound on the "higher" F.S. indicator $s_n(\chi)$. This is Problem 4.9 in my character theory book. (A hint is given there.)
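Richard Stanley's observation quoted in the question, that $r_n(g)$ attains its maximum at the identity for symmetric groups, can be checked by brute force for small cases. This is an editor's illustration, not part of the thread; it tabulates $r_2(g)$ for $S_4$ by direct enumeration:

```python
from itertools import permutations
from collections import Counter

def compose(p, q):
    # (p ∘ q)(i) = p[q[i]]; permutations are tuples acting on 0..m-1
    return tuple(p[i] for i in q)

def nth_power(p, n):
    r = tuple(range(len(p)))  # identity permutation
    for _ in range(n):
        r = compose(p, r)
    return r

def r_counts(m, n):
    # r_n(g) = #{h in S_m : h^n = g}, tabulated for every g at once
    counts = Counter()
    for h in permutations(range(m)):
        counts[nth_power(h, n)] += 1
    return counts

counts = r_counts(4, 2)
identity = tuple(range(4))
print(counts[identity])  # → 10: identity + 6 transpositions + 3 double transpositions
print(counts[identity] == max(counts.values()))  # → True: maximum at the identity
```

The same loop with other small m and n lets one probe the question's conjecture empirically before reaching for the indicator formulas.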
https://www.physicsforums.com/threads/ramanujan-summation.922509/
# Ramanujan Summation

• #1 Leo Authersh

## Main Question or Discussion Point

What does the equation ζ(−1) = −1/12 represent precisely? It's impossible for that to be the sum of all natural numbers. And it is also mentioned in all the maths articles that the 'equal to' in the equation should not be understood in a traditional way. If so, then why does the Wikipedia article state that 1+2+3+.... = - 1/12?

Last edited by a moderator:

• #2 fresh_42 (Mentor)

> It's impossible for that to be the sum of all natural numbers.

Yes.

> And it is also mentioned in all the maths articles that the 'equal to' in the equation should not be understood in a traditional way.

Yes.

> If so, then why does the Wikipedia article state that 1+2+3+.... = - 1/12?

Have you read it, and what do you know about the zeta function and analytic continuations?

God, I hate that video. The video is very misleading. I hoped they would be somewhat clear in it. First of all, the series $1+2+3+4+...$ diverges. You will find no mathematician that disagrees with this. The most natural sum is $1+2+3+4+... = +\infty$.

Now, what is the $-1/12$ thing all about? Well, some mathematicians have found a way to associate a number to divergent series. I would not call that number the "sum" of the series; it is just a number associated to it. In this case, the number associated to $1+2+3+4+...$ is $-1/12$.

Now, we often write $1+2+3+4+5+... = -1/12$, but that's where you should be careful, since that $=$ sign does not mean the classical one; in fact it means that we evaluate the series in a nonstandard way (like Ramanujan summation). In many circumstances, replacing $1+2+3+4+...$ with $-1/12$ is wrong and a very bad idea, but in some it might work out. It should then be shown why exactly we can replace the sum by $-1/12$.
Also of interest:
https://mathoverflow.net/questions/64898/values-of-the-riemann-zeta-function-and-the-ramanujan-summation-how-strong-is
https://en.wikipedia.org/wiki/Ramanujan's_sum

• #3 Leo Authersh

@fresh_42 As far as I have understood from the topics I have studied earlier, this zeta regularization is used to define the type of the series. ζ(−1) = −1/12 represents the value of the series in the complex plane at −1. It is just the value of the series at a particular point in the complex plane. It just defines the nature of the series in the complex plane. And those topics are strong on the point that the zeta function (the extension of the series in the complex plane) is continuous up to infinity. So, as per my understanding, the Wikipedia notion of writing the series as 1+2+3+4+5..... = -1/12 is wrong.

One thing that can be said is that Ramanujan based this discovery upon the already proven series 1-1+1-1+1-... = 1/2. If you think about this series you can perceive that the value 1/2 is not the summation, because the partial sums alternate forever between 1 and 0. But one can understand the nature of the series: the sum should be between 1 and 0, and hence the average value is calculated as 1/2.

It's similar to quantum physics, where they say that the chance of an electron being present simultaneously in two different locations is not zero percent. In some instances it can be 50%, which can be interpreted numerically as the series above. Again, the common misinterpretation is that the 50% chance means the electron will be present in two different locations at the same time. But that's not true. It actually is that the possibility of the electron being in any one of the locations at the same time is 50% (the probability of the electron being present in a location is mutually dependent on its presence or absence in another location). And this is suggested by Schrödinger's paradox.
Last edited by a moderator:

• #4 Leo Authersh

@fresh_42 Note that the above answer is completely based on my understanding. And my understanding is incomplete, hence I asked this question in the thread for more comprehension of the subject. I haven't yet read the links you posted. I will read them and let you know. Hopefully they will give me a better grasp of the definition.

• #5 Leo Authersh

@fresh_42 Nevertheless, I want to assert that the Numberphile video is nothing but hypocrisy. It completely misleads people for the sake of making money through YouTube views. Especially the reactions given by both of them in the thumbnail of the video explicate the deception they execute.

• #6 mathman

> What does the equation ζ(−1) = −1/12 represent precisely? [...]

The basic idea is analytic extension. The series is equal to some function where it converges. The function itself may be well defined outside the series' convergence range. A very simple example:

$$\frac{1}{1-x}=1+x+x^2+x^3+....$$

for |x|<1; however, the function is defined for all x except x=1.

• #7 Leo Authersh

> The basic idea is analytic extension. [...]

Thank you for the explanation. If so, what is the range of convergence in the Ramanujan sum? And how can we have different ranges when the series is of natural numbers and not a variable?

• #8 mathman

> Thank you for the explanation.
> If so, what is the range of convergence in the Ramanujan sum? And how can we have different ranges when the series is of natural numbers and not a variable?

I am not familiar with the Ramanujan sum. The series of numbers results from evaluating the series at a particular value of the argument. For example: $$\frac{1}{1-x}$$ evaluated at x=2 leads to 1+2+4+8+......=-1.

• #9 Leo Authersh

> $$\frac{1}{1-x}$$ evaluated at x=2 leads to 1+2+4+8+......=-1.

Can you please explain how it is -1?

• #10

$\frac{1}{1-2}=-1$
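fresh_42's point that $-1/12$ is "a number associated to the series" can be made concrete numerically. One standard route (an editor's sketch, not the thread's own derivation) Abel-sums the alternating eta series and then uses the identity $\eta(s) = (1 - 2^{1-s})\,\zeta(s)$ at $s = -1$:

```python
# Abel-sum eta(-1) "=" 1 - 2 + 3 - 4 + ... by evaluating the power
# series sum n*(-1)^(n+1)*x^n just below x = 1.
def abel_eta_minus_one(x, terms=50000):
    return sum((-1) ** (n + 1) * n * x ** n for n in range(1, terms))

eta = abel_eta_minus_one(0.999)   # closed form is x/(1+x)^2, which -> 1/4 as x -> 1
zeta = eta / (1 - 2 ** 2)         # eta(s) = (1 - 2^(1-s)) zeta(s) at s = -1
print(eta, zeta)                  # ~0.25 and ~-0.0833 = -1/12
```

None of this makes $1+2+3+\dots$ converge; it only exhibits the regularized value that analytic continuation attaches to it.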
https://www.physicsforums.com/threads/finding-the-time-it-takes-for-two-stacked-blocks-to-travel.955972/
# Finding the time it takes for two stacked blocks to travel

• #1

## Homework Statement

The coefficient of static friction is 0.604 between the two blocks shown. The coefficient of kinetic friction between the lower block and the floor is 0.104. Force F causes both blocks to cross a distance of 3.44 m, starting from rest. What is the least amount of time in which the motion can be completed without the top block sliding on the lower block, if the mass of the lower block is 1.13 kg and the mass of the upper block is 2.25 kg?

## The Attempt at a Solution

I looked at the attached file that my teacher gave us. I found the normal of the top block, which was 22.0 N. Since it equals one of the normals of the bottom block, I substituted the same value in for calculating the other normal. This "newfound" normal was used to calculate the acceleration in the x direction using Fnet = ma ---> ma = friction static - friction kinetic.

#### Attachments

• 58.9 KB Views: 56

• #2 nrqed (Homework Helper)

> [quoting the problem statement and attempt above]

Let's call the block under "A" and the block on top "B". The applied force is only on A? And it is horizontal? What you have to do is to determine the maximum horizontal force on block B (top) such that it does not slide. This is given by the max static friction force. Calculate this first, and then set that equal to $m_B a$. That will give you the acceleration, which will be the acceleration of both blocks since they move as one.

• #3

Does this maximum static friction force equal the force being applied to pull the boxes? In the FBD it indicates so, but do I just calculate the max static friction force alone?

• #4 nrqed

No, it is not equal to the force applied. You need to use the equation for the maximum static friction force on an object (the one that contains $\mu_s$).
So i get: Fs = (coefficient of static friction) x (normal force of top block) Fs = 13.3 13.3 = mb(a) 13.3= 2.25kg (a) a = 5.91 m/s^2 d = Vit + 1/2at^2 (Vit vanishes because theres no initial velocity) d = 1/2at^2 3.44 = 1/2 (5.91)t^2 3.44/2.955 = t^2 //sqrt both sides 1.08 s = t this is apparently wrong :( • #6 nrqed Homework Helper Gold Member 3,605 205 So i get: Fs = (coefficient of static friction) x (normal force of top block) Fs = 13.3 13.3 = mb(a) 13.3= 2.25kg (a) a = 5.91 m/s^2 d = Vit + 1/2at^2 (Vit vanishes because theres no initial velocity) d = 1/2at^2 3.44 = 1/2 (5.91)t^2 3.44/2.955 = t^2 //sqrt both sides 1.08 s = t this is apparently wrong :( I don't see any mistake. The force is only acting on block A, right? • #7 49 1 I don't see any mistake. The force is only acting on block A, right? yeah it is. But what i wish i could show you is the diagram. I should probably mention that the top block has a rope attached to it and this is where that question i asked you about calculating static friction on its own came from. Knowing this, would it affect the outcome? • #8 nrqed Homework Helper Gold Member 3,605 205 yeah it is. But what i wish i could show you is the diagram. I should probably mention that the top block has a rope attached to it and this is where that question i asked you about calculating static friction on its own came from. Knowing this, would it affect the outcome? AHH! It changes everything. I was assuming there were no other force than friction on the top block • #9 49 1 AHH! It changes everything. I was assuming there were no other force than friction on the top block yes i am so sorry ;(! thats why I was getting confused because I thought that the tension in the rope = static friction of the top block. Im back to square one now, please help!!! 
• #10 haruspex Homework Helper Gold Member 32,739 5,034 So i get: Fs = (coefficient of static friction) x (normal force of top block) Fs = 13.3 13.3 = mb(a) 13.3= 2.25kg (a) a = 5.91 m/s^2 d = Vit + 1/2at^2 (Vit vanishes because theres no initial velocity) d = 1/2at^2 3.44 = 1/2 (5.91)t^2 3.44/2.955 = t^2 //sqrt both sides 1.08 s = t this is apparently wrong :( Your method would be right if F were applied to the lower block, but it is applied to the top block. Clearly, the friction between the lower block and ground will be important, but you have not considered it. • #11 nrqed Homework Helper Gold Member 3,605 205 yes i am so sorry ;(! thats why I was getting confused because I thought that the tension in the rope = static friction of the top block. Im back to square one now, please help!!! No, *I* am sorry, I opened the file you had attached, but I did not notice there were two pages!! I just saw the first one, so I did not realize there was a rope there. I am sorry • #12 nrqed Homework Helper Gold Member 3,605 205 Your method would be right if F were applied to the lower block, but it is applied to the top block. Clearly, the friction between the lower block and ground will be important, but you have not considered it. It is my fault, I had not seen the FBD and I gave instructions thinking that there were no horizontal forces on the top block. My mistake. • #13 49 1 It is my fault, I had not seen the FBD and I gave instructions thinking that there were no horizontal forces on the top block. My mistake. So if i calculate the force of the normal from the top block, according to the diagram it is the same magnitude for the bottom block? Or do i reverse its signs? • #14 nrqed Homework Helper Gold Member 3,605 205 So if i calculate the force of the normal from the top block, according to the diagram it is the same magnitude for the bottom block? Or do i reverse its signs? 
Watch out, there are two normal forces on the bottom block, one due to the contact with the top block and one with the floor. Calculate these two. Then calculate the net horizontal force on the bottom block, using for the static friction force the maximum value, which you had calculated before, $f_s = \mu_s m_{top} g$. Then the net horizontal force on the bottom block is $f_s - f_k$. Set this to $m_{bot} a$ and then do as before. • #15 49 1 Watch out, there are two normal forces on the bottom block, one due to the contact with the top block and one with the floor. Calculate these two. Then calculate the net horizontal force on the bottom block, using for the static friction force the maximum value, which you had calculated before, $f_s = \mu_s m_{top} g$. Then the net horizontal force on the bottom block is $f_s - f_k$. Set this to $m_{bot} a$ and then do as before. so if i calculate the normal force of the top block to be 22.0 N, since there are two normal forces acting on the bottom block, the normal force of the top block will act as one of the normal forces of the bottom block. Should it be 22.0 N or -22.0 N? • #16 nrqed Homework Helper Gold Member 3,605 205 so if i calculate the normal force of the top block to be 22.0 N, since there are two normal forces acting on the bottom block, the normal force of the top block will act as one of the normal forces of the bottom block. Should it be 22.0 N or -22.0 N? The magnitude is 22 N while the y component is -22 N. But for the calculation of the static friction force, $f_s$, you need to use the magnitude. You will of course get the same result as before. • #17 49 1 The magnitude is 22 N while the y component is -22 N. But for the calculation of the static friction force, $f_s$, you need to use the magnitude. You will of course get the same result as before. im still lost, I don't know what to do :(, any more hints? 
• #18 haruspex Homework Helper Gold Member 32,739 5,034 im still lost, I don't know what to do :(, any more hints? You need to analyse the forces and acceleration of the lower block. • #19 nrqed Homework Helper Gold Member 3,605 205 im still lost, I don't know what to do :(, any more hints? First step: calculate the maximum static friction force $f_s$ (it is the same static friction force on th stop block an don the bottom block by the action-reaction principle ). Second step: calculate the force of kinetic friction on the bottom block (due to the friction with the ground). Do you see how to do those two steps? • Last Post Replies 2 Views 4K • Last Post Replies 7 Views 8K • Last Post Replies 1 Views 1K • Last Post Replies 0 Views 8K • Last Post Replies 1 Views 4K • Last Post Replies 2 Views 2K • Last Post Replies 8 Views 3K • Last Post Replies 19 Views 1K • Last Post Replies 2 Views 2K • Last Post Replies 1 Views 1K
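For a concrete endpoint, the two steps suggested in the replies above can be turned into numbers. The sketch below assumes g = 9.81 m/s² (the thread never fixes a value of g) and the setup described in the thread: the rope pulls the top block, which drags the bottom block along by friction.

```python
# Sketch of the procedure outlined in the replies (assumed g = 9.81 m/s^2).
import math

g = 9.81          # m/s^2 (assumption)
mu_s = 0.604      # static friction between the two blocks
mu_k = 0.104      # kinetic friction, lower block / floor
m_top = 2.25      # kg (block B, pulled by the rope)
m_bot = 1.13      # kg (block A)
d = 3.44          # m

# Step 1: maximum static friction between the blocks.
# This is the largest horizontal force the top block can exert on the bottom one.
f_s = mu_s * m_top * g                  # ~ 13.3 N

# Step 2: kinetic friction from the floor on the bottom block
# (the floor supports the weight of BOTH blocks).
f_k = mu_k * (m_top + m_bot) * g

# Net horizontal force on the bottom block sets the common acceleration.
a = (f_s - f_k) / m_bot

# Kinematics from rest: d = a t^2 / 2.
t = math.sqrt(2 * d / a)
print(round(a, 2), round(t, 3))         # 8.75 m/s^2, 0.887 s
```

This gives roughly 0.89 s, shorter than the OP's 1.08 s because the bottom block (not the top) is the one limited by friction.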
https://www.ques10.com/p/19988/fluid-mechanics-question-paper-may-2016-mechanic-5/
Question Paper: Fluid Mechanics : Question Paper May 2016 - Mechanical Engineering (Semester 3) | Visveswaraya Technological University (VTU)

## Fluid Mechanics - May 2016

### Mechanical Engg. (Semester 3)
TOTAL MARKS: 100
TOTAL TIME: 3 HOURS
(1) Question 1 is compulsory.
(2) Attempt any four from the remaining questions.
(3) Assume data wherever required.
(4) Figures to the right indicate full marks.

1(a) Explain the following fluid properties with relevant equations: (i) Bulk modulus (ii) Capillarity (iii) Kinematic viscosity (iv) Surface tension. (8 marks)

1(b) What is cavitation? Explain the importance of cavitation in the study of fluid mechanics. (4 marks)

1(c) A square plate of side 1 m and weight 350 N slides down an inclined plane with a uniform velocity of 2 m/s. The inclined plane is laid on a slope of 6 : 8 and has an oil film of 1 mm thickness. Calculate the viscosity of the oil. (8 marks)

### Explain the terms:
2(a)(i) Total pressure (2 marks)
2(a)(ii) Centre of pressure (2 marks)
2(a)(iii) Pressure at a point (2 marks)

2(b) A simple U-tube manometer containing mercury is connected to a pipe in which a fluid of sp. gr. 0.8, at a vacuum pressure, is flowing. The other end of the manometer is open to the atmosphere. Find the vacuum pressure in the pipe if the difference of mercury levels in the two limbs is 40 cm and the height of fluid in the left limb from the centre of the pipe is 15 cm below. (4 marks)

2(c) A circular plate of 3.0 m diameter with a concentric circular hole of diameter 1.5 m is immersed in water in such a way that its greatest and least depths below the free surface are 4 m and 1.5 m respectively. Determine the total pressure on one face of the plate and the position of the centre of pressure. (10 marks)

3(a) A metallic body floats at the interface of mercury and water in such a way that 30% of its volume is submerged in mercury and 70% in water. Find the density of the metallic body. (5 marks)

3(b) A wooden block of size 3 m × 2 m × 1 m and of specific gravity 0.8 floats in water. Determine its metacentric height. (5 marks)

3(c) A fluid flow is given by V = 10x³i − 8x³yj. Find the shear strain rate and state whether the flow is rotational or irrotational. (5 marks)

3(d) The velocity potential is given by ϕ = x(2y − 1). Calculate the value of the stream function at the point (1, 2). (5 marks)

4(a) State Bernoulli's theorem for fluid flow. Derive an expression for Bernoulli's equation from first principles. Also state the assumptions made in such a derivation. (10 marks)

4(b) A pipeline carrying oil of specific gravity 0.8 changes in diameter from 300 mm at a position A to 500 mm at a position B, which is 5 m at a higher level. If the pressures at A and B are 1.962 bar and 1.491 bar respectively, and the discharge is 150 litres/s, determine the loss of head during the flow. Also state the direction of the flow. (10 marks)

5(a) When do you prefer an orifice meter over a venturimeter? Why? (2 marks)

5(b) An oil of specific gravity 0.9 is flowing in a venturimeter of size 30 cm × 10 cm. The oil-mercury differential manometer shows a reading of 20 cm. Calculate the flow rate of oil through the horizontal venturimeter. Take the discharge coefficient of the venturimeter as 0.98. (6 marks)

5(c) A rectangular channel 2 m wide has a discharge of 0.25 m³/s, which is measured by a right-angled V-notch weir. Find the position of the apex of the notch from the bed of the channel if the maximum depth of water is not to exceed 1.3 m. Take Cd = 0.62. (4 marks)

5(d) Show by Buckingham's π-theorem that the frictional torque T of a disc of diameter D rotating at a speed N in a fluid of viscosity μ and density ρ in a flow is given by $$T=D^5N^2\rho \,\phi\left [ \dfrac{\mu}{D^2N\rho} \right ]$$ (8 marks)

6(a) Explain the terms HGL and TEL in the case of flow through pipes. (4 marks)

6(b) List the various frictional and minor losses occurring in a flow through pipes. Also write down the expressions for the loss of head in each of the above cases. (6 marks)

6(c) A horizontal pipeline 40 m long is connected to a water tank at one end and discharges freely into the atmosphere at the other end. For the first 25 m of its length from the tank, the pipe is 150 mm in diameter; its diameter is then suddenly enlarged to 300 mm. The height of the water level in the tank is 8 m above the centre of the pipe. Determine the rate of flow, considering all losses of head which occur. Take f = 0.01 for both sections of the pipe. (10 marks)

7(a) Explain the terms critical Reynolds number, velocity gradient and pressure gradient with respect to a viscous flow. (6 marks)

7(b) Derive an expression for the velocity distribution for Hagen-Poiseuille flow occurring in a circular pipe. Hence prove that the maximum velocity is twice the average velocity of the flow. (10 marks)

7(c) Determine (i) the pressure gradient and (ii) the shear stress at the two horizontal parallel plates, for the laminar flow of oil with a maximum velocity of 1.5 m/s between two horizontal parallel fixed plates which are 80 mm apart. Take the viscosity of oil as 1.962 N·s/m². (4 marks)

8(a) Explain the terms: (i) boundary layer thickness (ii) displacement thickness (iii) momentum thickness (iv) energy thickness. (6 marks)

8(b) A flat plate 2 m × 2 m moves at 40 km/hr in stationary air of density 1.25 kg/m³. If the coefficients of drag and lift are 0.2 and 0.8 respectively, find (i) the lift force, (ii) the drag force, (iii) the resultant force and (iv) the power required to keep the plate in motion. (4 marks)

8(c) Obtain an expression for the velocity of a sound wave in a compressible fluid in terms of the change of pressure and change of density. (6 marks)

8(d) Calculate the Mach number and Mach angle at a point on a jet-propelled aircraft which is flying at 900 km/hour at sea level, where the air temperature is 15 °C. Take k = 1.4 and R = 287 J/kg·K. (4 marks)
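By way of illustration, question 5(b) can be solved in a few lines with the standard venturimeter discharge relation Q = Cd·a₁·a₂·√(2gh)/√(a₁² − a₂²). The specific gravity of mercury (13.6) and g = 9.81 m/s² are assumed values not stated in the paper.

```python
# Worked sketch for Q5(b): discharge through a horizontal venturimeter,
# given a differential mercury manometer reading.
import math

d1, d2 = 0.30, 0.10          # inlet / throat diameters, m
s_oil, s_hg = 0.9, 13.6      # specific gravities (s_hg is an assumption)
x = 0.20                     # manometer reading, m of mercury
Cd = 0.98
g = 9.81                     # m/s^2 (assumption)

# Manometer reading converted to head of the flowing oil.
h = x * (s_hg / s_oil - 1.0)             # ~ 2.82 m of oil

a1 = math.pi / 4 * d1**2
a2 = math.pi / 4 * d2**2

# Q = Cd * a1 * a2 * sqrt(2 g h) / sqrt(a1^2 - a2^2)
Q = Cd * a1 * a2 * math.sqrt(2 * g * h) / math.sqrt(a1**2 - a2**2)
print(round(Q * 1000, 1), "litres/s")    # 57.6 litres/s
```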
https://physics.stackexchange.com/questions/126905/how-can-a-generalised-force-be-dependent-on-an-angle-i-e-not-a-vector
# How can a generalised force be dependent on an angle, i.e. not a vector?

I'm currently working through an example question in Patrick Hamill's *A Student's Guide to Lagrangians and Hamiltonians*. The question I'm having conceptual difficulty with is:

A particle is acted upon with force components $F_x$ and $F_y$. Determine the generalized forces, $Q_i$, in polar coordinates.

It's simple to find the generalised force associated with $r$: $Q_r = F_x\cos(\theta) + F_y\sin(\theta)$. However, I can't seem to get my head around the answer for the one associated with $\theta$, which is $Q_\theta = -F_x r\sin(\theta) + F_y r\cos(\theta)$. I think the difficulty I'm having stems from $\theta$ looking a lot like a vector here in component form. As far as I am aware, angles are never vectors.

**Answer:** $\theta$ in this case is a coordinate, i.e. part of the description of a point. The vector associated with that coordinate could be called $\hat{e}_\theta$, and it points in the direction in which $\theta$ changes. The generalized force is defined as $Q_i = \vec F \cdot \partial \vec r/\partial q_i$; with $\vec r = (r\cos\theta, r\sin\theta)$ we get $\partial \vec r/\partial \theta = (-r\sin\theta, r\cos\theta)$, which is exactly where the quoted expression for $Q_\theta$ comes from. So, in polar coordinates, the force components depend on the position at which they are evaluated.
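As a sanity check on the two quoted expressions, the defining relation $Q_i = \vec F \cdot \partial \vec r / \partial q_i$ can be evaluated symbolically. This is a sketch using sympy, which is an arbitrary choice of tool.

```python
# Generalized forces in polar coordinates via Q_i = F . d(r)/d(q_i).
import sympy as sp

r, th, Fx, Fy = sp.symbols('r theta F_x F_y')
pos = sp.Matrix([r * sp.cos(th), r * sp.sin(th)])   # Cartesian position vector
F = sp.Matrix([Fx, Fy])                             # applied force components

Q_r  = sp.simplify(F.dot(pos.diff(r)))    # F_x*cos(theta) + F_y*sin(theta)
Q_th = sp.simplify(F.dot(pos.diff(th)))   # -F_x*r*sin(theta) + F_y*r*cos(theta)
print(Q_r)
print(Q_th)
```

Note that `Q_th` carries a factor of r: it has units of torque, not force, which is why a "generalized force" conjugate to an angle is not itself a vector component.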
https://www.physicsforums.com/threads/simple-simple-question.185598/
# Simple Simple question

1. Sep 19, 2007

### dbx

1. The problem statement, all variables and given/known data

lim x → negative infinity of (2x - 4) / sqrt(3x^2 - 5)

2. Relevant equations

No equations.

3. The attempt at a solution

I cannot figure out how to rationalize the denominator. I tried sqrt(3x^2 + 5) but I am not sure what to do after that. Please solve using ALGEBRA.

2. Sep 19, 2007

### dynamicsolo

You won't rationalize the denominator by using sqrt(3x^2 + 5), because that will just give you a difference of two squares under the root sign. In fact, you don't need to rationalize this at all. Instead, multiply numerator and denominator by (1/x):

[(2x - 4) · (1/x)] / [(1/x) · sqrt(3x^2 - 5)],

change the (1/x) in the denominator to the radical sqrt[1/(x^2)], and multiply numerator and denominator through. Now there is a *catch* for the limit x → negative infinity (or any negative value, really); since we are following *negative* values of x, the square root of x^2 is going to be -x [because the square root operation gives a positive value, but our x's are negative]. So, to evaluate *this* limit, we must use (1/x) = -sqrt[1/(x^2)]:

[(2x - 4) · (1/x)] / [ -sqrt[1/(x^2)] · sqrt(3x^2 - 5) ]

= (2 - [4/x]) / ( -sqrt(3 - [5/{x^2}]) ).

From here, we can now just use the Limit Law, lim x → ±infinity of 1/(x^p) = 0 for p positive, to obtain

(2 - 0) / [ -sqrt(3 - 0) ] = -2/sqrt(3), or after rationalizing, -[2 sqrt(3)]/3.

If you were evaluating the limit for x → plus infinity, that minus sign for the square root wouldn't be needed and the limit would be +[2 sqrt(3)]/3. This illustrates an interesting behavior of rational functions with even roots in them. By now, you are probably used to rational functions of polynomials having just one horizontal asymptote (limit at infinity).
When you have a numerator or denominator with a square root of an expression (or fourth root, etc.), however, because of the sign change for negative x, you often end up with *two* horizontal asymptotes, one for each sign of infinity.
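The two-asymptote behaviour described here is easy to confirm symbolically. This is a quick check, using sympy as one possible tool:

```python
# Verify both one-sided limits at infinity for (2x - 4)/sqrt(3x^2 - 5).
import sympy as sp

x = sp.symbols('x')
expr = (2*x - 4) / sp.sqrt(3*x**2 - 5)

L_neg = sp.limit(expr, x, -sp.oo)   # -2*sqrt(3)/3
L_pos = sp.limit(expr, x, sp.oo)    #  2*sqrt(3)/3
print(L_neg, L_pos)
```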
https://asmedigitalcollection.asme.org/JRC/proceedings-abstract/JRC2012/44656/23/266882
The aim of the paper is to classify restraining rail, discuss the advantages and disadvantages of each type of restraining rail, derive a formula to determine the flangeway gap, and finally suggest a type of restraining rail for use. Three types of restraining rail are classified:

1. Active restraining rail: defined as restraining rail that reduces the angle of attack (AOA) by more than 50%.
2. Semi-active restraining rail: defined as restraining rail that reduces the AOA by 50% or less, preferably between 40% and 50%.
3. Passive restraining rail: defined as restraining rail that does not reduce the AOA. In other words, it plays a passive role in steering the wheel.

A design procedure is established to estimate the flangeway gap. The advantages and disadvantages of each type of restraining rail are discussed from design, maintenance, and functional points of view. The issue of optimizing the rail/wheel profile is also discussed in the context of the presence of restraining rail. One component of the flangeway gap is the space required for the angularity of the wheel. 2D CAD drawings are not efficient for this purpose, as such drawings cannot account for the AOA or the height of the restraining rail above the rail level. In this context, the Nytram plot is a solution; however, the plot needs a number of sectional drawings at the gauge-point level, the rail level, and the top-of-restraining-rail level, taken from the 3D drawing. A mathematical model that accounts for both the AOA and the height of the restraining rail above the top of the rail is developed here to capture the essence of the Nytram plot, and thereby to assess the space required for the angularity of the wheel. Finally, a semi-active restraining rail, with a formula for the flangeway gap, is suggested for use. Being less elaborate and less time consuming, the formula is easier and quicker than the Nytram plot for estimating the flangeway gap.
Moreover, one can quickly assess the effect of wheel size, the height of the restraining rail from the top of rail, and the radius of the curve on the flangeway gap.
https://proofwiki.org/wiki/Condition_for_Agreement_of_Family_of_Mappings
# Condition for Agreement of Family of Mappings

## Theorem

Let $\family {A_i}_{i \mathop \in I}, \family {B_i}_{i \mathop \in I}$ be families of non-empty sets.

Let $\family {f_i}_{i \mathop \in I}$ be a family of mappings such that:

$\forall i \in I: f_i \in \map \FF {A_i, B_i}$

Then:

$\ds \bigcup_{i \mathop \in I} f_i \in \map \FF {\bigcup_{i \mathop \in I} A_i, \bigcup_{i \mathop \in I} B_i}$

if and only if:

$\ds \forall i, j \in I: \Dom {f_i} \cap \Dom {f_j} \ne \O \implies \paren {\forall a \in \paren {\Dom {f_i} \cap \Dom {f_j} }: \tuple {a, b} \in f_i \implies \tuple {a, b} \in f_j}$

## Proof

Let $\family {A_i}_{i \mathop \in I}, \family {B_i}_{i \mathop \in I}$ be families of non-empty sets.

Let $\family {f_i}_{i \mathop \in I}$ be a family of mappings such that:

$\forall i \in I: f_i \in \map \FF {A_i, B_i}$

### Sufficient Condition

Let:

$\ds \bigcup_{i \mathop \in I} f_i \in \map \FF {\bigcup_{i \mathop \in I} A_i, \bigcup_{i \mathop \in I} B_i}$

Let $i, j \in I$ be such that:

$\Dom {f_i} \cap \Dom {f_j} \ne \O$

Let $a \in \paren {\Dom {f_i} \cap \Dom {f_j} }$.

Let $\ds b \in \bigcup_{i \mathop \in I} B_i$ be such that:

$\tuple {a, b} \in f_i$

Aiming for a contradiction, suppose:

$\tuple {a, b} \notin f_j$

As $a \in \paren {\Dom {f_i} \cap \Dom {f_j} }$:

$\ds \exists c \in \bigcup_{i \mathop \in I} B_i: \tuple {a, c} \in f_j$

and by supposition $c \ne b$.

As $\tuple {a, b} \in f_i$:

$\ds \tuple {a, b} \in \bigcup_{i \mathop \in I} f_i$

Thus:

$\ds \tuple {a, b}, \tuple {a, c} \in \bigcup_{i \mathop \in I} f_i$

such that $b \ne c$, while $\ds \bigcup_{i \mathop \in I} f_i$ is a mapping.

This is a contradiction, so the supposition that $\tuple {a, b} \notin f_j$ is false.

So:

$\tuple {a, b} \in f_j$

$\Box$

### Necessary Condition

Let:

$\forall i, j \in I: \Dom {f_i} \cap \Dom {f_j} \ne \O \implies \paren {\forall a \in \paren {\Dom {f_i} \cap \Dom {f_j} }: \tuple {a, b} \in f_i \implies \tuple {a, b} \in f_j}$

Let $\ds a \in \bigcup_{i \mathop \in I} A_i$.

Hence:

$\exists k \in I: a \in A_k$

Let $k \in I$ be such an index.

Thus:

$a \in \Dom {f_k}$

Let $l = \map {f_k} a$.

It follows that:

$\tuple {a, l} \in f_k$

and so:

$\ds \tuple {a, l} \in \bigcup_{i \mathop \in I} f_i$

Aiming for a contradiction, suppose:

$\ds \exists m \in \bigcup_{i \mathop \in I} B_i: \paren {\tuple {a, m} \in \bigcup_{i \mathop \in I} f_i \land m \ne l}$

Let $\ds m \in \bigcup_{i \mathop \in I} B_i$ be such an element.

Let $j \in I$ be such that:

$\tuple {a, m} \in f_j$

We have:

$a \in \paren {\Dom {f_k} \cap \Dom {f_j} }$

As $\tuple {a, l} \in f_k$:

$\tuple {a, l} \in f_j$

Therefore:

$\tuple {a, m}, \tuple {a, l} \in f_j$

where $f_j \in \map \FF {A_j, B_j}$ and $m \ne l$.

This contradicts the definition of mapping.

So:

$\ds \nexists m \in \bigcup_{i \mathop \in I} B_i: \paren {\tuple {a, m} \in \bigcup_{i \mathop \in I} f_i \land m \ne l}$

and so:

$\ds \bigcup_{i \mathop \in I} f_i \in \map \FF {\bigcup_{i \mathop \in I} A_i, \bigcup_{i \mathop \in I} B_i}$

$\blacksquare$
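The content of the theorem can be illustrated on finite mappings, modelled here as Python dicts (an informal sketch, not part of the formal proof): the union of the family is itself a mapping precisely when any two members agree wherever their domains overlap.

```python
# Finite illustration of the theorem: dicts as mappings.
def agree_on_overlap(fs):
    """True iff every pair of dicts assigns equal values on shared keys."""
    for i, f in enumerate(fs):
        for g in fs[i + 1:]:
            if any(f[k] != g[k] for k in f.keys() & g.keys()):
                return False
    return True

f1 = {1: 'a', 2: 'b'}
f2 = {2: 'b', 3: 'c'}      # agrees with f1 at the shared key 2
f3 = {2: 'X'}              # conflicts with f1 at key 2

# Union of f1, f2 is a well-defined mapping {1:'a', 2:'b', 3:'c'};
# adding f3 would put both (2,'b') and (2,'X') in the union.
assert agree_on_overlap([f1, f2])
assert not agree_on_overlap([f1, f2, f3])
```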
http://mathhelpforum.com/number-theory/8371-number-theory-solve-diophantine-equation.html
Thread: number theory, solve diophantine equation

1. number theory, solve diophantine equation

Prove that if (2 to the power n) - 15 = x squared, then n = 4 or n = 6.

Thanks very much guys

2. Originally Posted by suedenation: Prove that if (2 to the power n) - 15 = x squared, then n = 4 or n = 6.

We need,
$2^n-15=x^2$
If $n=2k$,
then, $(2^k)^2-15=x^2$
Thus, $(2^k)^2-x^2=15$
Thus, $(2^k-x)(2^k+x)=15$
The factorizations of 15 are $1\cdot 15$ and $3\cdot 5$.
Examine each of the cases to get the result.
Note, $2^k-x<2^k+x$ (taking $x \ge 0$).
Thus, we have only two possibilities,
$\left\{ \begin{array}{c}2^k-x=1\\2^k+x=15\end{array} \right\}$
$\left\{ \begin{array}{c}2^k-x=3\\2^k+x=5\end{array} \right\}$
Adding the equations in each pair gives, respectively,
$2\cdot 2^k=2^{k+1}=16\to k+1=4\to k=3$
$2\cdot 2^k=2^{k+1}=8\to k+1=3\to k=2$
In each case we have,
$n=2(3)=6$
$n=2(2)=4$
----
If $n=1$ then $2-15=x^2$ ---> Impossible.
If $n>1$ and $n=2k+1$,
then, $2(2^k)^2-15=x^2$
Thus, $x^2-2(2^k)^2=-15$
This is a Pellian look-a-like equation. (Now I am hoping that it has no solution.)
Basically I need to show that
$a^2-2b^2=-15$
has no solutions.

3. Aha! Solved it.
The diophantine equation,
$a^2-2b^2=-15$
has no solution!
The left hand side needs to be divisible by 15, hence by both 3 and 5:
$a^2-2b^2\equiv 0(\mbox{ mod }3)$
$a^2-2b^2\equiv 0(\mbox{ mod }5)$
Note that $3\nmid b$: otherwise $3\mid a$ as well, and then $9$ would divide $a^2-2b^2=-15$, which is false. Similarly $5\nmid b$. This means the Legendre symbols have the value,
$(2b^2/3)=(2/3)=1$
$(2b^2/5)=(2/5)=1$
But that is not true! Because by Euler's criterion,
$2^{\frac{3-1}{2}}\equiv -1 (\mbox{ mod } 3)$
$2^{\frac{5-1}{2}}\equiv -1 (\mbox{ mod }5)$

4. Thanks sooooo much, you are a genius....hehe
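A brute-force search (not a proof, just a sanity check of the claim proved above) over the first couple of hundred exponents finds exactly the two solutions:

```python
# Search n with 2**n - 15 a perfect square; by the theorem only n = 4, 6 qualify.
from math import isqrt

hits = [n for n in range(1, 200)
        if 2**n - 15 >= 0 and isqrt(2**n - 15)**2 == 2**n - 15]
print(hits)   # [4, 6]  ->  16 - 15 = 1^2,  64 - 15 = 7^2
```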
http://hepnp.ihep.ac.cn/article/2013/2
2013 Vol. 37, No. 2

2013, 37(2): 023101. doi: 10.1088/1674-1137/37/2/023101
Abstract: Using the analytical NU technique as well as an acceptable physical approximation to the centrifugal term, the bound-state solutions of the Duffin-Kemmer-Petiau equation are obtained for arbitrary quantum numbers. The solutions appear in terms of the Jacobi polynomials. Various explanatory figures and tables are included to complete the study.

2013, 37(2): 023102. doi: 10.1088/1674-1137/37/2/023102
Abstract: Mixing between the 2³S₁ and 1³D₁ Ds states is studied within the ³P₀ model. If mixing between these two 1⁻ states exists, Ds1*(2700)± and DsJ*(2860)± could be interpreted as the two orthogonal mixed states with mixing angle θ ≈ -80° in the case of a special β for each meson. However, in the case of a universal β for all mesons, Ds1*(2700)± could be interpreted as the mixed state of 2³S₁ and 1³D₁ with mixing angle 12° < θ < 21°, but DsJ*(2860)± seems difficult to interpret as the orthogonal partner of Ds1*(2700)±.

2013, 37(2): 023103. doi: 10.1088/1674-1137/37/2/023103
Abstract: Using a semi-relativistic potential model we investigate the spectra and decays of the bottomonium (bb̄) system. The Hamiltonian of our model consists of a relativistic kinetic energy term, a vector Coulomb-like potential and a scalar confining potential. Using this Hamiltonian, we obtain a spinless wave equation, which is then reduced to the form of a single particle Schrödinger equation. The spin dependent potentials are introduced as a perturbation. The three-dimensional harmonic oscillator wave function is employed as a trial wave function and the bb̄ mass spectrum is obtained by the variational method. The model parameters and the wave function that reproduce the bb̄ spectrum are then used to investigate some of their decay properties. The results obtained are then compared with the experimental data and with the predictions of other theoretical models.

2013, 37(2): 024101.
doi: 10.1088/1674-1137/37/2/024101
Abstract: One of the latest trends in the advancement of experimental high-energy physics is to identify the quark-gluon plasma (QGP) predicted qualitatively by quantum chromodynamics (QCD). We discuss whether the nuclear transparency effect, an important phenomenon connected with the dynamics of hadron-nuclear and nuclear-nuclear interactions, could reflect some particular properties of the medium. FASTMC is used for Au-Au collisions at RHIC energies. A critical change in the transparency is considered a signal of the appearance of new phases of strongly interacting matter and the QGP.

2013, 37(2): 024102. doi: 10.1088/1674-1137/37/2/024102
Abstract: The yields of fragments produced in the ⁶⁰Ni+¹²C reactions at 80 A and 140 A MeV, with maximum impact parameters of 1.5, 2 and 7.3 fm at 80 A MeV, are calculated with the statistical abrasion-ablation model. The yields of fragments are analyzed by the isobaric yield ratio (IYR) method to extract the ratio of the symmetry energy coefficient to temperature (a_sym/T). The incident energy is found to influence a_sym/T very little. It is found that a_sym/T of fragments with the same neutron excess I=N-Z increases as A increases, while a_sym/T of isobars decreases as A increases. The a_sym/T of prefragments is rather smaller than that of the final fragments, and the a_sym/T of fragments at small impact parameters is smaller than that at larger impact parameters, both of which indicate that a_sym/T decreases as the temperature increases. The choice of the reference IYRs is found to influence the extracted a_sym/T of fragments, especially the results for the more neutron-rich fragments. The surface-symmetry energy coefficient (b_s/T) and the volume-symmetry energy coefficient (b_v/T) are also extracted, and b_s/b_v is found to coincide with the theoretical results.

2013, 37(2): 025101.
doi: 10.1088/1674-1137/37/2/025101
Abstract: In this article, we assume that a cold charged perfect fluid makes up a spherical relativistic star. Our purpose is to investigate the dynamical properties of its exterior geometry by simulating the geodesic motion of a charged test particle moving around the star.

2013, 37(2): 026001. doi: 10.1088/1674-1137/37/2/026001
Abstract: In order to observe gamma rays in the 100 TeV energy region, a 4500 m² underground muon detector array using the water Cherenkov technique has been constructed, forming the TIBET Ⅲ+MD hybrid array. Because the showers induced by primary gamma rays contain far fewer muons than those induced by primary hadrons, a significant improvement of the gamma-ray sensitivity of the TIBET Ⅲ+MD array is expected. In this paper, the design and performance of the MD-A detector with a large Tyvek bag are reported.

2013, 37(2): 026002. doi: 10.1088/1674-1137/37/2/026002
Abstract: The two-dimensional interpolating readout, a new readout concept based on a resistive anode structure, was studied for micro-pattern gaseous detectors. While retaining high spatial resolution, the interpolating resistive readout structure leads to an enormous reduction of electronic channels compared with pure pixel devices, and also makes the detector more reliable and robust, since the resistive anode suppresses discharges. A GEM (gas electron multiplier) detector with a 2D interpolating resistive readout structure was set up and the performance of the detector was studied with ⁵⁵Fe 5.9 keV X-rays. The detector worked stably at gains up to 3.5×10⁴ without any discharge. An energy resolution of about 19% and a spatial resolution of about 219 μm (FWHM) were reached, and good imaging performance was also obtained.

2013, 37(2): 026003.
doi: 10.1088/1674-1137/37/2/026003
Abstract: Neutron background measurement is very important for dark matter detection, because a nucleus recoiling off an incident neutron produces almost the same signal as one recoiling off a dark matter particle. For deep underground experiments, the neutron background flux is so low that large-scale detection is usually necessary. In this paper, the relationship between detection efficiency and volume is investigated using Geant4; meanwhile, two geometrical schemes for this detection, a single large-sized detector and an arrayed multi-detector, are compared under the condition of the same volume. The geometrical parameters of the detectors are selected and detection efficiencies obtained under background conditions similar to those of the China Jinping Underground Laboratory (CJPL). The results show that for a large-scale Gd-doped liquid scintillation detector, the detection efficiency first increases with the size of the detector and then tends toward a constant. Under the condition of the same length and cross section, the arrayed multi-detector has almost the same detection performance as the single large-sized detector, while too large a number of detectors could degrade the detection performance. Considering engineering factors such as testing, assembly and production, the 4×4 arrayed detector scheme is flexible and more suitable. Furthermore, the conditions for using fast and slow signal coincidence detection and the detectable lower limit of the neutron energy are evaluated by simulating the light process.

2013, 37(2): 026004. doi: 10.1088/1674-1137/37/2/026004
Abstract: The low energy particle detector (LEPD) is one of the main payloads onboard the China Seismo-Electromagnetic Satellite (CSES). The detector is designed to detect space electrons (0.1-10 MeV) and protons (2-50 MeV).
It has the capability of identifying electrons and protons, and of measuring the energy spectrum and the incident angle of the particles. The LEPD is made up of a silicon tracker system, a CsI(Tl) mini-calorimeter, an anti-coincidence system made of plastic scintillator, as well as electronics and a data acquisition system (DAQ). The tracker is also a kind of ΔE-E telescope; it consists of two layers of double-sided silicon strip detectors (DSSD). The signals from the silicon tracker are read out by two application-specific integrated circuits (ASIC), which can also generate an event trigger for the LEPD. The response of the DSSD system in the LEPD to charged particles was tested with ²⁴¹Am 5.486 MeV α particles. The results show that the DSSD system works well and performs well at detecting charged particles and measuring the position of incident particles.

2013, 37(2): 026201. doi: 10.1088/1674-1137/37/2/026201
Abstract: The position effect of the photomultiplier tube (PMT) of the electromagnetic calorimeter (ECAL) of the Alpha Magnetic Spectrometer-02 (AMS-02) has been studied with beam-test data. The reconstructed deposited energy in a layer versus the incidence position in the cell can be described by a Gaussian distribution; the maximum and minimum values are obtained when the particle passes across the center and the edge of a cell, respectively. The distribution can be used to correct the effect of the incidence position on energy reconstruction. A much better energy resolution was obtained with the correction: for 100 GeV electrons, the energy resolution improved from 3% to 2%.

2013, 37(2): 026202. doi: 10.1088/1674-1137/37/2/026202
Abstract: Resorting to the Hessian matrix, an analytical formula is obtained to determine the optimal luminosity proportion for the τ mass scan experiment.
Comparison of the numerical results indicates the consistency between the present analytical evaluation and the previous computation based on the sampling technique.

2013, 37(2): 027001. doi: 10.1088/1674-1137/37/2/027001
Abstract: Superconducting (SC) cavities currently used for the acceleration of protons in the low velocity range are based on half wave resonators. Due to the rising demand for high current, the issues of beam loading and space charge have arisen. Low cost and high accelerating efficiency are required of SC cavities, and these requirements are properly met by an SC quarter wave resonator (QWR). We propose a concept of using QWRs with a frequency of 162.5 MHz to accelerate high current proton beams. The electromagnetic design and optimization of the prototype have been finished at Peking University. An analytical model derived from transmission line theory is used to predict an optimal combination of the geometrical parameters, with which the calculation by Microwave Studio shows good agreement. A thermal analysis to identify the temperature rise of the demountable bottom plate under various levels of thermal contact has also been done; the maximum increment is less than 0.5 K even when the contact state is poor.

2013, 37(2): 027002. doi: 10.1088/1674-1137/37/2/027002
Abstract: A re-buncher with spiral arms for a heavy ion linear accelerator named SSC-LINAC at HIRFL (the Heavy Ion Research Facility in Lanzhou) has been constructed. The re-buncher, which is used for longitudinal beam modulation and matching between the RFQ and DTL, is designed to be operated in continuous wave (CW) mode in the medium-energy beam-transport (MEBT) line to maintain the beam intensity and quality. Because of the longitudinal space limitation, the re-buncher has to be very compact and will be built with four gaps.
We determined the key parameters of the re-buncher cavity, such as the resonant frequency, the quality factor Q and the shunt impedance, from simulations using the Microwave Studio software. The detailed design of a 53.667 MHz spiral cavity and the measurement results of its prototype will be presented.

2013, 37(2): 027003. doi: 10.1088/1674-1137/37/2/027003
Abstract: Laser plasma accelerators (LPAs) have made great progress, achieving electron beams with energies up to 1 GeV from a centimeter-scale capillary plasma waveguide. Here, we report the measurement of optical transition radiation (OTR) from capillary-based LPA electron beams. Transition radiation images, produced by electrons passing through two separate foils (located 2.3 m and 3.8 m away from the exit of the LPA), were recorded with a high resolution imaging system. Two magnetic quadrupole lenses were placed right after the capillary to focus and collimate the electron beams. Significant localized spikes appeared in the OTR images when the electron beam was focused by the magnetic quadrupole lenses, indicating the coherence of the radiation and the existence of ultrashort longitudinal structures inside the electron beam.

2013, 37(2): 027004. doi: 10.1088/1674-1137/37/2/027004
Abstract: The linac to transmuter beam transport line (LTBT), connecting the end of the linac to the spallation target, is a critical sub-system of the accelerator driven system (ADS). It has the function of transporting the accelerated high power proton beam to the target with a beam footprint satisfying the special requirements of the minor actinide (MA) transmuter. In this paper, a preliminary conceptual design of the hurling magnet to transmuter beam transport section (HTBT), as a part of the LTBT for the China ADS (C-ADS) system, is proposed and developed.
In this design, a novel hurling magnet with a two-dimensional amplitude modulation (AM) of 1 kHz and scanning at more than 10 kHz over 360° in the transverse directions is used to realize a uniform beam distribution of 300 mm diameter on the target. The preliminary beam optics design of the C-ADS HTBT, optimized to minimize the beam loss on the vacuum chamber and the radiation damage caused by back-scattered neutrons, will be reported.

2013, 37(2): 028001. doi: 10.1088/1674-1137/37/2/028001
Abstract: The cricket is a truculent insect with stiff and sharp teeth as a fighting weapon. The structure and possible biomineralization of cricket teeth are always interesting. Synchrotron radiation X-ray fluorescence, X-ray diffraction, and small angle X-ray scattering techniques were used to probe the element distribution, the possible crystalline structures and the size distribution of scatterers in cricket teeth. A scanning electron microscope was used to observe the nanoscale structure. The results demonstrate that Zn is the main heavy element in cricket teeth. The surface of a cricket tooth has a crystalline compound like ZnFe₂(AsO₄)₂(OH)₂(H₂O)₄. The interior of the tooth has a crystalline compound like ZnCl₂, which comes from biomineralization. The ZnCl₂-like biomineral forms nanoscale microfibrils whose axial direction points towards the top of the tooth cusp. The microfibrils aggregate randomly into intermediate filaments, forming a hierarchical structure. A sketch map of the cricket tooth cusp is proposed and a detailed discussion is given in this paper.

2013, 37(2): 028002. doi: 10.1088/1674-1137/37/2/028002
Abstract: The multilayer Laue lens (MLL) is a novel diffractive optic which can realize nanometer focusing of hard X-rays with high efficiency. In this paper, a 7.9 μm-thick MLL with an outermost layer thickness of 15 nm is designed based on dynamical diffraction theory.
The MLL is fabricated by first depositing the depth-graded multilayer using direct current (DC) magnetron sputtering technology. Then the multilayer sample is sliced, and both cross-sections are thinned and polished down to a depth of 35-41 μm. The focusing property of the MLL is measured at the Shanghai Synchrotron Radiation Facility (SSRF). One-dimensional (1D) focusing resolutions of 205 nm and 221 nm are obtained at E=14 keV and 18 keV, respectively. This demonstrates that the fabricated MLL can focus hard X-rays to the nanometer scale.

2013, 37(2): 028003. doi: 10.1088/1674-1137/37/2/028003
Abstract: Ca-based additives have been widely used as sulfur adsorbents during coal pyrolysis and gasification. The Ca speciation and its evolution during the pyrolysis of coal with Ca additives have attracted great attention. In this paper, the Ca species in coal chars prepared from the pyrolysis of Ca(OH)₂- or CaCO₃-added coals are studied using Ca K-edge X-ray absorption near-edge structure spectroscopy. The results demonstrate that Ca(OH)₂, CaSO₄, CaS and CaO coexist in the Ca(OH)₂-added chars, while Ca(OH)₂ and CaSO₄ are the main species in the Ca(OH)₂-added chars. Besides, a carboxyl-bound Ca is also formed during the pyrolysis of both the Ca(OH)₂-added and the CaCO₃-added coals. A detailed discussion of the Ca speciation is given.

2013, 37(2): 028101. doi: 10.1088/1674-1137/37/2/028101
Abstract: With the development of the X-ray free electron laser (XFEL), high quality diffraction patterns from nanocrystals have been achieved. The nanocrystals, with different sizes and random orientations, are injected into the XFEL beam and the diffraction patterns are obtained in the so-called "diffraction-and-destruction" mode. The recovery of the orientations is one of the most critical steps in reconstructing the 3D structure of nanocrystals. There is already an approach to solving the orientation problem using automated indexing software from crystallography.
However, this method cannot distinguish twin orientations in cases where the symmetries of the Bravais lattices are higher than those of the point groups. Here we propose a new method to solve this problem. The shape transforms of the nanocrystals can be determined from all of the intensities around the diffraction spots, and then the Fourier transform of a single crystal cell is obtained. The actual orientations of the patterns can be solved by comparing the values of the Fourier transform of the crystal cell on the intersections of all patterns. This so-called "multiple-common-line" method can successfully distinguish the twin orientations in XFEL diffraction patterns.

2013, 37(2): 028102. doi: 10.1088/1674-1137/37/2/028102
Abstract: The enhanced high gain harmonic generation (EHGHG) scheme has been proposed and shown to significantly enhance the performance of HGHG FELs. In this paper we investigate the EHGHG scheme with negative dispersion. The bunching factor at the entrance of the radiator is analyzed, which indicates that the scheme with negative dispersion can further weaken the negative effect of the dispersive strength on the energy spread correction factor. The numerical results from GENESIS (a 3D code) are presented and are in good agreement with our analysis. We then comparatively study the effects of the initial beam energy spread and the relative phase shift on the radiation power. The results show that the EHGHG scheme with negative dispersion has a larger tolerance of the initial beam energy spread and a nearly equally wide good region of the relative phase shift compared with the case of positive dispersion.

IF: 3.298
Monthly, founded in 1977
ISSN 1674-1137
CN 11-5641/O4
Original research articles, letters and reviews covering theory and experiments in the fields of:
• Particle physics
• Nuclear physics
• Particle and nuclear astrophysics
• Cosmology
http://physics.stackexchange.com/questions/103097/integration-constants-in-maxwells-equations-ambiguousness
# Integration constants in Maxwell's equations (ambiguousness?)

In classical electrodynamics, if the electric field (or magnetic field, either of the two) is fully known (for simplicity: in a vacuum with $\rho = 0, \vec{j} = 0$), is it possible to unambiguously calculate the other field from Maxwell's equations?

For example, let's assume that $\vec{E}(\vec{r}, t)$ is known and, for simplicity, $\vec{E} = 0$ everywhere. From Maxwell's equations, we know that $$\nabla \times \vec{E} = - \frac{\partial \vec{B}}{\partial t} \Leftrightarrow \vec{B} = - \int \nabla \times \vec{E} \; \mathrm{d}t$$ However, this (as far as I can tell) results in $\vec{B}(\vec{r}, t) = \left( \begin{smallmatrix} C_1\\C_2\\C_3 \end{smallmatrix} \right)$ with unknown constants $C_i$. This result satisfies Maxwell's equations, since $$\nabla \cdot \left( \begin{smallmatrix} C_1\\C_2\\C_3 \end{smallmatrix} \right) = 0 \quad\text{ and }\quad \nabla \times \left( \begin{smallmatrix} C_1\\C_2\\C_3 \end{smallmatrix} \right) = \varepsilon_0 \mu_0 \frac{\partial \vec{E}}{\partial t} = 0$$

Does this really mean that, given $\vec{E}$ and the source-free Maxwell equations, the theory cannot determine whether there is no magnetic field at all or whether there is a constant magnetic field filling all of space?

Note: I am asking this question because in my physics class, when considering plane electromagnetic waves of the form $\vec{E} = \vec{E}_0 e^{i(\vec{k} \cdot \vec{r} - \omega t)}$, we were often asked to calculate $\vec{B}$ given the electric field of the wave using Maxwell's equations. We wondered about the integration constants, but since it was always assumed that the fields are of the form $$\vec{E}(\vec{r}, t) = \vec{E}_0 e^{i(\vec{k} \cdot \vec{r} - \omega t)}$$ $$\vec{B}(\vec{r}, t) = \vec{B}_0 e^{i(\vec{k} \cdot \vec{r} - \omega t)}$$ the constants were naturally set to zero. However, I'm wondering if this assumption is safe and what the reasoning behind it is.
- The most important statement in this answer to your question is: yes, you can superimpose a constant magnetic field. The combined field remains a solution of Maxwell's equations. $\def\vB{{\vec{B}}}$ $\def\vBp{{\vec{B}}_{\rm p}}$ $\def\vBq{{\vec{B}}_{\rm h}}$ $\def\vE{{\vec{E}}}$ $\def\vr{{\vec{r}}}$ $\def\vk{{\vec{k}}}$ $\def\om{\omega}$ $\def\rot{\operatorname{rot}}$ $\def\grad{\operatorname{grad}}$ $\def\div{\operatorname{div}}$ $\def\l{\left}\def\r{\right}$ $\def\pd{\partial}$ $\def\eps{\varepsilon}$ $\def\ph{\varphi}$ Since you are using plane waves, you cannot even enforce the fields to decay sufficiently fast with growing distance from the origin. That requirement would make the solution of Maxwell's equations unique for given space properties (like $\mu,\varepsilon,\kappa$, and maybe a space charge $\rho$ and an imprinted current density $\vec{J}$). But in your case you would not have a generator for the field; your setup is just empty space. If you enforce the fields to decay sufficiently fast with growing distance, you just get zero amplitudes $\vec{E}_0=\vec{0}$, $\vec{B}_0=\vec{0}$ for your waves. That is certainly a solution of Maxwell's equations, but also certainly not what you want to have. From my point of view you are a bit too fast with the integration constants. You lose some generality by neglecting that these "constants" can really depend on the space coordinates. Let us look at what really can be deduced for $\vB(\vr,t)$ from Maxwell's equations for a given $\vE(\vr,t)=\vE_0 \cos(\vk\vr-\om t)$ in free space.
At first some recapitulation: we calculate a particular B-field $\vBp$ that satisfies Maxwell's equations: $$\begin{array}{rl} \nabla\times\l(\vE_0\cos(\vk\vr-\om t)\r)&=-\pd_t \vBp(\vr,t)\\ \l(\nabla\cos(\vk\vr-\om t)\r)\times\vE_0&=-\pd_t\vBp(\vr,t)\\ -\vk\times\vE_0\sin(\vk\vr-\om t) &= -\pd_t \vBp(\vr,t) \end{array}$$ This leads us, with $\pd_t \cos(\vk\vr-\om t) = \om \sin(\vk\vr-\om t)$, to the ansatz $$\vBp(\vr,t) = \vk\times\vE_0 \cos(\vk\vr-\om t)/\om.$$ The divergence equation $\div\vBp(\vr,t)=-\vk\cdot(\vk\times\vE_0)\sin(\vk\vr-\om t)/\om=0$ is satisfied, and the space-charge freeness $0=\div\vE(\vr,t) = -\vk\cdot\vE_0\sin(\vk\vr-\om t)$ delivers that $\vk$ and $\vE_0$ are orthogonal. The last thing to check is Ampere's law $$\begin{array}{rl} \rot\vBp&=\mu_0 \eps_0 \pd_t\vE\\ -\vk\times(\vk\times\vE_0)\sin(\vk\vr-\om t)/\om &= \mu_0\eps_0 \vE_0 \sin(\vk\vr-\om t) \om\\ -\biggl(\vk \underbrace{(\vk\cdot\vE_0)}_0-\vE_0\vk^2\biggr)\sin(\vk\vr-\om t)/\om&= \mu_0\eps_0 \vE_0 \sin(\vk\vr-\om t) \om \end{array}$$ which is satisfied for $\frac{\om}{|\vk|} = \frac1{\sqrt{\mu_0\eps_0}}=c_0$ (the speed of light). Now, we look at which modifications $\vB(\vr,t)=\vBp(\vr,t)+\vBq(\vr,t)$ satisfy Maxwell's laws. $$\begin{array}{rl} \nabla\times\vE(\vr,t) &= -\pd_t\l(\vBp(\vr,t)+\vBq(\vr,t)\r)\\ \nabla\times\vE(\vr,t) &= -\pd_t\vBp(\vr,t)-\pd_t\vBq(\vr,t)\\ 0 &= -\pd_t\vBq(\vr,t) \end{array}$$ That means the modification $\vBq$ is independent of time. We just write $\vBq(\vr)$ instead of $\vBq(\vr,t)$. The divergence equation for the modified B-field is $0=\div\l(\vBp(\vr,t)+\vBq(\vr)\r)=\underbrace{\div\l(\vBp(\vr,t)\r)}_{=0} + \div\l(\vBq(\vr)\r)$, telling us that the modification $\vBq(\vr)$ must also be source free: $$\div\vBq(\vr) = 0$$ Ampere's law is $$\begin{array}{rl} \nabla\times(\vBp(\vr,t)+\vBq(\vr)) &= \mu_0\eps_0\pd_t \vE,\\ \rot(\vBq(\vr))&=0. \end{array}$$ Free space is simply connected.
Thus, $\rot(\vBq(\vr))=0$ implies that every admissible $\vBq$ can be represented as the gradient of a scalar potential, $\vBq(\vr)=-\grad\ph(\vr)$. From $\div\vBq(\vr) = 0$ it follows that this potential must satisfy Laplace's equation $$0=-\div(\vBq(\vr)) = \div\grad\ph = \Delta\ph$$ That is all that Maxwell's equations for free space tell us with a predefined E-field and without boundary conditions: the B-field can be modified by the gradient of any harmonic potential. The thing is that problems in infinite space often approximate some configuration of finite extent which is sufficiently far away from anything that could influence the measurement significantly. How are plane electromagnetic waves produced? One relatively simple generator for electromagnetic waves is a dipole antenna. These do not generate plane waves but spherically curved waves, as shown in the nice picture on the Wikipedia page http://en.wikipedia.org/wiki/Antenna_%28radio%29. Nevertheless, if you are far away from the transmitting dipole and there are no reflecting surfaces around you, then in your close neighborhood the electromagnetic wave will look like a plane wave and you can treat it as such, with sufficiently exact results for your practical purpose. In this important application the plane wave is an approximation where the superposition with some constant electromagnetic field is not really appropriate. We just keep in mind that if in some special application we need to superimpose a constant field, we are allowed to do it. - I'm not entirely sure I understand what you're getting at. Let's say we include the generator, e.g. a dipole antenna, in the setup. Now the only thing we are given is the electric field generated by the antenna. Given that, is it possible to determine what the generated magnetic field is, or will there still be an ambiguity in that you can add arbitrary constants and it will remain a solution?
I mean, of course one could argue that there's no reason for an additional constant field to be there "out of nowhere", but can you determine that just from the equations? –  Socob Mar 13 at 13:43 Yes, you can superimpose a constant magnetic field with $\operatorname{rot}\vec H=\vec0$ and $\operatorname{div}\vec B=0$. With $\mu=\mu_0$, every $\vec H=-\operatorname{grad}\phi$ for any harmonic function $\phi$ is admissible. The field does not come out of nowhere, but the source for the field is just not in the considered neighborhood. For instance, it could be the Earth's magnetic field. –  Tobias Mar 13 at 13:57 The answer is no - you cannot fully determine the magnetic field from the electric field (or vice versa) without boundary conditions. The reason is, as you rightly surmise, that there is an integration "constant", which is only constant with respect to time, not to position. This additional stationary field can be produced by a time-independent scalar potential with a non-zero gradient. The root cause of this ambiguity is that in Maxwell's equations, the E-field is generated from the partial derivative of the B-field with respect to time and vice versa. So a stationary B-field has no influence on the E-field and vice versa. But we know this from common sense - the presence of the Earth's magnetic field has no influence on a light beam I shine across the classroom: the time-dependent E- and B-fields associated with the EM wave are still in the same directions and have the same amplitudes that they would have if the Earth's magnetic field were not there. - For a finite-size antenna you must impose Sommerfeld's radiation condition http://en.wikipedia.org/wiki/Radiation_condition . A constant B does not satisfy this and obviously has no finite energy. While you may add a constant B to the equations, it is excluded on these grounds as nonphysical. The plane waves are also nonphysical, not just because they have infinite energy but because only an infinite-size radiator may generate them. -
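The plane-wave algebra discussed above can be spot-checked numerically. The sketch below (stdlib Python; the choices k = ω x̂, E₀ = ŷ, ω = 2 and units with c₀ = 1 are illustrative assumptions, not from the posts) verifies Faraday's law ∇×E = −∂B/∂t at random points for E = E₀ cos(k·r − ωt) with the standard companion field B = (k×E₀) cos(k·r − ωt)/ω:

```python
# Numerical spot-check of Faraday's law for the plane wave
# E = ŷ cos(ωx − ωt), B = ẑ cos(ωx − ωt), in units with c0 = 1 (|k| = ω).
import math
import random

om = 2.0          # illustrative angular frequency (assumption)
h = 1e-6          # central finite-difference step

def E(x, y, z, t):          # E0 = ŷ, wave vector k = ω x̂
    return (0.0, math.cos(om*x - om*t), 0.0)

def B(x, y, z, t):          # B = (k × E0) cos(k·r − ωt)/ω = ẑ cos(...)
    return (0.0, 0.0, math.cos(om*x - om*t))

def curl(F, x, y, z, t):
    # central differences of component i along a displacement (dx, dy, dz)
    d = lambda i, dx, dy, dz: (F(x+dx, y+dy, z+dz, t)[i]
                               - F(x-dx, y-dy, z-dz, t)[i]) / (2*h)
    return (d(2, 0, h, 0) - d(1, 0, 0, h),
            d(0, 0, 0, h) - d(2, h, 0, 0),
            d(1, h, 0, 0) - d(0, 0, h, 0))

random.seed(0)
for _ in range(100):
    x, y, z, t = (random.uniform(-3, 3) for _ in range(4))
    dBdt = tuple((B(x, y, z, t+h)[i] - B(x, y, z, t-h)[i]) / (2*h)
                 for i in range(3))
    cE = curl(E, x, y, z, t)
    assert all(abs(cE[i] + dBdt[i]) < 1e-4 for i in range(3))  # ∇×E = −∂B/∂t
print("Faraday's law holds at all sampled points")
```

Adding a constant vector to B changes neither ∂B/∂t, ∇×B, nor ∇·B, so every such check still passes; that is precisely the ambiguity the question and answers discuss.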
https://cstheory.stackexchange.com/questions/30591/are-there-any-interesting-open-questions-having-to-do-with-submodularity-specia
# Are there any interesting open questions having to do with submodularity, especially in the intersection with theoretical machine learning?

I was interested in knowing about open research topics related to submodularity, especially within its intersection with theoretical machine learning (and related topics). I am particularly interested in algorithms (especially their application to machine learning and, whenever possible, proving theoretical guarantees for them, whether these guarantees are in complexity or learnability).

I'd be interested in knowing if there are any "heuristic" algorithms out there in the area of submodularity and machine learning with unknown guarantees. From my current knowledge of the topic, the most common heuristic seems to be the "greedy" heuristic, and it seems to be quite well understood. Hence, I was wondering if there are any open problems in the area of designing new algorithms (or improving current ones), or algorithms that already exist but don't have any guarantees. Are there any open questions related to theoretical guarantees in submodularity? Or, if there aren't any such open questions, are there any open questions in the intersection of submodularity and machine learning?

1. Given a non-negative submodular function $f$ on a universe $U$, find a set $A$ of size at most $k$ maximizing $f(A)$. The best known approximation ratio is $1/e+0.004$ (BFNS14). When "at most" is replaced by "exactly", the best known approximation ratio is 0.356 (also BFNS14).

2. Given a non-negative monotone submodular function and $k$ matroids, find a set $A$ belonging to all the matroids and maximizing $f(A)$. When $k=1$ the best approximation ratio is $1-1/e$, and this is optimal. For larger $k$, LSV give a $1/k$ approximation, but this is probably not tight.

3.
There are two algorithms for maximizing a non-negative monotone submodular function over a matroid that give the optimal approximation ratio $1-1/e$: the continuous greedy algorithm and the non-oblivious local search algorithm. Both are randomized. The best known deterministic algorithm is the greedy algorithm, giving a $1/2$ approximation. Can we separate deterministic and randomized algorithms in this oracle setting? A similar question arises in the non-monotone unconstrained case, in which BFNS12 give an optimal randomized $1/2$ approximation algorithm, but the best known deterministic algorithms (one in BFNS12 and an earlier local search algorithm due to FMV) give only a $1/3$ approximation. 4. Both the continuous greedy algorithm and the non-oblivious local search algorithm are rather slow. Is there a fast, $\tilde{O}(n)$ algorithm for maximizing a monotone submodular function over a matroid? (see BV for the case of a uniform matroid) 5. Algorithms for minimizing submodular functions are also a bit slow, the fastest running in time $O(n^5)$ (see Iwata's survey).
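Since the question singles out the greedy heuristic, here is a minimal sketch of it for the cardinality-constrained case, applied to a coverage function (coverage functions are monotone submodular, so the classical Nemhauser–Wolsey–Fisher analysis gives the $1-1/e$ guarantee mentioned above). The set names and data below are purely illustrative.

```python
def greedy(cover_sets, k):
    """Greedy maximization of the coverage function f(A) = |union of A| under |A| <= k.

    cover_sets: dict mapping a set's name to the elements it covers.
    Coverage is monotone submodular, so this achieves a (1 - 1/e) guarantee.
    """
    chosen, covered = [], set()
    for _ in range(k):
        # marginal gain of s given what is already covered is |s \ covered|
        candidates = [(len(s - covered), name) for name, s in cover_sets.items()
                      if name not in chosen]
        gain, name = max(candidates)
        if gain == 0:          # nothing left to gain; stop early
            break
        chosen.append(name)
        covered |= cover_sets[name]
    return chosen, len(covered)

# toy instance: greedy picks two sets covering all 6 elements
sets = {'A': {1, 2, 3}, 'B': {3, 4}, 'C': {4, 5, 6}, 'D': {1, 6}}
picked, value = greedy(sets, 2)
```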
https://math.stackexchange.com/questions/2387204/primes-generated-by-a-continued-fraction
# Primes generated by a continued fraction

On the post "Is there an explanation for the behaviour of this finite continued fraction in connection with prime numbers?" I asked this question in a non-generalized form, focusing only on the $4$th partial convergent. Generalizing the problem and clarifying it, we have the following.

Given the continued fraction, which satisfies the property proposed in one of my old posts, $G(q)=\cfrac{1}{1-q+\cfrac{q(1-q)^2}{1-q^3+\cfrac{q(1-q^2)^2}{1-q^5+\cfrac{q(1-q^3)^2}{1-q^7+\cfrac{q(1-q^4)^2}{1-q^9+\dots}}}}}$ and the $k$th partial convergent of the continued fraction $\cfrac{1}{1-q+\cfrac{q(1-q)^2}{1-q^3+\cfrac{q(1-q^2)^2}{1-q^5+\cfrac{q(1-q^3)^2}{\ddots+\cfrac{q(1-q^k)^2}{1-q^{2k+1}}}}}}=\exp\Big(\sum_{n=2}^{\infty} (-1)^n\phi_{k}(n)\,q^n\Big)$ where $|q|\lt\frac{1}{4}$, and $\phi_{k}(n)$ is our symbol of choice representing the coefficients of the series (note that it doesn't represent any standard function), depending on the $k$th partial convergent, $k\gt2$.

For $k\gt2$, every partial convergent of the continued fraction seems to have the property that, for all values of $n$ but a few exceptions, $\phi_{k}(n)$ is an integer when $n$ is prime and a non-integer when $n$ is composite. For example, on the $7$th partial convergent of the continued fraction, there is only one exception, $n=15$, in $1\lt n\lt200$.

Formally, we may call $\phi_{k}(n)$ an arithmetic function which returns an integer when $n$ is a prime number and a non-integer when $n$ is a composite number, for all natural numbers $n$ but a few, for $k\gt2$.

So the question is: why is $\phi_{k}(n)$ an integer when $n$ is prime and a non-integer when it is composite, for all values of $n$ but a few, for $k\gt2$? We may be led to conjecture that whenever $n$ is prime, the arithmetic function $\phi_{k}(n)$ is always an integer.

• Perhaps a case of the "law of small numbers". Extend the search in order to see whether this is the case. Aug 9, 2017 at 14:19
• It would help to write it as a sequence of $q$-series and see what their coefficients are. Aug 12, 2017 at 8:04
• @reuns: I have tried it already, but unfortunately the OEIS doesn't seem to recognize it. Moreover, the radius of convergence for $q$-series is the unit circle $|q|\lt1$, while for this particular series it is $|q|\lt\frac{1}{4}$. Aug 12, 2017 at 8:13
• Did you find an induction rule for the (coefficients of) the sequence of $q$-series? Aug 12, 2017 at 8:17

Take any power series of the form $F(q)=1+a_2q^2+a_3q^3+\cdots$ with integer coefficients: i.e., the constant term is $1$ and the linear term is zero, but otherwise it is a general power series. Your $G(q)$ takes this form, as do the $k$th partial convergents.

Next, rewrite $F(q)$ in the form $$F(q)=\prod_{k=2}^{\infty}(1+\alpha_k q^k)=(1+\alpha_2 q^2)(1+\alpha_3 q^3)\cdots$$ which can always be done and gives $\alpha_i\in\mathbb{Z}$.

Now, for $F(q)=\exp f(q)$, you ask why $f(q)=f_1q+f_2q^2+\cdots$ has integer coefficients $f_n$ whenever $n$ is a prime, but not when $n$ is composite. To see why, write $f(q)=\ln F(q)$ and take the power expansion $$f(q) = \ln F(q) = \sum_{k=2}^\infty \ln(1+\alpha_k q^k) = \sum_{k=2}^\infty \sum_{i=1}^\infty (-1)^{i-1}\frac{\alpha_k^i q^{ki}}{i}.$$

Contributions to the coefficient of $q^n$ come from pairs $(k,i)$ where $ki=n$. If $n$ is prime, then $k=n$ and $i=1$: the opposite, $k=1$ and $i=n$, does not contribute, as $k\ge2$ (or, equivalently, the coefficient $\alpha_1=0$). When $k=n$ and $i=1$ is the only contributing term to the coefficient of $q^n$, the coefficient is $f_n=\alpha_n$, an integer. When $n$ is not prime, there may be terms contributing to the coefficient of $q^n$ with $i>1$, and these may be non-integral because of the division by $i$.
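The divisor-pair mechanism in the answer above is easy to check numerically. The sketch below uses exact rational arithmetic and arbitrarily chosen integer values $\alpha_k$ (not the actual coefficients of the continued fraction in the question) to show that $f_p$ is forced to be an integer at primes, while composite indices pick up denominators from the $1/i$ factors.

```python
from fractions import Fraction

def log_coeffs(alpha, N):
    """Coefficients f_n of log(prod_{k>=2} (1 + alpha[k] * q^k)) up to q^N."""
    f = [Fraction(0)] * (N + 1)
    for k, a in alpha.items():              # each factor (1 + a*q^k), k >= 2
        for i in range(1, N // k + 1):      # log expansion: (-1)^(i-1) a^i q^(k i) / i
            f[k * i] += Fraction((-1) ** (i - 1) * a ** i, i)
    return f

# arbitrary integer alpha_k, purely for illustration
alpha = {2: 1, 3: -2, 4: 3, 5: 1, 6: -1, 7: 2}
f = log_coeffs(alpha, 12)
# prime n: only the pair (k, i) = (n, 1) contributes, so f_n = alpha_n is an integer
# composite n: pairs with i > 1 contribute, e.g. f_4 = alpha_4 - alpha_2^2/2 = 5/2
```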
https://www.physicsforums.com/threads/calculating-speed-with-kinematics.152955/
# Calculating Speed with Kinematics

1. Jan 24, 2007

### darkmagicianoc

I need help with this problem using the kinematic equations. My professor gave me the answer to this problem and wants us to show the steps to achieve this answer. This is the example:

The driver of a car slams on the brakes when he sees a tree blocking the road. The car slows uniformly with an acceleration of -5.60 m/s^2 for 4.20 s, making straight skid marks 62.4 m long ending at the tree. With what speed does the car then strike the tree?

The professor's answer is: The speed is 3.10 m/s.

Thank you in advance for taking the time to help me! --Dan

2. Jan 25, 2007

### KingNothing

Try using this equation: D = VoT + 1/2AT^2, where Vo is the initial velocity. Once you find this, it is fairly straightforward to calculate the final velocity. Or, for a simpler method, just enter it as a positive acceleration and Vo will work out to be the final velocity!

3. Jan 25, 2007

### darkmagicianoc

Thank you for your help! I took your advice and it worked perfectly! I really appreciate the time and effort you took to help me!
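The suggested approach can be checked with a few lines of arithmetic (the variable names are mine):

```python
a, t, d = -5.60, 4.20, 62.4      # acceleration (m/s^2), time (s), skid length (m)

# d = v0*t + (1/2)*a*t^2, solved for the initial speed v0
v0 = (d - 0.5 * a * t**2) / t    # about 26.62 m/s
v_tree = v0 + a * t              # speed when the car reaches the tree, about 3.10 m/s

# KingNothing's shortcut: flip the sign of a and the same formula
# yields the final speed directly (it time-reverses the motion)
v_shortcut = (d - 0.5 * (-a) * t**2) / t
```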
http://mathoverflow.net/questions/70969/when-is-an-integral-transfrom-trace-class
# When is an integral transform trace class?

Given a measure space $(X, \mu)$ and a measurable integral kernel $k : X \times X \rightarrow \mathbb{C}$ defining the operator $$K f(\xi) =\int_{X} f(x) k(x,\xi) d \mu(x),$$ the operator $K$ is Hilbert–Schmidt iff $k \in L^2(X \times X, \mu \otimes\mu)$!

Q1: The main point of this question: what are necessary and sufficient conditions for it to be trace class? I know various instances where $$\mathrm{tr}\, K = \int_X k(x,x) d \mu(x).$$

Q2: What are counterexamples where $x \mapsto k(x,x)$ is integrable, but the operator is not trace class?

Q3: What are counterexamples, for a $\sigma$-finite measure space, where $k$ is compactly supported and continuous, but the kernel transformation is not trace class and the above formula fails?

Q4: Is there a good survey/reference for these questions?

-

There are many results of the kind you ask about in the book

I. C. Gohberg and M. G. Krein, Introduction to the theory of linear nonselfadjoint operators. Providence, RI: American Mathematical Society, 1969.

It contains both necessary and sufficient conditions, and counter-examples.

-

Okay, there is a chapter "Tests for nuclearity of integral transforms and computation of the trace", pg. 112ff. Thanks a lot. I will have to check whether this does the complete job here. – plusepsilon.de Jul 22 '11 at 8:07

This is only for an interval in the real line, though =( – plusepsilon.de Jul 22 '11 at 9:11

@pm: there are many $(X,\mu)$ measurably isomorphic to an interval of the real line. See "standard probability space" in Wikipedia. Of course, this is not good when you consider continuity of the kernel... – BS. Jul 22 '11 at 16:19

When is a group with Haar measure a standard measure space? – plusepsilon.de Jul 25 '11 at 8:10

A group with Haar measure is a standard measure space if and only if the group is a Polish group. In fact, Polish groups with $G$-quasi-invariant measures are locally compact. – plusepsilon.de Sep 9 '11 at 9:51

-

It may be worth noting the phenomena that can appear in Hilbert spaces, where study of these things is more decisive, both positive and negative.

First, I like the "definition" of "trace class" $T:X\rightarrow Y$ with Hilbert spaces $X,Y$ to be that $T$ is a composition of two Hilbert–Schmidt operators (which are defined as being in the HS-norm completion of the algebraic tensor product $X^*\otimes_{\mathrm {alg}}Y$). This gives an intrinsic definition... which, if desired, is provably equivalent to the (ugly) requirement that $\sum |\langle Tx_i,y_i\rangle| <\infty$ for every pair of orthonormal bases.

The reason I recall this cliché is that, in many applications of interest (to me!), natural operators are visibly Hilbert–Schmidt (if compact at all), and the issue becomes to prove trace class. In practice (for me) it often happens that we know that every one of these integral operators is a finite sum of compositions of two such, proving trace class. Sometimes proof of the latter is highly non-trivial, as in the Cartier/Dixmier–Malliavin proof that test functions on Lie groups are finite linear combinations of convolutions of pairs of such. The totally-disconnected group analogue is trivial.

That summing or integrating down the diagonal fails is easy to illustrate with non-normal operators: the shift operator on one-sided or two-sided $\ell^2$ might seem to have trace absolutely summing to $0$, but it is not trace class at all. Integral analogues of this are clear.

Edit: in response to the question about references: in Lang's "SL(2,R)" the equivalence of the coordinate-dependent definition of "trace class" and the definition as a composition of two Hilbert–Schmidt operators are carefully compared. Further, in that same source, various conditions on a kernel assuring that its trace is equal to its integral over the diagonal are carefully treated. (I must say "... in contrast to dangerously glib treatments elsewhere".)

Further edit: in response to Yemon Choi's comments: yes, the space of trace-class operators is also the closure of finite-rank operators with respect to the "trace norm"... At the moment, verification of the equivalence seems straightforward.

-

Some guidance as to the purported unhelpfulness of this answer would be appreciated, if not inconvenient. – paul garrett Jul 22 '11 at 17:45

I'd also prefer an explanation for downvotes here. – plusepsilon.de Jul 25 '11 at 8:09

@Paul: I just upvoted this response of yours. Some references to precise theorems (e.g. to your alternative definition of trace class) would be helpful. – GH from MO Jan 29 '12 at 21:38

Why not define trace class as those in the range of the map $H\widehat{\otimes} H^* \to B(H)$, given the cokernel norm? – Yemon Choi Feb 4 '12 at 2:09

That said, I find the observation in your third paragraph, and the results mentioned in the fourth paragraph, quite interesting; so I am not trying to deny the merits of the definition you suggest. – Yemon Choi Feb 4 '12 at 2:52
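A finite-dimensional toy computation (an illustration only, not a proof) of the shift-operator remark above: truncations of the shift have zero diagonal, yet their trace norm (sum of singular values) grows without bound, so no diagonal formula can rescue trace-classness. For contrast, discretizing a smooth positive-definite kernel on $[0,1]$ gives trace norms that stay bounded and approach $\int_0^1 k(x,x)\,dx$.

```python
import numpy as np

n = 400

# truncated shift: zero diagonal, but n-1 singular values equal to 1,
# so the trace norm diverges as n grows; the shift is not trace class
S = np.eye(n, k=1)
shift_trace_norm = np.linalg.svd(S, compute_uv=False).sum()   # = n - 1

# smooth positive-definite kernel k(x, y) = exp(-(x - y)^2) on [0, 1]:
# the midpoint-rule discretization has trace norm near integral of k(x, x) = 1
x = (np.arange(n) + 0.5) / n
K = np.exp(-np.subtract.outer(x, x) ** 2) / n
kernel_trace_norm = np.linalg.svd(K, compute_uv=False).sum()  # stays near 1.0
```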
https://www.physicsforums.com/threads/extended-power-derivative.763884/
# Extended power derivative

• #1

I am having difficulty calculating the following derivative: $${ \frac{2x^2-1}{(3x^4+2)^2}}$$

Could someone demonstrate the first step algebraically? Assuming c is the exponent on the variable expression, n is the numerator and d is the denominator, I tried: $$c\frac{n(x)}{d(x)}\frac{n'(x)*d(x)-d'(x)*n(x)}{d(x)^2}$$

Which gives me $$2\frac{2x^2-1}{3x^4+2}\frac{[4x(3x^4+2)]-[(12x^3)(2x^2-1)]}{(3x^4+2)^2}$$

Which simplifies to $$\frac{-48 x^7+72 x^5+8 x^3-16 x}{(3x^4+2)^3}$$

However, the book lists the answer as being $$\frac{-36x^5+24x^3+8x}{(3x^4+2)^2}$$

• #2

I'm not quite sure what rule you're trying to use, but if it's the quotient rule, then you've got it written down wrong. The correct way is: $$\frac{d}{dx} \frac{n(x)}{d(x)} = \frac{n'(x) d(x) - d'(x) n(x)}{d(x)^2}$$ Since you've already worked the latter expression out, it should be easy to finish for you.

• #3

Are you sure that $$\frac{d}{dx} (3x^4+2)^2=12x^3$$ ?

• #4

My book lists a rule called the "extended power rule," which goes as follows: "Suppose g(x) is a differentiable function of x. Then, for any real number k, $$\frac {d}{dx}[g(x)]^k=k[g(x)]^{k-1}*\frac{d}{dx}[g(x)]$$"

Here's a link to the text: http://www.scribd.com/doc/13142109/1-7-the-Chain-Rule

I could easily solve the problem by expanding the binomial expression (3x^4+2)^2 and then using the standard product rule, but I need to know how to use the extended power rule, as one of the sample questions is raised to the power of 7, and there is no way that I am going to expand that. If you can offer me an alternative method for dealing with the derivatives of high-order expressions, I would accept that as well.

• #5

The extended power rule isn't exactly relevant here. And you got that particular part correct. The problem was that the quotient rule, for whatever reason, was done incorrectly. I'm not sure where you got a c or the first part of that product.

• #6

The book says to use the extended power rule in addition to the quotient rule to solve this particular problem, so it has to be relevant >_>

According to the extended power rule, I take the exponent off the expression, k (I accidentally put c), and multiply it by g(x). The text has a step-by-step example of how to use the extended power rule in conjunction with another quotient problem, $$\sqrt[4]{\frac{x+3}{x-2}}$$ in which they use the setup $$k\frac{n(x)}{d(x)}\frac{n'(x)*d(x)-d'(x)*n(x)}{d(x)^2}$$, but that form doesn't appear to work here.

• #7

You don't need to expand these high-order expressions, just use the chain rule. As a reminder, $$(3x^4+2)^2$$ can be seen as a function of this type: $$f(g(x))\ \mbox{where}\ g(x)=3x^4+2\ \mbox{and}\ f(x) = x^2.$$ Now consider that the $x$ in $x^2$ actually is your function $g(x)$; that is, $f$ is applied to $g(x)$, so the $x$ in brackets becomes $g(x)$. You then have your function $h(x) = f(g(x))$, which is $(3x^4+2)^2$.

You can now differentiate $h(x)$, and as you can see, it's simply the derivative of $f(g(x))$. Use the chain rule, $$\frac{d}{dx}f(g(x)) = f'(g(x)) * g'(x),$$ and you've got the derivative. You can now differentiate powers of polynomials, for instance: $$\frac{d}{dx}(4x^5+3)^9 = 9*(4x^5+3)^8 * 20x^4.$$ In general: $$\frac{d}{dx}(P(x))^n = n(P(x))^{n-1} * P'(x).$$

Hope this helps!

• #8

Thank you!
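A quick numerical cross-check of the chain-rule answer (my own sketch): the quotient rule plus the extended power rule gives $\frac{d}{dx}\frac{2x^2-1}{(3x^4+2)^2} = \frac{-36x^5+24x^3+8x}{(3x^4+2)^3}$. Note the cube in the denominator: the exponent 2 in the book answer as transcribed above looks like a typo. Comparing against a central finite difference:

```python
def f(x):
    return (2 * x**2 - 1) / (3 * x**4 + 2) ** 2

def fprime(x):
    # quotient rule, with d/dx (3x^4+2)^2 = 2*(3x^4+2)*12x^3 from the extended power rule
    return (-36 * x**5 + 24 * x**3 + 8 * x) / (3 * x**4 + 2) ** 3

h = 1e-6
for x in (0.3, 1.0, -1.5, 2.0):
    numeric = (f(x + h) - f(x - h)) / (2 * h)
    assert abs(numeric - fprime(x)) < 1e-6   # formula matches the finite difference
```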
http://www.chegg.com/homework-help/modeling-and-analysis-of-dynamic-systems-3rd-edition-chapter-3-solutions-9780471394426
# Chegg Textbook Solutions for Modeling and Analysis of Dynamic Systems 3rd Edition: Chapter 3

Chapter: Problem:

• Step 1 of 6

Refer to Figure 2.16(a) in the textbook. The given system contains a single free-body diagram for the mass. Assume that the displacement is taken as positive in the downward direction. Since the displacement is positive, the spring and dashpot are stretched, creating upward forces on the mass. The inertial force acts opposite to the positive direction of acceleration; that is, it acts in the upward direction, opposite to the assumed positive (downward) direction of acceleration. The gravitational force acting on the mass is in the downward direction, and the applied external force is also in the downward direction.

• Step 2 of 6

The following figure represents the free-body diagram of the mass:

• Step 3 of 6

Figure 1

• Step 4 of 6

(a) Write the state-variable model as follows, when and are the state variables.

Add the individual forces acting on the free-body diagram of the mass to generate the differential equation: ...... (1)

Substitute in place of and rewrite equation (1). Therefore, the state-variable equation having inputs and is:

Therefore, the output, the energy stored in the spring, is:

Hence, the required state-variable model is determined.

• CrazyBandana5550: How did you find the output energy stored in the spring?

• Step 5 of 6

(b) Calculate the state-variable model as follows, when and are the state variables.

Write the equation for the displacement caused by the input: ...... (2)

Substitute in place of and rewrite equation (2). Therefore, the state-variable equation having inputs and is:

• CrazyBandana5550: Why is the Mg term ignored in this equation?

• Step 6 of 6

Write the output (the energy stored in the spring) and the output equation. Assume that is non-zero and the mass is moving; then ...... (3)

Assume that is zero and the mass is not moving; then from equation (1) we get the following relation.

Substitute this value in equation (3). Therefore, the output, the energy stored in the spring, is:

Hence, the required state-variable model is determined.
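The symbols in the solution above were lost in extraction, so the following is only a sketch of the generic setup it describes. For a mass-spring-damper with displacement $x$ positive downward, Newton's law gives $m\ddot{x} = f(t) + mg - b\dot{x} - kx$, and choosing state variables $x_1 = x$, $x_2 = \dot{x}$ yields the state-variable model; the spring energy $\tfrac12 k x_1^2$ is the output of part (a). All parameter values below are hypothetical.

```python
# hypothetical parameters (the originals were images and did not survive extraction)
m, b, k, g = 2.0, 0.5, 8.0, 9.81      # mass, damping, spring constant, gravity
f_applied = 1.0                        # constant external force, downward (assumption)

def state_derivs(x1, x2):
    """State-variable form: x1 = displacement (down positive), x2 = velocity."""
    dx1 = x2
    dx2 = (f_applied + m * g - b * x2 - k * x1) / m
    return dx1, dx2

# crude fixed-step Euler integration from rest
dt, x1, x2 = 1e-3, 0.0, 0.0
for _ in range(int(60.0 / dt)):
    d1, d2 = state_derivs(x1, x2)
    x1, x2 = x1 + dt * d1, x2 + dt * d2

x_eq = (f_applied + m * g) / k         # static equilibrium: spring balances the loads
spring_energy = 0.5 * k * x1**2        # the output requested in part (a)
```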
https://kyushu-u.pure.elsevier.com/en/publications/influence-of-modified-neutron-emission-spectrum-on-tritium-produc
# Influence of modified neutron emission spectrum on tritium production performance in blanket systems with NBI-heated deuterium plasma

Tomoki Urakawa, Hideaki Matsuura, Yasuko Kawamoto, Satoshi Konishi

Research output: Contribution to journal › Article › peer-review

1 Citation (Scopus)

## Abstract

In high-temperature plasma sustained by neutral beam injection heating, the emission spectrum of the neutrons produced by fusion reactions is known to have a modified Gaussian distribution. In this study, neutron transport simulation is carried out for a plasma source that emits neutrons with a modified spectrum, and the influence of this modified neutron emission spectrum on the tritium production performance is investigated. The results show that in tritium production using D(d,n)3He neutrons, the rate of the 9Be(n,2n)2α neutron multiplication reaction, which has a threshold energy of 2.0 MeV, is significantly enhanced compared with that when a source with a Gaussian neutron emission spectrum is assumed. In addition, the influence of the blanket composition on the tritium production performance is discussed.

Original language: English
Pages: 678-682 (5 pages)
Journal: Fusion Engineering and Design, volume 136
DOI: https://doi.org/10.1016/j.fusengdes.2018.03.056
Published: November 2018

## All Science Journal Classification (ASJC) codes

• Civil and Structural Engineering
• Nuclear Energy and Engineering
• Materials Science (all)
• Mechanical Engineering
https://brilliant.org/problems/balanced-to-the-minimum/
# Balanced to the minimum

Algebra Level 5

For positive $$a$$, $$b$$ and $$c$$, find the minimum value of $\frac{(a+b)^{2}+(a+b+4c)^2}{abc}\, (a+b+c).$
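Not a solution, but a hedged numerical sanity check: the expression is homogeneous of degree zero, and random sampling suggests the minimum is $100$, attained at $(a,b,c)$ proportional to $(1,1,\tfrac12)$.

```python
import random

def expr(a, b, c):
    return ((a + b) ** 2 + (a + b + 4 * c) ** 2) / (a * b * c) * (a + b + c)

# the candidate minimizer (1, 1, 1/2) gives exactly 100
candidate = expr(1.0, 1.0, 0.5)

# random search never beats it (numerical evidence only, not a proof)
random.seed(1)
best = min(expr(random.uniform(0.05, 4), random.uniform(0.05, 4), random.uniform(0.05, 4))
           for _ in range(100000))
```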
https://hsm.stackexchange.com/questions/6059/who-first-had-the-idea-to-study-surfaces-via-rings-of-functions-as-in-algebraic
# Who first had the idea to study surfaces via rings of functions, as in algebraic geometry?

This idea now provides the foundations of algebraic geometry, and they have certainly gone down the rabbit hole with it. As a student studying this subject, I have always found it such a great leap to think that some ring of functions could have such a strong influence on geometry. Given the idea, it seems natural. But actually being the first to have the idea seems a great leap.

Does anyone have any information about the history of this idea? Who first thought of it, and perhaps why?

I believe the answer is Riemann, when studying what we now call Riemann surfaces. But he doesn't seem to have "gone down the rabbit hole" with it, so far as I can uncover. Were there any ideas like this beforehand?

A canonical reference on this is Dieudonné's History of Algebraic Geometry. An abridged version, Historical Development of Algebraic Geometry, is freely available; see also Easton's slides.

Let me make a general comment first. When we wonder "however did someone first connect these two [modern ideas]?" we tacitly presuppose that they were always separately available, waiting to be connected. But the truth often is that they were developed connected to each other. Riemann was indeed instrumental in creating the modern algebro-geometric framework, but he did not have the idea to study surfaces via rings of functions, for the simple reason that in his time the (general) concept of Riemann surfaces, let alone of rings of functions, did not exist. He was studying Abelian integrals; this led him to consider surfaces on which holomorphic and meromorphic functions, such as Abelian integrals, are defined. And by the time Kronecker and Dedekind-Weber developed the suitable algebraic concepts, they already had the connection on display in Riemann's work. So nobody had such an idea first.
Here are some details as described by Dieudonné:

"It is quite a paradox that in the work of this prodigious genius, out of which algebraic geometry emerges entirely regenerated, there is almost no mention of algebraic curves; it is from his theory of algebraic functions and their integrals that all of the birational geometry of the nineteenth and the beginning of the twentieth century issues. [...] Instead of starting (as would all his predecessors and most of his immediate successors) from an algebraic equation $F(s, z) = 0$ and the Riemann surface of the algebraic function $s$ of $z$ which it defines, his initial object is an $n$-sheeted Riemann surface without boundary and with a finite number of ramification points, given a priori without any reference to an algebraic equation... Thus, the abstract Riemann surface $S$ is, in fact, identical to that of the algebraic function $s(z)$ defined by $F(s, z) = 0$, and Riemann attaches to it what will, after Dedekind's time, be called the field of meromorphic (or rational) functions on $S$." [emphasis Dieudonné's]

Riemann's insights were absorbed in two foundational papers from 1882, by Kronecker (Grundzüge einer arithmetischen Theorie der algebraischen Grössen, Crelle's Journal, 92, 1–122) and Dedekind-Weber (Journal für die reine und angewandte Mathematik, 92, 181–290):

"The first task to which each school of algebraic geometry addressed itself was therefore the systematization of the birational theory of algebraic plane curves, incorporating most of Riemann's results with proofs in conformity with the principles of the school... just as Riemann had revealed the close relationship between algebraic varieties and the theory of complex manifolds, Kronecker and Dedekind-Weber brought to light for the first time the deep similarities between algebraic geometry and the burgeoning theory of algebraic numbers...
this conception of algebraic geometry is for us the clearest and simplest one, due to our familiarity with abstract algebra." Kronecker started defining varieties in terms of rings of polynomials vanishing on them, and developed the notions of subvariety and dimension in terms of ideals (which he called Modulsystems). "The goal of Dedekind and Weber in their fundamental paper was quite different and much more limited; namely, they gave purely algebraic proofs for all the algebraic results of Riemann. They start from the fact that, for Riemann, a class of isomorphic Riemann surfaces corresponds to a field $K$ of rational functions, which is a finite extension of the field $C(X)$ of rational fractions in one indeterminate over the complex field; what they set out to do, conversely, if a finite extension $K$ of the field $C(X)$ is given abstractly, is to reconstruct a Riemann surface $S$ such that $K$ will be isomorphic to the field of rational functions on $S$."

• Peter Freyd likes to say that asking who first invented an idea is the wrong question. The right question is who last invented it. Who invented it so well that no one else ever had to invent it again. – Colin McLarty Jul 3 '17 at 1:34

The idea is usually attributed to Dedekind and Weber in Theorie der algebraischen Functionen einer Veränderlichen (1882).

• thanks for the references; this looks like it is exactly what I need to read. – User0112358 May 24 '17 at 0:31

I don't think this was Riemann, or that Riemann knew that any ring of functions determines the surface. In fact, Riemann studied compact surfaces on which the ring of regular functions is trivial, and he studied the field of meromorphic functions instead. The idea that a ring of functions determines the space is of much later origin. It can be traced to Gelfand's theory of commutative Banach algebras, and was brought to algebraic geometry by Grothendieck.
http://mathhelpforum.com/calculus/174769-finding-equation-parabola-print.html
# Finding the equation of a parabola

• Mar 16th 2011, 10:33 AM Ellla Finding the equation of a parabola The area (S) is 2. How do I find the equation of this parabola? ;\ http://img846.imageshack.us/img846/3081/function.png I guess first of all I should start with this: $\int_{0}^1 ?? \, dx=2$ but it does not seem to help me at all...
• Mar 16th 2011, 11:03 AM emakarov If the function whose graph is this parabola is $f(x)=ax^2+bx+c$, you have three unknowns: a, b and c. You also have three equations: $\int_0^1f(x)\,dx=2$, f(0) = 0 and f(1) = 0.
• Mar 16th 2011, 11:20 AM TheChaz In other words, we know that x = 0 and x = 1 are roots, so x and (x - 1) are factors. So your quadratic is of the form ax(x - 1)...
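Putting the two replies together: with $f(x) = ax(x-1)$, the area condition fixes the single unknown $a$. A quick check with exact fractions (my own sketch, not part of the thread):

```python
from fractions import Fraction

# f(x) = a*x*(x - 1) = a*x^2 - a*x has roots at x = 0 and x = 1.
# Its integral over [0, 1] is a*(1/3 - 1/2) = -a/6, which must equal
# the given area, 2.
integral_per_a = Fraction(1, 3) - Fraction(1, 2)   # -1/6
a = Fraction(2) / integral_per_a                    # -12

# So the parabola is f(x) = -12x^2 + 12x: it opens downward and
# encloses an area of 2 with the x-axis between its roots.
```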
http://mathhelpforum.com/calculus/54696-partly-conceptual.html
# Math Help - Partly Conceptual

1. ## Partly Conceptual

What is $\int x^x \, dx$?

And if it is a non-integrable function, then what is the reason behind it?

2. Note that x^x = (e^(ln x))^x = e^(x ln(x)). This function has no elementary antiderivative; its integral cannot be written in terms of elementary functions. The proof is similar to the proof that e^(x²) has no elementary antiderivative, which is probably in your textbook. (Strictly speaking, the function is still integrable in the ordinary sense: definite integrals of it exist; it just has no closed-form antiderivative.)
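To make that distinction concrete: even though no elementary antiderivative exists, the definite integral $\int_0^1 x^x\,dx$ is a perfectly ordinary number (the "sophomore's dream" identity gives it as $\sum_{n\ge1}(-1)^{n+1}n^{-n} \approx 0.7834$). The numerical check below is my own sketch, not part of the original thread:

```python
# Compare a midpoint-rule approximation of the definite integral of
# x^x over [0, 1] with its known series representation.
def integral_x_to_x(n=200_000):
    h = 1.0 / n
    # Midpoint rule avoids evaluating 0^0 at the left endpoint.
    return h * sum(((i + 0.5) * h) ** ((i + 0.5) * h) for i in range(n))

# "Sophomore's dream": integral_0^1 x^x dx = sum_{k>=1} (-1)^(k+1) / k^k
series = sum((-1) ** (k + 1) / k**k for k in range(1, 20))

print(integral_x_to_x())  # ~0.7834
print(series)             # ~0.7834
```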
https://brilliant.org/practice/circle-properties-level-2-3-challenges/
Geometry Circle Properties: Level 2 Challenges

- Three semicircles (with equal radii) are drawn inside the large semicircle so that their diameters all sit on the diameter of the large semicircle. What is the ratio of the red area to the blue area?
- $O$ is the center of the circle to the right. Find, in degrees, $\angle QPR + \angle ORQ.$
- A circle is inscribed in a square as shown above. A smaller circle is drawn tangent to two sides of the square and externally tangent to the inscribed circle. Find the area of the blue shaded region to two decimal places.
- Five congruent circles overlap. A line is drawn connecting the bottom of the first circle to the top of the fifth circle. The area enclosed by the circles under the line is shaded gold. The overlapping areas of two circles each have an area of 5, and the gold area is 35. Find the area of one circle.
- $O$ is the center of the circle. Find $h.$
https://chemistry.stackexchange.com/questions/119220/which-electronic-transition-of-the-hydrogen-spectrum-corresponds-to-the-third-l
Which electronic transition of the hydrogen spectrum corresponds to "the third line from the red end"? Question In the Bohr series of lines of the hydrogen spectrum, the third line from the red end corresponds to which one of the following inner-orbit jumps of the electron between Bohr orbits in an atom of hydrogen? (A) $$3\to2$$ (B) $$5\to2$$ (C) $$4\to1$$ (D) $$2\to5$$ Only one option is correct. My approach I just drew the electron transition diagram like this: Then, I counted the third line from the red end, i.e., from the left side starting from the first line of the particular series. I then cross-checked against the given options. Surprisingly, two options matched: option B as well as C. I don't know how to decide which of B and C is correct, since only one answer is supposed to be right. No other details are mentioned in the question, and thus I am confused. • 4-1 and 5-2 fit the question as you suspect. Aug 15 '19 at 7:32 • It is kind of an odd question. However, since the question mentions a red line, I'd assume that it is referring to the Balmer series, which is in the visible range. Thus the answer would be (B). – MaxW Aug 15 '19 at 8:57 1 Answer Since only one answer is correct, we must consider transitions in the visible region only, i.e., the Balmer series. (If multiple answers were allowed, both options B and C would fit.) The transition $$n_2 \to n_1$$ can easily be calculated using the following formula, so it is not necessary to draw the transitions, which would be time-consuming for higher series: $$n_2=n_1 + n$$ where $$n$$ represents the $$n^{th}$$ line in a particular series. Since here we are talking about the Balmer series, $$n_1=2$$, and for the third line $$n=3$$. Substituting the values, $$n_2=2+3=5$$ Hence the required transition is given by option B.
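The ordering from the red end can also be verified directly from the Rydberg formula, $1/\lambda = R\,(1/n_1^2 - 1/n_2^2)$; the short script below is an illustrative sketch (the variable names are my own):

```python
# Rydberg formula for hydrogen emission: 1/lambda = R * (1/n1^2 - 1/n2^2)
R = 1.097373e7  # Rydberg constant, m^-1

def wavelength_nm(n1, n2):
    """Wavelength (nm) of the n2 -> n1 emission line."""
    return 1e9 / (R * (1 / n1**2 - 1 / n2**2))

# Balmer series (n1 = 2), listed from the red end:
balmer = [(n2, round(wavelength_nm(2, n2))) for n2 in range(3, 6)]
# [(3, 656), (4, 486), (5, 434)] -> the third line from the red end is 5 -> 2
```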
https://www.arxiv-vanity.com/papers/astro-ph/9910097/
# A New Cosmological Model of Quintessence and Dark Matter

Varun Sahni and Limin Wang

Inter-University Centre for Astronomy & Astrophysics, Post Bag 4, Pune 411007, India
Department of Physics, 538 West 120 Street, Columbia University, New York NY 10027, USA

###### Abstract

We propose a new class of quintessence models in which late-time oscillations of a scalar field give rise to an effective equation of state which can be negative and hence drive the observed acceleration of the universe. Our ansatz provides a unified picture of quintessence and a new form of dark matter we call Frustrated Cold Dark Matter (FCDM). FCDM inhibits gravitational clustering on small scales and could provide a natural resolution to the core density problem for disc galaxy halos. Since the quintessence field rolls towards a small value, constraints on slow-roll quintessence models are safely circumvented in our model.

###### pacs: PACS number(s): 04.62.+v, 98.80.Cq

The recent discovery that type Ia high redshift supernovae are fainter than they would be in an Einstein-de Sitter universe suggests that the universe may be accelerating, fuelled perhaps by a cosmological constant or some other field possessing long range 'repulsive' effects [3, 4]. The acceleration of the universe is related to the equation of state of matter through the Einstein equation

$$\frac{\ddot{a}}{a} = -\frac{4\pi G}{3}\left[\rho_c + \rho_X(1 + 3w_X)\right] \qquad (1)$$

for cold matter and X-matter with equation of state $p_X = w_X\rho_X$. Clearly a necessary (but not sufficient) condition for the universe to accelerate is $w_X < -1/3$. In other words, the equation of state of X-matter must violate the strong energy condition (SEC), so that $\rho_X + 3p_X < 0$.
Investigations of such cosmological models have demonstrated that they outperform most others in predicting the correct form for the large scale clustering spectrum, accounting for CMB anisotropies on large and intermediate angular scales, and providing excellent agreement with the luminosity-distance relation obtained from observations of high redshift supernovae [5]. In addition, flat models are compelling from a theoretical viewpoint since they agree with generic predictions made by the inflationary scenario. The literature describing phenomenological forms of matter violating the SEC is vast (see [6] for a recent review). Nevertheless, two kinds of matter have been singled out in recent times as being of special interest: 1. A cosmological constant ($w = -1$), 2. A scalar field rolling down a potential $V(\phi)$. For fields rolling sufficiently slowly, $\frac{1}{2}\dot\phi^2 \ll V(\phi)$ and $w_\phi \simeq -1$, so that $V(\phi)$ plays the role of a time-dependent $\Lambda$-term. Although appealing, models with the simplest potentials run into problems similar to those encountered by a cosmological constant. The enormous overdamping of the scalar field equation during radiation and matter dominated epochs causes $\phi$ to remain virtually unchanged from the Planck epoch until the present [7], resulting in an enormous difference between the scalar field energy density and that of matter/radiation at early times. This leads to a fine tuning problem: the relative values of the two densities must be set to very high levels of accuracy in order to ensure that they become comparable at precisely the present epoch. A more reasonable assumption might be that the energy density in the $\phi$-field were comparable to that of radiation at very early times – say at the end of inflation [8]. This might even be expected if the $\phi$-field were to be an inflationary relic, its energy set by an equipartition ansatz. However, for the $\phi$-field to remain subdominant until recently, its energy density must decrease rapidly at early times.
Such behaviour clearly cannot arise for polynomial potentials, for which the scalar field energy density will rapidly dominate the total density, resulting in a colossal $\Lambda$-term today. Fortunately, there do exist families of potentials for which the behaviour of $\rho_\phi$ is more flexible. To illustrate this, consider a minimally coupled scalar field rolling down the potential

$$V(\phi) = V_0\left(\cosh\lambda\phi - 1\right)^p. \qquad (2)$$

This potential has the asymptotic forms

$$V(\phi) \simeq \tilde V_0\, e^{-p\lambda\phi} \quad \text{for } |\lambda\phi| \gg 1 \ (\phi < 0), \qquad (3)$$

$$V(\phi) \simeq \tilde V_0\, (\lambda\phi)^{2p} \quad \text{for } |\lambda\phi| \ll 1, \qquad (4)$$

where $\tilde V_0 = V_0/2^p$. Scalar field models with this potential have the attractive property that the energy density in $\phi$ tracks the radiation/matter component as long as $\lambda\phi$ is large and negative, so that [9]:

$$\frac{\rho_\phi}{\rho_B + \rho_\phi} = \frac{3(1 + w_B)}{p^2\lambda^2} \qquad (5)$$

($w_B = 0,\ 1/3$ respectively for dust, radiation). During later times the form of $V(\phi)$ changes to the power law (4), resulting in rapid oscillations of $\phi$ about $\phi = 0$. The change in the form of the scalar field potential is accompanied by an important change in the equation of state of the scalar field. As long as $V(\phi)$ is described by (3), the kinetic energy of the scalar remains larger than its potential energy and the scalar field equation of state mimics that of the background matter. However, during the oscillatory phase the kinetic energy can become smaller than the potential energy; the virial theorem then gives the following expression for the mean equation of state [10]:

$$\langle w_\phi\rangle = \left\langle \frac{\frac{1}{2}\dot\phi^2 - V(\phi)}{\frac{1}{2}\dot\phi^2 + V(\phi)} \right\rangle = \frac{p-1}{p+1}. \qquad (6)$$

The corresponding scalar field density and expansion factor are given by

$$\rho_\phi \propto a^{-3(1 + w_\phi)}, \qquad (7)$$

$$a \propto t^c, \qquad c = \frac{2}{3}\left(1 + \langle w_\phi\rangle\right)^{-1}. \qquad (8)$$

From (6), (7) & (8) we find that the mean equation of state, the scalar field density, and the expansion rate of the universe depend sensitively upon the value of the parameter $p$ in the potential (2). Three values of $p$ should be singled out for particular attention since they give rise to cosmologically interesting solutions: 1.
$p = 1$: In this case the scalar field equation of state behaves like that of pressureless (cold) matter or dust, $\langle w_\phi\rangle = 0$; a scalar field potential with this value of $p$ could therefore play the role of cold dark matter (CDM) in the universe. 2. $p = 1/2$: This results in $\langle w_\phi\rangle = -1/3$ and $\rho_\phi \propto a^{-2}$. This choice of the parameter leads to a 'coasting' form for the scale factor at late times: $a \propto t$. A flat universe under the influence of this potential will therefore have exactly the same expansion properties as an open universe without being plagued by the 'omega problem'! (See [11] for related scenarios.) 3. Smaller values $p < 1/2$ lead to $\langle w_\phi\rangle < -1/3$. From (7) we find that the scalar field density then falls off slower than either radiation ($\rho_r \propto a^{-4}$) or cold matter ($\rho_m \propto a^{-3}$). The scalar field therefore dominates the mass density in the universe at late times, leading to accelerated expansion according to (8). The epoch of scalar field dominance commences at a low cosmological redshift. We therefore find that a scalar field with the potential (2) might serve as a good candidate for quintessence. Figure 1 confirms this by showing the density parameter $\Omega_\phi$ as a function of the cosmological scale factor. We find that the ratio $\rho_\phi/\rho_B$ remains approximately constant during the prolonged epoch of radiation/matter domination as the $\phi$-field tracks the dominant radiation/matter component. (A sufficiently small $\Omega_\phi$ is necessary in order to satisfy nucleosynthesis constraints.) At the end of the matter dominated epoch $\Omega_\phi$ begins to grow as the scalar field equation of state turns negative in response to rapid oscillations of $\phi$. As in earlier tracker quintessence models [8], present-day values of $\Omega_m$ and $\Omega_\phi$ consistent with supernovae [3, 4] and other observations [5] can be obtained for a large class of initial conditions. (General potentials leading to oscillatory quintessence must satisfy $\langle w_\phi\rangle < -1/3$, where angular brackets denote the time average over a single oscillation [12].
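The dependence of the time-averaged equation of state (6) on the exponent $p$ is easy to tabulate; the sketch below simply evaluates $\langle w_\phi\rangle = (p-1)/(p+1)$ at the three special values discussed above:

```python
from fractions import Fraction

def w_mean(p):
    """Virial-theorem average equation of state for V ~ phi^(2p)."""
    p = Fraction(p)
    return (p - 1) / (p + 1)

print(w_mean(1))               # 0    -> dust-like, a CDM candidate
print(w_mean(Fraction(1, 2)))  # -1/3 -> 'coasting' universe, a ~ t
print(w_mean(Fraction(1, 4)))  # -3/5 -> SEC violated, acceleration
```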
We assume that the quintessence field couples very weakly to other matter fields so that rapid oscillations of $\phi$ do not result in particle production of the kind associated with 'preheating'.) Our potential belongs to the general category of exponential potentials which are frequently encountered in field theory [13], condensed matter physics [14], and as solutions to the non-perturbative renormalization group equations [15]. In figure 2 we compare the redshift dependence of the luminosity distance for a specific realisation of our quintessence model with that obtained from supernovae observations.

FIG. 2. The luminosity distance vs. redshift for the model shown in fig. 1 (solid line). The dashed line is the standard CDM model (shown for comparison) and the horizontal line corresponds to the fiducial empty Milne universe. The filled circles show supernovae data from the High-Z Supernova Search Team [4] and the opaque circles show data from the Supernova Cosmology Project [3]. The low-z supernovae are from the Calan-Tololo sample.

We would also like to draw attention to the possibility of a unified picture of quintessence and cold dark matter in which both components are described by a pair of scalar fields evolving under the action of the potential (2) but with different values of the exponent $p$:

$$V(\phi, \psi) = V_\phi\left(\cosh\lambda_\phi\phi - 1\right)^{p_\phi} + V_\psi\left(\cosh\lambda_\psi\psi - 1\right)^{p_\psi} \qquad (9)$$

with $p = 1$ in the case of CDM and $p < 1/2$ in the case of quintessence. This approach (along the lines suggested by [16]) ameliorates the 'coincidence problem' between dark matter (CDM) and quintessence which arises in standard cosmology. It also significantly reduces the discrepancy between the present value of the dark energy density and that at the end of inflation. Figure 3 shows a working example. It is interesting that the CDM particle in this scenario can be ultra-light: its mass is related to the epoch when the CDM field begins to oscillate, and its Compton wavelength can easily be of order a kilo-parsec or smaller.
(In the cosmological model illustrated in figure 3, the CDM field begins to oscillate early enough that its Compton wavelength is well below a kilo-parsec.) Cold dark matter made up of a condensate of ultra-light particles would be frustrated in its attempts to cluster on scales smaller than its Compton wavelength because of the uncertainty principle; the resulting Frustrated Cold Dark Matter (FCDM) model might provide a natural explanation for two major difficulties faced by the standard CDM scenario. (i) The dearth of halo dwarf galaxies: the number of dwarfs in the local group is an order of magnitude smaller than predicted by N-body simulations of SCDM [17]. (ii) The discrepancy between observed shapes of galaxy rotation curves and simulated dark matter halos. Recent observations of low surface brightness (LSB) galaxies show them to possess rotation curves which indicate a constant mass density in the central core region. These observations are difficult to accommodate within the SCDM model, since high resolution N-body simulations of SCDM halos indicate a cuspy central density profile ($\rho \propto 1/r$) in the core region [18, 19, 20, 21, 22].

FIG. 3. The evolution of the dimensionless density parameter for the CDM field (dashed line) and quintessence field (thin solid line). Baryon (dash-dotted line) and radiation densities (thick solid line) are also shown. The parameters for the quintessence field have been given in the caption of figure 1.

It is both interesting and revealing that a physical mechanism suppressing small scale clustering arises in the FCDM model at the purely classical level. To demonstrate this we note that once $\psi$ begins oscillating, $V(\psi)$ acquires the quadratic form $V \simeq \frac{1}{2}m^2\psi^2$. Using the relationship between pressure and density for an oscillating field [23] and assuming that the gradient energy is subdominant, one obtains expressions for the mean pressure and the mean density of the field. The motion of $\psi$ is driven mainly by the quadratic term (which also provides the dominant contribution to the energy density); the resulting mean sound speed can be used to establish the Jeans scale.
Substituting the resulting expression for the speed of sound into the Jeans length, we get

$$\lambda_J \simeq \sqrt{\frac{m_{Pl}^2}{2V_0}} \equiv \frac{\lambda}{\sqrt{2}}\left(\frac{m_{Pl}}{m}\right). \qquad (10)$$

We therefore find that the Jeans length is larger than the Compton wavelength ($\sim m^{-1}$) if $\lambda > \sqrt{2}/m_{Pl}$ [24]. For the FCDM model illustrated in figure 3, one finds $\lambda_J \simeq 1$ kpc. A lighter particle will possess a larger Jeans length, while $\lambda_J$ is smaller for a $\psi$-field which began oscillating much earlier; a very massive field will resemble standard CDM. By inhibiting gravitational clustering on scales smaller than $\sim 1$ kpc, FCDM is expected to give rise to galaxy halos which are less centrally concentrated, leading to better agreement between theory and observations. Finally, we note that quintessence-type potentials could also arise in particle physics models which invoke the Peccei-Quinn mechanism to solve the strong CP problem in QCD. Consider for instance the following simple modification to the symmetry breaking potential responsible for the axion:

$$V(\phi) = \lambda\left(|\phi|^2 - \frac{f^2}{2}\right)^2 + m^2 f^2 \left(1 - \cos\theta\right)^p. \qquad (11)$$

The second term in (11), when expanded about $\theta = 0$, acquires the form $m^2 f^2\,(\theta^2/2)^p \propto \theta^{2p}$. Accordingly, rapid oscillations of the $\theta$ field about $\theta = 0$ now give rise to an equation of state described by (6), resulting in $\langle w\rangle < -1/3$ for $p < 1/2$. (In the standard scenario $p = 1$; $m$ is the axion mass and $f$ is the Peccei-Quinn symmetry breaking scale.) As a result, an axion-like scalar with $p < 1/2$ will be a candidate for quintessence, since its energy density will diminish more slowly than that of either matter or radiation, leading to the dominance of the $\theta$-field at late times and the accelerated expansion of the universe. It should be noted that motion under the action of the potentials (2) & (11) is well defined even though, for $p < 1$, derivatives of the potential are weakly singular at the origin. One can make the potential mathematically more appealing by a suitable field redefinition; this will not affect our results in any significant way. It is also worth mentioning that the effective pressure of the scalar field oscillates rapidly during the oscillatory stage.
This is likely to affect very long wavelength fluctuations in $\phi$ for the potential (2), since the scalar field begins oscillating fairly recently in this case. Finally, we would like to point out an important distinction which exists between the quintessence models suggested by us and those of [8]. In our models the quintessence field $\phi$ (or $\theta$) oscillates about a small value at late times (formally $\phi \to 0$ as $t \to \infty$). Additionally, the potential is not constrained to be flat, since the field does not have to slow-roll in order to give rise to a negative equation of state. As a result, quantum corrections, which might be significant in quintessence models in which the scalar field rolls down a flat potential to large values [25], can safely be ignored in models of the kind discussed in the present paper.

We acknowledge the hospitality of the organisers of the Santa Fe Summer Workshop on Structure Formation and Dark Matter, June 28 - July 16, 1999, where this work was initiated. We thank Lloyd Knox for pointing out that the Compton wavelength of our CDM particle has an astrophysically interesting length scale, which subsequently led to many interesting discussions. Useful discussions with Carlos Frenk, Marc Kamionkowski, Somak Raychaudhury, Jim Peebles, Rachel Somerville, Alexei Starobinsky and Neil Turok are also acknowledged. LW was supported by DoE grant DE-FG02-92ER40699.
https://en.wikipedia.org/wiki/Magnetic_tweezers
# Magnetic tweezers

Magnetic tweezers (MT) are scientific instruments for the manipulation and characterization of biomolecules or polymers. These apparatuses exert forces and torques on individual molecules or groups of molecules. They can be used to measure the tensile strength of molecules or the forces molecules generate. Most commonly, magnetic tweezers are used to study mechanical properties of biological macromolecules like DNA or proteins in single-molecule experiments. Other applications are the rheology of soft matter and studies of force-regulated processes in living cells. Forces are typically on the order of pico- to nanonewtons. Due to their simple architecture, magnetic tweezers are a popular biophysical tool.

In experiments, the molecule of interest is attached to a magnetic microparticle. The magnetic tweezer is equipped with magnets that are used to manipulate the magnetic particles, whose position is measured with the help of video microscopy.

## Construction principle and physics of magnetic tweezers

A magnetic tweezers apparatus consists of magnetic micro-particles, which can be manipulated with the help of an external magnetic field. The position of the magnetic particles is then determined by a microscope objective with a camera. Typical configuration for magnetic tweezers; only the experimental volume is shown.

### Magnetic particles

Magnetic particles for operation in magnetic tweezers come with a wide range of properties and have to be chosen according to the intended application. Two basic types of magnetic particles are described in the following paragraphs; however, there are also others, like magnetic nanoparticles in ferrofluids, which allow experiments inside a cell.

Superparamagnetic beads are commercially available with a number of different characteristics. The most common is the use of spherical particles of a diameter in the micrometer range. They consist of a porous latex matrix in which magnetic nanoparticles have been embedded.
Latex is auto-fluorescent and may therefore be advantageous for the imaging of their position. Irregularly shaped particles present a larger surface and hence a higher probability of binding to the molecules to be studied.[1] The coating of the microbeads may also contain ligands able to attach the molecules of interest. For example, the coating may contain streptavidin, which couples strongly to biotin, which itself may be bound to the molecules of interest.

When exposed to an external magnetic field, these microbeads become magnetized. The induced magnetic moment $\vec{m}(\vec{B})$ is proportional to a weak external magnetic field $\vec{B}$:

$$\vec{m}(\vec{B}) = \frac{V\chi\vec{B}}{\mu_0}$$

where $\mu_0$ is the vacuum permeability. It is also proportional to the volume $V$ of the microspheres, which stems from the fact that the number of magnetic nanoparticles scales with the size of the bead. The magnetic susceptibility $\chi$ is assumed to be scalar in this first estimation and may be calculated by $\chi = 3\,\frac{\mu_r - 1}{\mu_r + 2}$, where $\mu_r$ is the relative permeability. In a strong external field, the induced magnetic moment saturates at a material-dependent value $\vec{m}_{sat}$.
The force $\vec{F}$ experienced by a microbead can be derived from the potential $U = -\frac{1}{2}\vec{m}(\vec{B})\cdot\vec{B}$ of this magnetic moment in an outer magnetic field:[2]

$$\vec{F} = -\vec{\nabla}U = \begin{cases} \dfrac{V\chi}{2\mu_0}\,\vec{\nabla}\left|\vec{B}\right|^2 & \text{in a weak magnetic field} \\[1ex] \dfrac{1}{2}\,\vec{\nabla}\left(\vec{m}_{sat}\cdot\vec{B}\right) & \text{in a strong magnetic field} \end{cases}$$

The outer magnetic field can be evaluated numerically with the help of finite element analysis or by simply measuring the magnetic field with the help of a Hall effect sensor. Theoretically it would be possible to calculate the force on the beads with these formulae; however, the results are not very reliable due to uncertainties in the involved variables, but they allow estimating the order of magnitude and help to better understand the system. More accurate numerical values can be obtained by considering the Brownian motion of the beads.

Due to anisotropies in the stochastic distribution of the nanoparticles within the microbead, the magnetic moment is not perfectly aligned with the outer magnetic field, i.e. the magnetic susceptibility tensor cannot be reduced to a scalar.
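The weak-field expression above gives a quick order-of-magnitude estimate of typical forces; in the sketch below, every numeric value (bead size, susceptibility, field, gradient) is an illustrative assumption rather than data from a particular instrument:

```python
import math

# Force on a superparamagnetic bead in a weak field:
#   F = (V * chi / (2 * mu0)) * grad(|B|^2)
mu0 = 4 * math.pi * 1e-7     # vacuum permeability, T*m/A
radius = 0.5e-6              # radius of a 1 um diameter bead, m
chi = 1.0                    # effective susceptibility (assumed)
V = (4 / 3) * math.pi * radius**3

B = 0.1                      # field magnitude at the bead, T (assumed)
dB_dx = 100.0                # field gradient, T/m (assumed)
grad_B2 = 2 * B * dB_dx      # gradient of |B|^2, T^2/m

F = V * chi / (2 * mu0) * grad_B2
print(f"F = {F:.1e} N")      # a few piconewtons
```

The result lands in the pico- to nanonewton range quoted in the introduction, which is a useful sanity check on the formula.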
Because of this anisotropy, the beads are also subjected to a torque $\vec{\Gamma}$ which tries to align $\vec{m}$ and $\vec{B}$:

$$\vec{\Gamma} = \vec{m} \times \vec{B}$$

The torques generated by this method are typically much greater than $10^3\,\mathrm{pN\,nm}$, which is more than necessary to twist the molecules of interest.[3]

#### Ferromagnetic nanowires

The use of ferromagnetic nanowires for the operation of magnetic tweezers enlarges their experimental application range. The length of these wires is typically on the order of tens of nanometers up to tens of micrometers, which is much larger than their diameter. In comparison with superparamagnetic beads, they allow the application of much larger forces and torques. In addition, they present a remanent magnetic moment, which allows operation in weak magnetic field strengths. It is possible to produce nanowires with surface segments that present different chemical properties, which allows controlling the position where the studied molecules can bind to the wire.[1]

### Magnets

To be able to exert torques on the microbeads, at least two magnets are necessary, but many other configurations have been realized, ranging from only one magnet that only pulls the magnetic microbeads to a system of six electromagnets that allows full control of the 3-dimensional position and rotation via a digital feedback loop.[4] The magnetic field strength decreases roughly exponentially with the distance from the axis linking the two magnets, on a typical scale of about the width of the gap between the magnets. Since this scale is rather large in comparison to the distances the microbead moves in an experiment, the force acting on it may be treated as constant.
Therefore, magnetic tweezers are passive force clamps by the nature of their construction, in contrast to optical tweezers, although they may also be used as position clamps when combined with a feedback loop. The field strength may be increased by sharpening the pole face of the magnet, which, however, also diminishes the area where the field may be considered constant. An iron ring connecting the outer poles of the magnets may help to reduce stray fields. Magnetic tweezers can be operated with both permanent magnets and electromagnets; the two techniques have their specific advantages.[3]

#### Permanent magnets

Permanent magnets for magnetic tweezers are usually made of rare-earth materials, like neodymium, and can reach field strengths exceeding 1.3 tesla.[5] The force on the beads may be controlled by moving the magnets along the vertical axis: moving them up decreases the field strength at the position of the bead and vice versa. Torques may be exerted on the magnetic beads by turning the magnets around the vertical axis to change the direction of the field. Both the size of the magnets and their spacing are on the order of millimeters.[3]

#### Electromagnets

The use of electromagnets in magnetic tweezers has the advantage that the field strength and direction can be changed just by adjusting the amplitude and the phase of the current through the magnets. For this reason, the magnets do not need to be moved, which allows faster control of the system and reduces mechanical noise. In order to increase the maximum field strength, a core of a soft paramagnetic material with high saturation and low remanence may be added to the solenoid. In any case, however, the typical field strengths are much lower than those of permanent magnets of comparable size.
Additionally, using electromagnets requires high currents that produce heat, which may necessitate a cooling system.[1]

### Bead tracking

The displacement of the magnetic beads corresponds to the response of the system to the imposed magnetic field and hence needs to be precisely measured: In a typical set-up, the experimental volume is illuminated from the top, so that the beads produce diffraction rings in the focal plane of an objective placed under the tethering surface. The diffraction pattern is then recorded by a CCD camera, and the image can be analyzed in real time by a computer. Detecting the position in the plane of the tethering surface is not complicated, since it corresponds to the center of the diffraction rings; the precision can be up to a few nanometers. For the position along the vertical axis, the diffraction pattern needs to be compared to reference images, which show the diffraction pattern of the considered bead at a number of known distances from the focal plane. These calibration images are obtained by keeping a bead fixed while displacing the objective, i.e. the focal plane, by known distances with the help of piezoelectric elements. With the help of interpolation, the resolution can reach a precision of up to 10 nm along this axis.[6] The obtained coordinates may be used as input for a digital feedback loop that controls the magnetic field strength, for example in order to keep the bead at a certain position.

Non-magnetic beads are usually also added to the sample as a reference to provide a background displacement vector. They have a different diameter than the magnetic beads, so that they are optically distinguishable. This is necessary to detect potential drift of the fluid: for example, if the density of magnetic particles is too high, they may drag the surrounding viscous fluid with them.
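The axial look-up-table scheme can be sketched as follows. The calibration stack, the similarity scores, and the parabolic interpolation step are simplified stand-ins for the real image-correlation pipeline; in an actual set-up the scores would come from comparing radial intensity profiles of the diffraction rings:

```python
import numpy as np

# Sketch with assumed data: the axial position is found by matching the live
# diffraction pattern against a calibration stack recorded at known focal
# offsets, then interpolating between neighboring slices for sub-step accuracy.
z_calib = np.arange(0.0, 3000.0, 100.0)   # stack positions (nm), 100 nm steps
# Stand-in similarity scores peaking near an assumed true position of 1234 nm:
scores = -np.abs(z_calib - 1234.0)

# Parabolic interpolation around the best-matching slice:
i = int(np.argmax(scores))
s_m, s_0, s_p = scores[i - 1], scores[i], scores[i + 1]
dz = 0.5 * (s_m - s_p) / (s_m - 2.0 * s_0 + s_p) * 100.0
z_estimate = z_calib[i] + dz
print(f"z ~ {z_estimate:.0f} nm")
```

Even with 100 nm calibration steps, the interpolated estimate lands within roughly 10 nm of the assumed true position, matching the precision quoted above. In practice this tracking is applied to the magnetic and the reference beads alike, so that the background displacement can be subtracted.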
The displacement vector of a magnetic bead can be determined by subtracting its initial position vector and this background displacement vector from its current position.

## Force Calibration

The force exerted by the magnetic field on a magnetic bead can be determined by considering the thermal fluctuations of the bead in the horizontal plane: The problem is rotationally symmetric with respect to the vertical axis; hereafter one arbitrarily picked direction in the symmetry plane is called ${\displaystyle x}$. The analysis is the same for the direction orthogonal to the x-direction and may be used to increase precision. If the bead leaves its equilibrium position on the ${\displaystyle x}$-axis by ${\displaystyle \delta x}$ due to thermal fluctuations, it will be subjected to a restoring force ${\displaystyle F_{x}}$ that increases linearly with ${\displaystyle \delta x}$ in the first-order approximation. Considering only absolute values of the involved vectors, it is geometrically clear that the proportionality constant is the force exerted by the magnets ${\displaystyle F}$ over the length ${\displaystyle l}$ of the molecule that keeps the bead anchored to the tethering surface:

*Figure: geometry of the forces acting on the magnetic bead.*

${\displaystyle F_{x}={\frac {F}{l}}\delta x}$.

The equipartition theorem states that the mean energy stored in this "spring" is equal to ${\displaystyle {\frac {1}{2}}k_{B}T}$ per degree of freedom. Since only one direction is considered here, the potential energy of the system reads:

${\displaystyle \langle E_{p}\rangle ={\frac {1}{2}}{\frac {F}{l}}\langle \delta x^{2}\rangle ={\frac {1}{2}}k_{B}T}$.

From this, a first estimate for the force acting on the bead can be deduced:

${\displaystyle F={\frac {lk_{B}T}{\langle \delta x^{2}\rangle }}}$.

For a more accurate calibration, however, an analysis in Fourier space is necessary.
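The equipartition estimate above can be checked on simulated data. The tether length, temperature, and applied force below are assumed values; the bead positions are drawn from the Gaussian distribution implied by the harmonic approximation:

```python
import numpy as np

# Recover the pulling force from simulated transverse fluctuations via
# F = l * kB*T / <dx^2>. All parameter values are assumptions.
KB_T = 4.11e-21          # thermal energy at ~298 K (J)
L = 1.0e-6               # tether length (m), assumed
F_TRUE = 1.0e-12         # 1 pN, the force we pretend the magnets apply

rng = np.random.default_rng(0)
sigma = np.sqrt(KB_T * L / F_TRUE)            # fluctuation amplitude, ~64 nm
dx = rng.normal(0.0, sigma, size=200_000)     # simulated bead positions (m)

f_est = L * KB_T / np.mean(dx**2)
print(f"estimated force: {f_est*1e12:.3f} pN")
```

With 2 x 10^5 samples the estimate converges to the assumed 1 pN to within a fraction of a percent; with the much shorter traces of a real experiment, the statistical error is correspondingly larger, which is one motivation for the Fourier-space analysis that follows.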
The power spectral density ${\displaystyle P(\omega )}$ of the position of the bead is experimentally available. A theoretical expression for this spectrum is derived in the following; it can then be fitted to the experimental curve in order to obtain the force exerted by the magnets on the bead as a fitting parameter. By definition this spectrum is the squared modulus of the Fourier transform of the position ${\displaystyle X(\omega )}$ over the spectral bandwidth ${\displaystyle \Delta f}$:

${\displaystyle P(\omega )={\frac {\left|X(\omega )\right|^{2}}{\Delta f}}}$

${\displaystyle X(\omega )}$ can be obtained by considering the equation of motion for a bead of mass ${\displaystyle m}$:

${\displaystyle m{\frac {\partial ^{2}x(t)}{\partial t^{2}}}=-6\pi R\eta {\frac {\partial x(t)}{\partial t}}-{\frac {F}{l}}x(t)+f(t)}$

The term ${\displaystyle 6\pi R\eta {\frac {\partial x(t)}{\partial t}}}$ corresponds to the Stokes friction force for a spherical particle of radius ${\displaystyle R}$ in a medium of viscosity ${\displaystyle \eta }$, ${\displaystyle {\frac {F}{l}}x(t)}$ is the restoring force, and ${\displaystyle f(t)}$ is the stochastic force due to the Brownian motion. Here, one may neglect the inertial term ${\displaystyle m{\frac {\partial ^{2}x(t)}{\partial t^{2}}}}$, because the system is in a regime of very low Reynolds number ${\displaystyle \left(\mathrm {Re} <10^{-5}\right)}$.[1]

The equation of motion can be Fourier transformed by inserting the driving force and the position in Fourier space:

{\displaystyle {\begin{aligned}f(t)=&{\frac {1}{2\pi }}\int F(\omega )\mathrm {e} ^{i\omega t}\mathrm {d} \omega \\x(t)=&{\frac {1}{2\pi }}\int X(\omega )\mathrm {e} ^{i\omega t}\mathrm {d} \omega .\end{aligned}}}

${\displaystyle X(\omega )={\frac {F(\omega )}{6\pi iR\eta \omega +{\frac {F}{l}}}}}$.
The power spectral density of the stochastic force ${\displaystyle F(\omega )}$ can be derived by using the equipartition theorem and the fact that Brownian collisions are completely uncorrelated:[7]

${\displaystyle {\frac {\left|F(\omega )\right|^{2}}{\Delta f}}=4k_{B}T\cdot 6\pi \eta R}$

This corresponds to the fluctuation-dissipation theorem. With this expression, it is possible to give a theoretical expression for the power spectrum:

${\displaystyle P(\omega )={\frac {24\pi k_{B}T\eta R}{36\pi ^{2}R^{2}\eta ^{2}\omega ^{2}+\left({\frac {F}{l}}\right)^{2}}}}$

The only unknown in this expression, ${\displaystyle F}$, can be determined by fitting it to the experimental power spectrum. For more accurate results, one may subtract the effect of the finite camera integration time from the experimental spectrum before doing the fit.[6]

Another force calibration method uses the viscous drag of the microbeads: the microbeads are pulled through the viscous medium while their position is recorded. Since the Reynolds number for the system is very low, it is possible to apply Stokes' law to calculate the friction force, which is in equilibrium with the force exerted by the magnets:

${\displaystyle F=6\pi \eta Rv}$.

The velocity ${\displaystyle v}$ can be determined from the recorded position data. The force obtained via this formula can then be related to a given configuration of the magnets, which may serve as a calibration.[8]

## Typical experimental set-up

*Figure: schematic torsion-extension curves of DNA at different forces in the piconewton range.*

This section gives an example of an experiment carried out by Strick, Allemand, and Croquette[9] with the help of magnetic tweezers. A double-stranded DNA molecule is fixed with multiple binding sites on one end to a glass surface and on the other to a magnetic microbead, which can be manipulated in a magnetic tweezers apparatus. By turning the magnets, torsional stress can be applied to the DNA molecule.
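As a consistency check of the Lorentzian spectrum given in the calibration section above, its zero-frequency plateau ${\displaystyle P(0)=24\pi k_{B}T\eta R/(F/l)^{2}}$ can be inverted for the force, which is essentially what the fit determines. All parameter values below are assumed for illustration:

```python
import math

# Evaluate the Lorentzian power spectrum and recover the force from its
# zero-frequency plateau. All parameter values are assumptions.
KB_T = 4.11e-21    # thermal energy (J)
ETA = 1.0e-3       # viscosity of water (Pa*s)
R = 0.5e-6         # bead radius (m)
L = 1.0e-6         # tether length (m)
F = 2.0e-12        # 2 pN, assumed applied force

def psd(omega):
    num = 24.0 * math.pi * KB_T * ETA * R
    den = 36.0 * math.pi**2 * R**2 * ETA**2 * omega**2 + (F / L)**2
    return num / den

# Invert the plateau: P(0) = num / (F/l)^2  =>  F = l * sqrt(num / P(0))
f_recovered = L * math.sqrt(24.0 * math.pi * KB_T * ETA * R / psd(0.0))
print(f"recovered force: {f_recovered*1e12:.2f} pN")
```

The recovered force equals the assumed input, and the spectrum falls off above the corner frequency, as expected for an overdamped tethered bead.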
Rotations in the sense of the DNA helix are counted positively and vice versa. While twisting, the magnetic tweezers also allow stretching of the DNA molecule, so torsion-extension curves may be recorded at different stretching forces. For low forces (less than about 0.5 pN), the DNA forms supercoils, so-called plectonemes, which decrease the extension of the DNA molecule quite symmetrically for positive and negative twists. Increasing the pulling force already increases the extension for zero imposed torsion. Positive twists again lead to plectoneme formation that reduces the extension. Negative twist, however, does not change the extension of the DNA molecule much; this can be interpreted as the separation of the two strands, which corresponds to the denaturation of the molecule. In the high-force regime, the extension is nearly independent of the applied torsional stress; the interpretation is the appearance of local regions of highly overwound DNA. The ionic strength of the solution is also an important parameter of this experiment, as it affects the critical values of the applied pulling force that separate the three force regimes.[9]

## History and development

*Figure: Crick at Cambridge University.*

Applying magnetic theory to the study of biology is a biophysical approach that started to appear in Germany in the early 1920s. Possibly the first demonstration was published by Alfred Heilbronn in 1922; his work looked at the viscosity of protoplasts.[10] The following year, Freundlich and Seifriz explored rheology in echinoderm eggs. Both studies involved inserting magnetic particles into cells and observing their resulting movement in a magnetic field gradient.[11]

*Figure: Dr. Fell at her lab in Cambridge in the 1950s.*

In 1949 at Cambridge University, Francis Crick and Arthur Hughes demonstrated a novel use of the technique, calling it "The Magnetic Particle Method." The idea, which originally came from Dr.
Honor Fell, was that tiny magnetic beads, phagocytosed by whole cells grown in culture, could be manipulated by an external magnetic field. The tissue culture was allowed to grow in the presence of the magnetic material, and cells that contained a magnetic particle could be seen with a high-power microscope. As the magnetic particle was moved through the cell by a magnetic field, measurements of the physical properties of the cytoplasm were made.[12] Although some of their methods and measurements were self-admittedly crude, their work demonstrated the usefulness of magnetic field particle manipulation and paved the way for further developments of this technique. The magnetic particle phagocytosis method continued to be used for many years to research cytoplasm rheology and other physical properties in whole cells.[13][14]

An innovation in the 1990s led to an expansion of the technique's usefulness in a way that was similar to the then-emerging optical tweezers method. Chemically linking an individual DNA molecule between a magnetic bead and a glass slide allowed researchers to manipulate a single DNA molecule with an external magnetic field. Upon application of torsional forces to the molecule, deviations from free-form movement could be measured against theoretical standard force curves or Brownian motion analysis. This provided insight into structural and mechanical properties of DNA, such as elasticity.[15][16]

Magnetic tweezers as an experimental technique has become exceptionally diverse in use and application. More recently, even more novel methods have been introduced or proposed. Since 2002, the potential for experiments involving many tethering molecules and parallel magnetic beads has been explored, shedding light on interaction mechanics, especially in the case of DNA-binding proteins.[17] A technique was published in 2005 that involved coating a magnetic bead with a molecular receptor and the glass slide with its ligand.
This allows for a unique look at receptor-ligand dissociation force.[18] In 2007, a new method for magnetically manipulating whole cells was developed by Kollmannsberger and Fabry. The technique involves attaching beads to the extracellular matrix and manipulating the cell from outside the membrane to look at structural elasticity.[11] This method continues to be used as a means of studying rheology, as well as cellular structural proteins.[19] A study published in 2013 used magnetic tweezers to mechanically measure the unwinding and rewinding of a single neuronal SNARE complex by tethering the entire complex between a magnetic bead and the slide, and then using the applied magnetic field force to pull the complex apart.[20]

## Biological applications

### Magnetic tweezer rheology

Magnetic tweezers can be used to measure mechanical properties such as rheology, the study of matter flow and elasticity, in whole cells. The phagocytosis method previously described is useful for capturing a magnetic bead inside a cell. Measuring the movement of the beads inside the cell in response to manipulation from the external magnetic field yields information on the physical environment inside the cell and internal media rheology: viscosity of the cytoplasm, rigidity of internal structure, and ease of particle flow.[12][13][14]

A whole cell may also be magnetically manipulated by attaching a magnetic bead to the extracellular matrix via fibronectin-coated magnetic beads. Fibronectin is a protein that binds to extracellular membrane proteins. This technique allows for measurements of cell stiffness and provides insights into the functioning of structural proteins.[11] The schematic at right depicts the experimental setup devised by Bonakdar and Schilling, et al. (2015)[19] for studying the structural protein plectin in mouse cells. Stiffness was measured as proportional to bead position in response to external magnetic manipulation.
### Single-molecule experiments

The use of magnetic tweezers as a single-molecule method is decidedly their most common application in recent years. Through the single-molecule method, magnetic tweezers provide a close look into the physical and mechanical properties of biological macromolecules. Similar to other single-molecule methods, such as optical tweezers, this method provides a way to isolate and manipulate an individual molecule free from the influences of surrounding molecules.[17] Here, the magnetic bead is attached to a tethering surface by the molecule of interest. DNA or RNA may be tethered in either single-stranded or double-stranded form, or entire structural motifs can be tethered, such as DNA Holliday junctions, DNA hairpins, or entire nucleosomes and chromatin. By acting upon the magnetic bead with the magnetic field, different types of torsional force can be applied to study intra-DNA interactions,[21] as well as interactions with topoisomerases or histones in chromosomes.[17]

### Single-complex studies

Magnetic tweezers go beyond the capabilities of other single-molecule methods, however, in that interactions between and within complexes can also be observed. This has allowed recent advances in understanding more about DNA-binding proteins, receptor-ligand interactions,[18] and restriction enzyme cleavage.[17] A more recent application of magnetic tweezers is seen in single-complex studies. With the help of DNA as the tethering agent, an entire molecular complex may be attached between the bead and the tethering surface.
In exactly the same way as pulling a DNA hairpin apart by applying a force to the magnetic bead, an entire complex can be pulled apart, and the force required for the dissociation can be measured.[20] This is also similar to the method of pulling apart receptor-ligand interactions with magnetic tweezers to measure dissociation force.[18]

## Comparison to other techniques

This section compares the features of magnetic tweezers with those of the most important other single-molecule experimental methods: optical tweezers and atomic force microscopy. The magnetic interaction is highly specific to the superparamagnetic microbeads used; the magnetic field has practically no effect on the sample. Optical tweezers have the problem that the laser beam may also interact with other particles of the biological sample due to contrasts in the refractive index; in addition, the laser may cause photodamage and sample heating. In the case of atomic force microscopy, it may also be hard to discriminate the interaction of the tip with the studied molecule from other nonspecific interactions.

Thanks to the low trap stiffness, the range of forces accessible with magnetic tweezers is lower in comparison with the two other techniques. The possibility to exert torque with magnetic tweezers is not unique: optical tweezers may also offer this feature when operated with birefringent microbeads in combination with a circularly polarized laser beam. Another advantage of magnetic tweezers is that it is easy to carry out many single-molecule measurements in parallel.

An important drawback of magnetic tweezers is the low temporal and spatial resolution due to the data acquisition via video microscopy.[3] However, with the addition of a high-speed camera, the temporal and spatial resolution has been demonstrated to reach the angstrom level.[22]

## References

1. ^ a b c d Tanase, Monica; Biais, Nicolas; Sheetz, Michael (2007). "Chapter 20: Magnetic Tweezers in Cell Biology".
In Wang, Yu-li; Discher, Dennis E. (eds.). Cell Mechanics. Methods in Cell Biology. 83. Elsevier Inc. pp. 473–493. ISBN 978-0-12-370500-6.
2. ^ Lipfert, Jan; Hao, Xiaomin; Dekker, Nynke H. (June 2009). "Quantitative Modeling and Optimization of Magnetic Tweezers". Biophysical Journal. 96 (12): 5040–5049. Bibcode:2009BpJ....96.5040L. doi:10.1016/j.bpj.2009.03.055. ISSN 0006-3495. PMC 2712044. PMID 19527664.
3. ^ a b c d Neuman, Keir C; Nagy, Attila (June 2008). "Single-molecule force spectroscopy: optical tweezers, magnetic tweezers and atomic force microscopy". Nature Methods. 5 (6): 491–505. doi:10.1038/NMETH.1218. ISSN 1548-7091. PMC 3397402. PMID 18511917.
4. ^ Gosse, Charlie; Croquette, Vincent (June 2002). "Magnetic Tweezers: Micromanipulation and Force Measurement at the Molecular Level". Biophysical Journal. 82 (6): 3314–3329. Bibcode:2002BpJ....82.3314G. doi:10.1016/S0006-3495(02)75672-5. ISSN 0006-3495. PMC 1302119. PMID 12023254.
5. ^ Zacchia, Nicholas A.; Valentine, Megan T. (May 2015). "Design and optimization of arrays of neodymium iron boron-based magnets for magnetic tweezers applications". Review of Scientific Instruments. 86 (5): 053704. doi:10.1063/1.4921553. PMID 26026529.
6. ^ a b Vilfan, I. D.; Lipfert, J.; Koster, D. A.; Lemay, S. G.; Dekker, N. H. (2009). "Chapter 13: Magnetic Tweezers for Single-Molecule Experiments". In Hinterdorfer, Peter; van Oijen, Antoine (eds.). Handbook of Single-Molecule Biophysics. Springer. pp. 371–395. doi:10.1007/978-0-387-76497-9. ISBN 978-0-387-76496-2.
7. ^ de Groth, Barth G. (1999). "A simple model for Brownian motion leading to the Langevin equation". American Journal of Physics. 67 (12): 1248–1252. Bibcode:1999AmJPh..67.1248D. doi:10.1119/1.19111. ISSN 0002-9505.
8. ^ Haber, Charbel; Wirtz, Denis (December 2000). "Magnetic tweezers for DNA micromanipulation" (PDF). Review of Scientific Instruments. 71 (12): 4561–4570. Bibcode:2000RScI...71.4561H. doi:10.1063/1.1326056. ISSN 0034-6748.
9. ^ a b Strick, T.
R.; Allemand, J.-F.; Croquette, V. (April 1998). "Behavior of Supercoiled DNA". Biophysical Journal. 74 (4): 2016–2028. Bibcode:1998BpJ....74.2016S. doi:10.1016/S0006-3495(98)77908-1. ISSN 0006-3495. PMC 1299542. PMID 9545060.
10. ^ Heilbronn, A (1922). "Eine neue methode zur bestimmung der viskosität lebender protoplasten" [A new method for determining the viscosity of living protoplasts]. Jahrb. Wiss. Bot. 61: 284.
11. ^ a b c Kollmannsberger, Philip; Fabry, Ben (2007-11-01). "High-force magnetic tweezers with force feedback for biological applications". Review of Scientific Instruments. 78 (11): 114301–114301–6. Bibcode:2007RScI...78k4301K. doi:10.1063/1.2804771. ISSN 0034-6748. PMID 18052492.
12. ^ a b Crick, F.H.C.; Hughes, A.F.W. (1950). "The physical properties of cytoplasm". Experimental Cell Research. 1 (1): 37–80. doi:10.1016/0014-4827(50)90048-6.
13. ^ a b Valberg, P. A.; Albertini, D. F. (1985-07-01). "Cytoplasmic motions, rheology, and structure probed by a novel magnetic particle method". The Journal of Cell Biology. 101 (1): 130–140. doi:10.1083/jcb.101.1.130. ISSN 0021-9525. PMC 2113644. PMID 4040136.
14. ^ a b Valberg, P.A.; Feldman, H.A. (1987). "Magnetic particle motions within living cells. Measurement of cytoplasmic viscosity and motile activity". Biophysical Journal. 52 (4): 551–561. Bibcode:1987BpJ....52..551V. doi:10.1016/s0006-3495(87)83244-7. PMC 1330045. PMID 3676436.
15. ^ Smith, S. B.; Finzi, L.; Bustamante, C. (1992-11-13). "Direct mechanical measurements of the elasticity of single DNA molecules by using magnetic beads". Science. 258 (5085): 1122–1126. Bibcode:1992Sci...258.1122S. doi:10.1126/science.1439819. ISSN 0036-8075. PMID 1439819.
16. ^ Strick, T. R.; Allemand, J.-F.; Bensimon, D.; Bensimon, A.; Croquette, V. (1996-03-29). "The Elasticity of a Single Supercoiled DNA Molecule". Science. 271 (5257): 1835–1837. Bibcode:1996Sci...271.1835S. doi:10.1126/science.271.5257.1835. ISSN 0036-8075. PMID 8596951.
17. ^ a b c d De Vlaminck, Iwijn; Dekker, Cees (2012-05-11).
"Recent Advances in Magnetic Tweezers". Annual Review of Biophysics. 41 (1): 453–472. doi:10.1146/annurev-biophys-122311-100544. ISSN 1936-122X. PMID 22443989. 18. ^ a b c Danilowicz, Claudia; Greenfield, Derek; Prentiss, Mara (2005-05-01). "Dissociation of Ligand−Receptor Complexes Using Magnetic Tweezers". Analytical Chemistry. 77 (10): 3023–3028. doi:10.1021/ac050057+. ISSN 0003-2700. PMID 15889889. 19. ^ a b Bonakdar, Navid; Schilling, Achim; Spörrer, Marina; Lennert, Pablo; Mainka, Astrid; Winter, Lilli; Walko, Gernot; Wiche, Gerhard; Fabry, Ben (2015-02-15). "Determining the mechanical properties of plectin in mouse myoblasts and keratinocytes". Experimental Cell Research. 331 (2): 331–337. doi:10.1016/j.yexcr.2014.10.001. PMC 4325136. PMID 25447312. 20. ^ a b Min, Duyoung; Kim, Kipom; Hyeon, Changbong; Cho, Yong Hoon; Shin, Yeon-Kyun; Yoon, Tae-Young (2013-04-16). "Mechanical unzipping and rezipping of a single SNARE complex reveals hysteresis as a force-generating mechanism". Nature Communications. 4: 1705. Bibcode:2013NatCo...4.1705M. doi:10.1038/ncomms2692. ISSN 2041-1723. PMC 3644077. PMID 23591872. 21. ^ Berghuis, Bojk A.; Köber, Mariana; van Laar, Theo; Dekker, Nynke H. (2016-08-01). "High-throughput, high-force probing of DNA-protein interactions with magnetic tweezers". Methods. Single molecule probing by fluorescence and force detection. 105: 90–98. doi:10.1016/j.ymeth.2016.03.025. PMID 27038745. 22. ^ Lansdorp, Bob M.; Tabrizi, Shawn J.; Dittmore, Andrew; Saleh, Omar A. (April 2013). "A high-speed magnetic tweezer beyond 10,000 frames per second". Review of Scientific Instruments. 84 (4): 044301–044301–5. Bibcode:2013RScI...84d4301L. doi:10.1063/1.4802678. PMID 23635212.
https://www.physicsforums.com/threads/how-can-simpsons-rule-have-a-large-margin-of-error.381659/
# How can Simpson's Rule have a large margin of error?

1. Feb 25, 2010

### mathew350z

$$\int tan(x)dx$$ with the limits of 0 to 1.55, with n=10. Using Simpson's Rule, my answer was 4.923651704. But I don't understand why Simpson's Rule varies from my calculator's answer (TI-84 Plus), which is 3.873050987. I thought a higher n with Simpson's Rule would make the answer even more accurate?

2. Feb 25, 2010

### Count Iblis

The calculator seems to give you the exact answer, which is -Log[Cos(1.55)]. When you do numerical integration, you have to be careful when you are close to singularities. In this case 1.55 is close to pi/2, where tan diverges.

3. Feb 25, 2010

### mathew350z

Yes, I understand that tan(pi/2) diverges, but I still am having a little trouble as to why Simpson's Rule overestimates the actual answer.

4. Feb 25, 2010

### Count Iblis

I guess you have to do a detailed investigation of the error term in Simpson's rule, and study how it behaves near a 1/x-like singularity.
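For reference, a minimal composite Simpson implementation (a sketch, not code from the thread) reproduces both numbers quoted above:

```python
import math

def simpson(f, a, b, n):
    """Composite Simpson's rule with n (even) subintervals."""
    h = (b - a) / n
    s = f(a) + f(b)
    for i in range(1, n):
        s += (4 if i % 2 else 2) * f(a + i * h)
    return s * h / 3.0

approx = simpson(math.tan, 0.0, 1.55, 10)   # the poster's Simpson value
exact = -math.log(math.cos(1.55))           # the calculator's exact value
print(approx, exact)
```

The error term of Simpson's rule is proportional to the fourth derivative of the integrand, which blows up as x approaches the pole at pi/2; that is why the rule overshoots here and why increasing n helps only slowly.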
https://www.gradesaver.com/textbooks/math/precalculus/precalculus-6th-edition-blitzer/chapter-2-section-2-1-complex-numbers-exercise-set-page-314/26
# Chapter 2 - Section 2.1 - Complex Numbers - Exercise Set - Page 314: 26

$-\dfrac{12}{13} - \dfrac{18}{13}i$

#### Work Step by Step

Rationalize the denominator by multiplying both the numerator and the denominator by the conjugate of the denominator, which is $3-2i$, to obtain:

$=\dfrac{-6i(3-2i)}{(3+2i)(3-2i)} \\=\dfrac{-18i+12i^2}{(3+2i)(3-2i)}$

Use the rule $(a-b)(a+b) = a^2-b^2$ to obtain:

$=\dfrac{-18i+12i^2}{3^2-(2i)^2} \\=\dfrac{-18i+12i^2}{9-4i^2}$

Use the fact that $i^2=-1$ to obtain:

$=\dfrac{-18i+12(-1)}{9-4(-1)} \\=\dfrac{-18i-12}{9+4} \\=\dfrac{-12-18i}{13} \\=-\dfrac{12}{13} - \dfrac{18}{13}i$
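The result can be double-checked with Python's built-in complex arithmetic (a verification aid, not part of the textbook solution):

```python
# Divide -6i by (3 + 2i) directly and compare with the worked answer.
z = -6j / (3 + 2j)
expected = complex(-12 / 13, -18 / 13)
print(z)  # about -0.923 - 1.385j, i.e. -12/13 - (18/13)i
```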
https://www.physicsforums.com/threads/refractive-index.879465/
# Homework Help: Refractive Index

1. Jul 20, 2016

### Firben

1. The problem statement, all variables and given/known data

This is from an old exam. Humid air has a different refractive index than dry air: in the microwave region n'(dry) > n'(humid) and n''(dry) < n''(humid), where n' and n'' denote the real and imaginary part of the refractive index, respectively. How are the effects asked for above different (qualitatively) on dry and humid days?

2. Relevant equations

n = c/v

3. The attempt at a solution

What are they asking for? Is it how the inequalities change during dry/humid days?

2. Jul 20, 2016

### kq6up

It is asking for a qualitative answer. It is not asking for any solution to be worked out, but what the real and imaginary parts mean. Hint: Read about the complex refractive index in the Wikipedia article on refractive index.

Regards, KQ6UP

3. Jul 24, 2016

### rude man

What ARE they asking for? The problem states "... the effects asked for above ..." so there must have been something "above"?
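As an illustration of what the imaginary part means physically (the numbers below are made up, not from the exam): n'' controls the exponential attenuation of the wave, exp(-n''·omega·z/c) in field amplitude and hence exp(-2·n''·omega·z/c) in power, so humid air (larger n'') attenuates microwaves more over the same path:

```python
import math

# Assumed, illustrative n'' values for dry and humid air at 10 GHz.
C = 3.0e8                       # speed of light (m/s)
omega = 2.0 * math.pi * 10e9    # 10 GHz microwave, assumed
z = 1000.0                      # 1 km path

def transmitted_fraction(n_im):
    """Power transmitted after path z for imaginary index n_im."""
    return math.exp(-2.0 * n_im * omega * z / C)

dry, humid = 1e-9, 5e-9         # assumed values with n''(dry) < n''(humid)
print(transmitted_fraction(dry), transmitted_fraction(humid))
```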
http://phukienphunu.canhodangcapsg.com/a78sdx2/bny5ved.php?tynundghs=interval-calculator-symbolab
# Interval calculator symbolab

In interval notation, parentheses ( ) denote an open interval and are drawn as an open circle on the graph, while brackets [ ] denote a closed interval and are drawn as a closed circle. An interval is the set of all numbers lying between two given numbers. The free integral calculator solves indefinite, definite and multiple integrals: type in any integral to get the solution with free step-by-step working and a graph. There is also a confidence interval calculator — suppose, for example, you carried out a survey with 200 respondents — alongside hundreds of other calculators covering math, finance, fitness and health. Function composition is when you apply one function to the results of another function, and the power series calculator finds the radius and interval of convergence of a given power series. A descriptive statistics tool calculates and presents basic statistics for a sample data set, including minimum, maximum, sum, count, mean, median, mode, standard deviation and variance. For numerical integration, accuracy can be increased by dividing the integration interval into a few parts and computing the definite integral over each part separately with any integration rule.
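The subdivision idea — split the interval, integrate each piece, sum the parts — can be sketched in a few lines. This is a generic illustration, not Symbolab's implementation; `midpoint_rule` is my own helper name.

```python
# Composite midpoint rule: subdivide [a, b] into n pieces, integrate each piece
# separately, and sum the partial results. More pieces means more accuracy.
def midpoint_rule(f, a, b, n):
    h = (b - a) / n
    return sum(f(a + (i + 0.5) * h) * h for i in range(n))

# Integrate f(x) = x^2 on [0, 2]; the exact value is 8/3.
approx_coarse = midpoint_rule(lambda x: x * x, 0.0, 2.0, 4)
approx_fine = midpoint_rule(lambda x: x * x, 0.0, 2.0, 1000)
```

With 1000 subintervals the approximation is far closer to 8/3 than with 4, which is the whole point of subdividing.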
The independent-samples calculator compares the means of two independent groups to determine whether they are significantly different from one another. If c is the mid-point of the interval [a, b] used in the bisection method, it is defined as c = (a + b)/2. The mean value theorem expresses the relationship between the slope of the tangent to the curve at an interior point and the slope of the secant line through the interval's endpoints. You can easily enter an expression using the buttons on the online calculator, or use the default example to speed up the process. A music Interval Calculator gives you a quick interval reference at your fingertips, while the Integral Calculator supports definite and indefinite integrals (antiderivatives) as well as integration of functions of several variables. Caution: changing the input format will erase your data. An interval is a set of real numbers, positive or negative. In the exponential growth model, x₀ is the initial value at time t = 0. Definite integrals can also be approximated with the Trapezoidal Rule, the Midpoint Rule, and Simpson's Rule for a specified number of subintervals n.
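The page defines x₀ as the initial value at t = 0 (with the rate r and value x(t) defined later), but the growth formula itself did not survive extraction. The standard discrete model those symbols usually describe is x(t) = x₀(1 + r)^t — shown here as an assumption, with `exponential_growth` as my own name:

```python
# Discrete exponential growth/decay: value after t periods with per-period
# rate r (r > 0 grows, r < 0 decays), starting from x0 at time t = 0.
def exponential_growth(x0, r, t):
    return x0 * (1 + r) ** t

# 1000 units growing 5% per period for 10 periods.
value = exponential_growth(1000.0, 0.05, 10)
```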
In set-builder form an open interval (a, b) can be written as {x : a < x < b}. The free definite integral calculator solves definite integrals with all the steps, and an area-under-the-curve calculator finds the area under a function step by step. The idea behind the Intermediate Value Theorem is this: when two points are connected by a continuous curve, every value between the endpoint values is attained somewhere in between — so, yes, there is a solution to x^5 - 2x^3 - 2 = 0 in the interval [0, 2]. (An interesting aside: the Intermediate Value Theorem can even fix a wobbly table.) A step-by-step solver is available for binomial coefficients, and the calculator can approximate an integral using the Trapezoidal Rule, with steps shown. The QTc calculator is a handy health tool that estimates the corrected QT interval from the heart rate, expressed in beats per minute, and the QT interval, expressed in seconds or milliseconds. Imagine that you have to find the average of y = f(x): because a function takes infinitely many values on an interval, you have to calculate it in a different way, averaging it over the interval from a to b — the calculator finds the average value of the function on the given interval, with steps shown. Other tools include an outlier calculator, a functions domain calculator, and Symbolab Math Solver, which is composed of over a hundred calculators, including the Integral Calculator and the Derivative Calculator. In the growth model, t is the time in discrete intervals and selected time units.
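The Intermediate Value Theorem claim about x^5 - 2x^3 - 2 = 0 on [0, 2] reduces to a sign check at the endpoints, which is easy to verify directly:

```python
# IVT in action: f is continuous and changes sign on [0, 2],
# so a root of x^5 - 2x^3 - 2 = 0 must lie inside the interval.
def f(x):
    return x**5 - 2 * x**3 - 2

endpoint_values = (f(0), f(2))   # (-2, 14): opposite signs
```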
How to use the corrected QT calculator: enter the measure of the QT interval (from the beginning of the QRS complex to the end of the T wave) and the heart rate from the electrocardiogram. For the summary-data confidence interval calculator, enter the mean, N and SEM. These free statistics calculators may be used by scientists, researchers, students, or any other curious or interested party. For more about how to use the Integral Calculator, go to "Help" or take a look at the examples. Example 2: find the solution of the inequality 2x + 4 < 20 and write the solution in interval notation — subtracting 4 gives 2x < 16, and dividing by 2 gives x < 8, so the answer is (-∞, 8). A solution set consisting of all values greater than 5 is likewise written in interval notation as (5, ∞). For a function such as f(x) = x³, the second derivative is f''(x) = 6x; setting 6x = 0 gives the candidate inflection point x = 0. The t-test calculator will perform the test for every possible combination of groups, and a related tool reports an E-value — the minimum amount of unmeasured confounding needed to move the estimate and confidence interval to your specified true value.
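The Example 2 inequality can be checked mechanically; `satisfies` is just an illustrative name, not part of any calculator:

```python
# Solving 2x + 4 < 20: subtract 4 (2x < 16), then divide by 2 (x < 8).
# In interval notation the solution set is (-inf, 8).
def satisfies(x):
    return 2 * x + 4 < 20

checks = (satisfies(7.999), satisfies(8), satisfies(9))   # inside, boundary, outside
```

The boundary point x = 8 fails because 2·8 + 4 = 20 is not strictly less than 20, which is why the interval is open on the right.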
To locate possible inflection points, find the second derivative and calculate its roots. The interval notation tool finds the numerical values, inequality, interval and set-builder notation from the interval values you enter. The Integral Calculator computes an indefinite integral (antiderivative) of a function with respect to a given variable using analytical integration; when a definite integral is subdivided, the final value is the sum of the integrals over the partial intervals. An online confidence-limits-for-the-mean calculator finds the lower and upper confidence limits for the given confidence level. The corrected QT interval (QTc) adjusts the QT interval for heart-rate extremes. An Excel spreadsheet can be used to calculate confidence intervals for a mean, the difference between two means, a proportion or odds, comparisons of two proportions (the absolute risk reduction, number needed to treat, relative risk, relative risk reduction and odds ratio), and sensitivity. A music interval calculator computes the interval — the distance from one note to another — and a scientific notation converter handles scientific notation calculations. The displacement calculator finds the displacement (distance traveled) of an object from its initial and final velocities and the time traveled. The group-comparison tool accepts information for as many as 10 independent groups at a time, and a proportion calculator finds the confidence interval (or accuracy) of a proportion given a survey's sample size and results, for a chosen confidence level.
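The displacement computation described here can be sketched directly. The average-velocity shortcut (v₀ + v₁)/2 assumes constant acceleration — an assumption of mine, since the page does not state it; `displacement` is my own name:

```python
# Displacement from initial velocity v0, final velocity v1 and elapsed time t,
# assuming constant acceleration so the average velocity is (v0 + v1) / 2.
def displacement(v0, v1, t):
    return (v0 + v1) / 2 * t

d = displacement(0.0, 10.0, 4.0)   # accelerating from rest to 10 m/s over 4 s
```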
Symbolab's "getting started" series is moving on to help you solve high-school-level algebra and calculus. The Music Interval Calculator is, and always will be, free. The series calculator is capable of computing sums over finite, infinite and parametrized sequences. (Show instructions: in general you can skip the multiplication sign, so 5x is equivalent to 5*x.) Another calculator computes the 99%, 95% and 90% confidence intervals for a regression coefficient, given the value of the coefficient and its standard error, and a step-by-step calculator finds the domain of a function. A set calculator finds A intersect B (A∩B), the elements commonly present in both sets. If you check the real value at x = 1.02 on your calculator, you will find that the approximation is off by a little over 10 — although this may seem like a large number, bear in mind that this function's rate of increase is astronomical, so a difference of 10 is quite understandable. The confidence interval calculator designed for sampling population proportions can be used with any arbitrary confidence level, and in cases where a series cannot be reduced to a closed-form expression, an approximate answer can be obtained using the definite integral calculator.
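Summing a sequence and comparing the partial sums to a known limit shows what the series tools do under the hood. The example series Σ 1/n² → π²/6 is my own choice for illustration:

```python
import math

# Partial sums of the convergent series sum(1/n^2), which tends to pi^2 / 6.
def partial_sum(n):
    return sum(1.0 / k**2 for k in range(1, n + 1))

s = partial_sum(100_000)
limit = math.pi**2 / 6
```

The partial sum approaches the limit from below at roughly rate 1/n, so 100,000 terms land within about 10⁻⁵ of π²/6.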
The Pearson correlation coefficient is used to measure the strength of a linear association between two variables, where r = 1 means a perfect positive correlation and r = -1 means a perfect negative correlation. If a function is defined on the half-line [a, ∞) and integrable on any interval [a, b], the limit of its integral as b → ∞ is called the improper integral of the first kind of the function from a to ∞. The calculator also allows you to draw graphs of the function and its integral, and a domain can be described as a collection of closed and open intervals. The Function Analysis Calculator computes critical points, roots and other properties at the push of a button. Use a t-interval if the population standard deviation is unknown and either the population is normally distributed or the sample size is larger than 30. Step-by-step tools also find a function's monotone intervals, asymptotes, critical points, inflection points and extreme points, and single-sample confidence interval calculators are available. The music tool can invert intervals, give information about simple and compound intervals, and display the interval for a specified starting note, interval type, and key. Given an inequality such as 2x + 4 < 20, the Interval Notation Calculator expresses its solution as an interval. The Poisson distribution calculator is a discrete probability and statistics tool that finds the probability of a given number of events occurring in a fixed interval of time, with respect to the known average rate at which events occur.
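The Poisson probability described here has the closed form P(k) = λ^k e^(−λ) / k!, which matches the later definition of λ as the expected number of occurrences in the interval; `poisson_pmf` is my own helper:

```python
import math

# Probability of observing exactly k events in a fixed interval when the
# average rate is lam events per interval: P(k) = lam^k * e^(-lam) / k!
def poisson_pmf(k, lam):
    return lam**k * math.exp(-lam) / math.factorial(k)

p_zero = poisson_pmf(0, 1.0)   # chance of zero events when lam = 1 is e^-1
```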
(-∞, -1] is a half-open interval: the left value -∞ is not included, while the right value -1 is included; a graph of the solution can be viewed in interval notation. The bisection method in mathematics is a root-finding method that repeatedly bisects an interval and then selects the subinterval in which a root must lie for further processing. Use the normal-distribution calculator to find the area P under the curve, as well as the confidence intervals for a range of confidence levels; the standard deviation and mean are entered alongside. With the help of integration you can find the area between a graph and the x-axis, and you can restore an initial function from its derivative. The average velocity of an object multiplied by the time traveled gives its displacement. The Fourier Series Calculator is an online utility: simply enter your function — if it is piecewise, enter each of the parts — and it calculates the Fourier coefficients, representing up to 20 of them.
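The bisection definition above, combined with the mid-point formula c = (a + b)/2 given earlier, is a complete algorithm; here it is run on the same equation x^5 - 2x^3 - 2 = 0 on [0, 2] used in the Intermediate Value Theorem example (`bisect` is my own name, not a library routine):

```python
# Bisection: repeatedly halve [a, b], keeping the half where f changes sign.
def bisect(f, a, b, tol=1e-10):
    assert f(a) * f(b) < 0, "f must change sign on [a, b]"
    while b - a > tol:
        c = (a + b) / 2            # the mid-point, c = (a + b) / 2
        if f(a) * f(c) <= 0:
            b = c                  # root lies in the left half
        else:
            a = c                  # root lies in the right half
    return (a + b) / 2

root = bisect(lambda x: x**5 - 2 * x**3 - 2, 0.0, 2.0)
```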
Other statistics tools include a z-score-to-percentile calculator, a sample-size calculator for discovering problems in a user interface, and a graph-and-calculator tool for confidence intervals for task times. Example 1: calculate the Riemann sum for y = x^2 over the interval [0, 2] using 4 equal subintervals. There are also a confidence interval calculator for the population mean (when the population standard deviation is known), a cumulative-area calculator for the standard normal curve, a cumulative distribution function (CDF) calculator for the normal distribution, and a binomial theorem calculator that computes the expansion (series) of a binomial. To use the confidence interval calculator, simply enter the sample size used to collect the data along with the actual population size. An integral can be approximated with (a) the Trapezoidal Rule, (b) the Midpoint Rule, or (c) Simpson's Rule, rounding answers to six decimal places; integration is the inverse of differentiation. Whereas a confidence interval describes a likely range of values, a point estimate describes a single value — a point — as an estimate of an unknown parameter in the population.
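Example 1 can be computed directly. The page does not say which endpoint rule its Riemann tool uses, so both the left and right sums are shown; `riemann` is my own helper:

```python
# Riemann sums for y = x^2 on [0, 2] with 4 equal subintervals (h = 0.5).
def riemann(f, a, b, n, rule="left"):
    h = (b - a) / n
    offset = {"left": 0.0, "mid": 0.5, "right": 1.0}[rule]
    return sum(f(a + (i + offset) * h) * h for i in range(n))

square = lambda x: x * x
left_sum = riemann(square, 0, 2, 4, "left")     # (0 + 0.25 + 1 + 2.25) * 0.5
right_sum = riemann(square, 0, 2, 4, "right")   # (0.25 + 1 + 2.25 + 4) * 0.5
```

The exact integral, 8/3, lies between the two sums because x² is increasing on [0, 2].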
You can use the Z-Score Calculator to calculate the standard normal score (z-score) from the raw score, the population mean, and the population standard deviation. In the growth model, r is the growth rate when r > 0, or the decay rate when r < 0, expressed in percent. The scientific notation calculator supports conversion from scientific notation to decimal, and vice versa. To classify monotonicity, substitute a value from the interval into the derivative to determine whether the function is increasing or decreasing there; a free functions calculator explores a function's domain, range, intercepts, extreme points and asymptotes step by step. A confidence interval is an interval (in the sense of interval estimators) with the property that it is very likely to contain the population parameter, and this likelihood is measured by the confidence level. Population-proportion intervals estimate the share of a population possessing a particular property — e.g., they like your product, they own a car, or they can speak a second language. A free trigonometric equation calculator solves trigonometric equations step by step, and a functions extreme points calculator finds extreme and saddle points. The Corrected QT Interval (QTc) adjusts the QT interval for heart-rate extremes.
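The z-score computed from those three inputs is the standard formula z = (x − μ)/σ; `z_score` is my own name:

```python
# Standard normal score: how many standard deviations x lies from the mean.
def z_score(x, mu, sigma):
    return (x - mu) / sigma

z = z_score(110.0, 100.0, 10.0)   # one standard deviation above the mean
```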
Use the confidence interval calculator to calculate confidence bounds for a one-sample statistic, or for differences between two proportions or means (two independent samples). Quick review: integration by parts is essentially the reverse of the product rule. If f''(x) > 0 on an interval, the function is convex there. The confidence interval is the range of possible values that we believe the population mean could take, based on a desired confidence level. Count data arise naturally: we might, for example, ask how many customers visit a store each day, or how many home runs are hit in a season of baseball. To find absolute extrema, compare the values found at each candidate point in order to determine the absolute maximum and minimum over the given interval. One calculator finds a function root using the bisection (interval-halving) method, and a free inequality calculator solves linear, quadratic and absolute-value inequalities step by step — first up in the series is solving high-school-level inequalities, that is, quadratic inequalities and inequalities involving algebraic fractions, which is a lot like solving simple inequalities. An outlier in a distribution is a number that is more than 1.5×IQR below the first quartile or above the third quartile. Please remember that a computed indefinite integral belongs to a class of functions F(x) + C, where C is an arbitrary constant. Determining the corrected QT interval allows practitioners to compare QT values over time at a variety of heart rates. For the Trapezoidal Rule, enter a function, a starting point, an ending point, and the number of divisions to use.
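The 1.5×IQR outlier rule can be applied with the standard library; `iqr_outliers` is my own helper, and the sample data reuses the comma-separated example values given later on the page plus one obvious outlier:

```python
import statistics

# 1.5 * IQR rule: values below Q1 - 1.5*IQR or above Q3 + 1.5*IQR are outliers.
def iqr_outliers(data):
    q1, _, q3 = statistics.quantiles(data, n=4, method="inclusive")
    iqr = q3 - q1
    lo, hi = q1 - 1.5 * iqr, q3 + 1.5 * iqr
    return [x for x in data if x < lo or x > hi]

outliers = iqr_outliers([1, 2, 4, 7, 7, 10, 2, 4, 5, 100])
```

Note that quartile conventions vary between tools; `method="inclusive"` matches the common spreadsheet convention.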
High-school math solutions also cover derivatives of products and quotients; in the previous post we covered the basic derivative rules. A free probability calculator can calculate the probability of two events, as well as probabilities under a normal distribution. The Average Rate of Change Calculator makes these calculations simple. To classify concavity, choose a value in each interval and determine the sign of the second derivative there. For example, to find the critical points and the intervals of increase and decrease of the function f(x) = (x+5)²(x-2)⁵: the critical points are -5, -3 and 2, giving four intervals to examine from left to right. A 2-lines intersection calculator takes two line equations and finds their crossing point. Sample size terms: the confidence interval is the plus-or-minus figure usually reported in newspaper or television opinion poll results, and a 99.9% confidence level will yield the largest range of all the confidence intervals. For integrals involving radicals, we want to get rid of the square root. The Interval Notation Calculator represents the interval in terms of an inequality and plots its graph on a number line, and an inequality calculator solves any inequality with steps and a graph, rounding numeric answers to six decimal places where needed.
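For f(x) = (x+5)²(x-2)⁵, the derivative factors as f'(x) = 7(x+5)(x+3)(x-2)⁴ (by the product rule and factoring), which confirms the critical points -5, -3, 2 and lets us test one point inside each of the four intervals:

```python
# Sign of f'(x) between the critical points -5, -3, 2 of f(x) = (x+5)^2 (x-2)^5.
def fprime(x):
    return 7 * (x + 5) * (x + 3) * (x - 2) ** 4

test_points = [-6, -4, 0, 3]   # one sample inside each interval
signs = ["+" if fprime(x) > 0 else "-" for x in test_points]
# f is increasing on (-inf, -5), decreasing on (-5, -3),
# and increasing on (-3, 2) and (2, inf); x = 2 is not a sign change.
```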
A function basically relates an input to an output. The Independent Samples Confidence Interval Calculator lists its own sources and external resources for further reading on confidence intervals. Advanced integral techniques include trigonometric substitution: in previous posts we covered substitution, but standard substitution is not always enough. Interval notation represents a pair of numbers as an inequality — and learning math takes practice, lots of practice. The Double Integrals Calculator solves double integrals step by step, and the single-sample confidence interval calculator using the Z statistic uses a Z statistic and sample mean (M) to generate an interval estimate of a population mean (μ). An online algebra calculator computes the union of two sets, A ∪ B. Any set of numbers lying between two given real numbers is an interval, as is the set of all real numbers itself. Another worked example finds the radius and interval of convergence of a power series. Note that for the Fourier tools the function must lie in the space of integrable functions, L¹, on the selected interval, as shown in the theory sections. A functions asymptotes calculator finds vertical and horizontal asymptotes step by step. So how does the QTc calculator work?
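The set operations mentioned here — union A ∪ B and, elsewhere on the page, intersection A ∩ B — map directly onto built-in Python sets (an illustration, not any calculator's implementation):

```python
# Union and intersection of two finite sets.
A = {1, 2, 3, 4}
B = {3, 4, 5}

union = A | B          # every element in A or B
intersection = A & B   # elements commonly present in both sets
```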
This is a handy health tool that can estimate the corrected QT interval from the heart rate, expressed in beats per minute, and the QT interval, expressed in seconds or milliseconds. You can also compute confidence intervals around continuous data using either raw or summary data. For the music interval tool, the two notes may be no more than 12 half-steps (one octave) apart, to allow a calculation. Specifically, a number less than Q1 − 1.5×IQR or greater than Q3 + 1.5×IQR counts as an outlier. Let's take a look at two other calculus examples that the TI-Nspire's Calculus Made Easy app and the Symbolab calculator solve step by step. A free critical points calculator finds a function's critical and stationary points, and another calculator computes confidence intervals for a sum, difference, quotient or product of two means, assuming both groups follow a Gaussian distribution. The Integral Calculator supports definite and indefinite integrals (antiderivatives) as well as integration of functions of several variables. To recalculate, please change one or two values and click the corresponding button.
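The page never states which correction formula its QTc tool applies. A common choice is Bazett's formula, QTc = QT / √RR with RR = 60/HR in seconds — shown here purely as an assumption, with `qtc_bazett` as my own name:

```python
import math

# Bazett's correction (assumed, not confirmed by the page):
# QTc = QT / sqrt(RR), where RR = 60 / heart_rate is the beat interval in seconds.
def qtc_bazett(qt_seconds, heart_rate_bpm):
    rr = 60.0 / heart_rate_bpm
    return qt_seconds / math.sqrt(rr)

qtc = qtc_bazett(0.38, 75)   # a QT of 380 ms measured at 75 bpm
```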
Please note that a safe confidence interval is usually set to be between 3 and 5%. The simple confidence interval calculator uses a Z statistic and sample mean (M) to generate an interval estimate of a population mean (μ). If f''(x) < 0 on an interval, the function is concave there. Welcome to version 4.0 of the Free Statistics Calculators. There are different types of interval representation — closed, open, half-open and half-closed — and the Interval Notation Calculator (set-builder notation calculator) finds the inequality and graphs the interval on a number line. The QTc calculator is aimed at determining the corrected QT interval. The confidence interval calculator takes the standard deviation and divides it by the square root of the sample size, according to the formula σ_x = σ/√n, then scales by the critical value for the chosen confidence level (each tail holds probability 0.025 for a 95% confidence interval). In general, you can skip the multiplication sign, so 5x is equivalent to 5*x. If in the process of doing your homework you realize that you can't cope with an integral problem, the step-by-step Integral Calculator will come in handy. Put simply, the confidence interval is an estimate of the accuracy of an estimate obtained from a sample of a given population.
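The σ/√n formula turns into a complete 95% interval once multiplied by the standard normal critical value 1.96; `ci_mean` is my own helper, and the example numbers are mine:

```python
import math

# 95% confidence interval for a mean: M +/- z * sigma / sqrt(n), with z = 1.96.
def ci_mean(mean, sigma, n, z=1.96):
    half_width = z * sigma / math.sqrt(n)
    return mean - half_width, mean + half_width

lo, hi = ci_mean(100.0, 15.0, 36)   # standard error = 15 / 6 = 2.5
```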
This simple confidence interval calculator uses a t statistic and two sample means (M1 and M2) to generate an interval estimate of the difference between two population means (μ1 and μ2); one- and two-sided intervals are supported. Sample size terms: the confidence interval (also called the margin of error) is the plus-or-minus figure usually reported with poll results. The Duration Calculator calculates the number of days, months and years between two dates. A free power series calculator finds the convergence interval of a power series step by step, and there is a dedicated confidence interval calculator for proportions: use it to determine a confidence interval for your sample proportion when estimating the share of your population that possesses a particular property. Caution: you should really use the standard deviation of the entire population, but you can use the standard deviation of your sample if you have enough observations — at least n = 30, and the more the better. The Definite Integral Calculator computes the definite integral of a function over an interval using numerical integration, and another calculator finds the average rate of change of a function on an interval, with steps shown. In the confidence limits calculator, enter the confidence level (which ranges from 90% to 99%), the sample size, the mean and the standard deviation to obtain the lower and upper confidence limits; interactive graphs and plots help visualize and better understand the functions. λ is a positive real number equal to the expected number of occurrences in the given interval. Symbolab is an online tool for equation search and math solving, and the music Interval Calculator displays the interval for a specified starting note, interval type, and key.
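The average rate of change mentioned here is simply the slope of the secant line, (f(b) − f(a)) / (b − a); `average_rate_of_change` is my own name:

```python
# Average rate of change of f on [a, b]: the slope of the secant line.
def average_rate_of_change(f, a, b):
    return (f(b) - f(a)) / (b - a)

rate = average_rate_of_change(lambda x: x * x, 1.0, 3.0)   # (9 - 1) / 2
```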
Here is a little background on the uniform distribution so you can better use the probability calculator presented above: the uniform distribution is a continuous probability distribution that takes random values on an interval [a, b] and is zero outside that interval. A confidence interval can be used to estimate the accuracy of your data, and there is a dedicated confidence interval calculator for the population mean. A closed interval is represented as [a, b], and a Riemann Sum Calculator is available. For example, given a group of 15 footballers, there are exactly C(15, 11) = 1365 ways we can form a football team of 11. With the Symbolab Math Solver app you can build equations and see answers to a wide variety of math problems, and the Double Integrals Calculator solves double integrals step by step. Use the Standard Deviation Calculator to calculate your sample's standard deviation and mean. The interval in question could be anything — a unit of time, length, volume, and so on. The proportion calculator finds the confidence interval (or accuracy) of a proportion given a survey's sample size and results, for a chosen confidence level. Enter 4 or more values (up to 5000), separated by commas, such as 1, 2, 4, 7, 7, 10, 2, 4, 5.
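The footballer example checks out with the standard library's binomial coefficient (Python 3.8+):

```python
import math

# Number of distinct 11-player teams that can be formed from 15 footballers.
teams = math.comb(15, 11)   # C(15, 11) = 1365
```

Choosing the 11 players to include is the same as choosing the 4 to leave out, so C(15, 11) = C(15, 4).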
We will show examples of square roots Indefinite Integral Calculator - Symbolab - Download as PDF File (. Have feedback about this calculator? About the Creator. confidence interval of the mean calculator. Interval Calculator computes the date and time at the end of an interval entered by the user. Advanced Math Solutions – Integral Calculator, advanced trigonometric functions. Calculus Examples. Online algebra calculator that calculates the intersection of two sets ie. This simple confidence interval calculator uses a t statistic and two sample means ( M1 and M2) to generate an interval estimate of the difference between two population means (μ 1 and μ 2 ). One and two-sided intervals are supported, as well as …Sample Size Calculator Terms: Confidence Interval & Confidence Level The confidence interval (also called margin of error) is the plus-or-minus figure usually reported in …The Duration Calculator calculates the number of days, months and years between two dates. ) is pointless if the only use you get out of Free power series calculator - Find convergence interval of power series step-by-stepIntegral Calculator If in the process of doing your homework you realize that you can't cope with an integral problem, this step-by-step calculator will come in hand. Confidence Interval Calculator for Proportions. That is because \$$\binom {n} {k} \$$ is equal to the number of distinct ways \$$k\$$ items can be picked from n items. The first step in iteration is to calculate the mid-point of the interval [ a, b ]. Related Symbolab blog posts Intermediate Math Solutions – Functions Calculator, Function Composition Function composition is when you apply one function to the results of another function. The function is evaluated at ‘c’, which means f(c) is calculated. 
Formula to calculate average value of a function is given by: Enter the average value of f(x), value of interval a and b in the below online average value of a function calculator and then click calculate button to find the output with steps. The method is also called the interval halving method. Solution can be expressed either in radians or degrees. (Steps require a one-time in-app purchase. Then function defined on the half-line and integrable on any interval The limit of the integral and is called the improper integral of the first kind of function a to and . Confidence Interval Calculator for Proportions. Ruck Calculator July 27, 2015 Exercises SSD Assessment Distance: 1 Mile 1. Scientific notation calculator. There are different types of interval they are open interval,Power Series Calculator Find convergence interval of power series step-by-step. Enter the raw score value; Enter the mean, standard deviation, and click the "Calculate" button to see the results. 1 day, 15 hours ago. Related Symbolab blog posts Advanced Math Solutions – Integral Calculator, common functions In the previous post we covered the basic integration rules (click here). Power Series Calculator Find convergence interval of power series step-by-step. The first main topic of study in a Calculus class are limits. Jul 02, 2011 · Interval and Radius of Convergence for a Series, Ex 4. Learn more about how the half-life formula is used, or explore hundreds of other math, finance, fitness, and health calculators. x(t) is the value at time t. Open interval, represented as (a, b). Follow @symbolab. Free Ellipse Area calculator - Calculate ellipse area given equation step-by-step Interval Notation Calculator (set builder notation calculator) is used to find the inequality and graph on a number line for the given interval. Substitute a value from the interval into the derivative to determine if the Calculator Project. $3. Added Oct 18, 2012 in Mathematics. Exponential growth/decay formula. 
Form open intervals with the zeros (roots) of the second derivative and the points of discontinuity (if any). Polynomial graphing calculator This page help you to explore polynomials of degrees up to 4. https://www. This simple confidence interval calculator uses a t statistic and sample mean (M) to generate an interval estimate of a population mean (μ). It's just a lot simpler! Let's look at the intervals we did with the set-builder notation:Intermediate Value Theorem. where the function is equal to 0). E-value calculator. Sliders are provided to move either or . Here is a little bit of information about the uniform distribution probability so you can better use the the probability calculator presented above: The uniform distribution is a type of continuous probability distribution that can take random values on the the interval $$[a, b]$$, and it zero outside of this interval. Interval Notation Calculator - Hello beloved visitor. This calculator computes the output values of poisson and cumulative poisson distribution with respect to the input values of average rate of Series Calculator computes sum of a series over the given interval. Calculator Use Calculate and present basic descriptive statistics for a sample data set including minimum, maximum, sum, count, mean, median, mode, standard deviation and variance. Free Circle calculator - Calculate circle area, center, radius and circumference step-by-step This trigonometric equations solver will find exact or approximate solutions on custom range. How to use the calculator. Integral Free math problem solver answers your algebra, geometry, trigonometry, calculus, and statistics homework questions with step-by-step explanations, just like a math tutor. In music theory, an interval is the difference between two pitches. The Average Rate of Change Calculator an online tool which shows Average Rate of Change for the given input. 
Confidence Interval for the Mean VideoA simple online calculator to find the average rate of change of a function over a given interval. It can calculate and graph the roots (x-intercepts), signs , Local Maxima and Minima , Increasing and Decreasing Intervals , Points of Inflection and Concave Up/Down intervals . Step-by-Step Examples. interval notation calculator, inequality calculator,absolute value interval notation,compound inequality, set builder notation calculator Confidence Interval Calculator. For example,$1 < x \leq 7$, interval are all the numbers that lies from 1 to 7. To find Confidence Intervals of Proportions using Statistics Made Easy at symbolab (1) Taylor Series (1) calculator (3) calculator app (7) On-Line Fourier Series Calculator is an interactive app to calculate Fourier Series coefficients (Up to 10000 elements) for user-defined piecewise functions up to 5 pieces, for example. In cases where you might need service with math and in particular with free interval notation solver or monomials come pay a visit to us at Rational-equations. See More Examples » Online exponential growth/decay calculator. 49The calculator can also convert between half-life, mean lifetime, and decay constant given any one of the three values. Free online statistics calculator that helps you to determine the number of experimental units required for the survey with the given confidence interval, level and population size. Step 3: Create three intervals We are able to pick three intervals from looking at the number and seeing where the function crosses the x-axis (i. Interval can be positive and negative numbers. To perform this calculation you need to know your two sample means, The QT interval correction calculates the corrected QT interval using a patients heart rate in beats per minute and QT interval in milliseconds or seconds. Radius and Interval of Convergence Calculator. 
This calculator generate the output value of confidence interval according to the respective input values of confidence level, sample size, population and percentage. Related Symbolab …Free integral calculator - solve indefinite, definite and multiple integrals with all the steps. Step 4: Pick a number in each inequality and see if it satisfies the original inequalityThe Integral Calculator supports definite and indefinite integrals (antiderivatives) as well as integrating functions with many variables. 11/1/2015 integralx^2(2x^3+3 Integrals, Derivatives, Equations, Limits and much more. Integral Applications Calculator Find integral application solutions step-by-step. Free absolute value inequality calculator - solve absolute value inequalities with all the steps. 5 times the length of the box away from either the lower or upper quartiles. 9 Biases That Affect Survey Responses. Limit calculator This is a calculator which computes the limit of a given function at a given point. Instructions: Use this Confidence Interval Calculator to compute a confidence interval for the population mean $$\mu$$, in the case that the population standard deviation $$\sigma$$ is known. 5 times the length of the box away from either the lower or upper quartiles. Applications of Differentiation. com is the ideal place to head to! Calculate Five number summary A five number summary consists of these five statistics: the minimum value, the first quartile, the median, the third quartile, and the maximum value of a set of numbers (data). For example, (a, b) is an interval whose notation can be given as a x b. All value may be different but they represent a same quantity an approximated area under the curve. com Solve-variable. Free integral calculator - solve indefinite, definite and multiple Free Circle Area calculator - Calculate circle area given equation step-by-step Integral calculator symbolab. DavidMonteroDelaCruz. 
For example,$1 < x \leq 7\$ , interval are all the numbers that lies from 1 to 7. Interval Notation Calculator. Equation Calculator. Radical equations are equations involving radicals of any order. Middle School Math Improper integrals calculator is the instant online tool which can quickly evaluate an improper integral. Even though derivatives are fairly straight forward, integrals areRelated Symbolab blog posts. This confidence interval calculator estimates the margin of error/accuracy of a survey by considering its sample & population sizes and a given percentage of choosing specific choice. n: k: Result. Follow the steps below to calculate the confidence interval for your data. A function basically relates an input to an output, there’s an Interval Notation Calculator (set builder notation calculator) is used to find the inequality and graph on a number line for the given interval. Confidence Interval Calculator is an online Probability and Statistics tool for data analysis programmed to calculate the statistical accuracy of a survey-based estimate. One and two-sided intervals are supported, as well as …If possible distribute this interval notation calculator picture at buddies, family via google plus, facebook, twitter, instagram or any other social media site. We can calculate Riemann sum with various approaches. Free functions Monotone Intervals calculator - find functions monotone intervals step-by-step. Free Ellipse Area calculator - Calculate ellipse area given equation step-by-stepFree Ellipse Area calculator - Calculate ellipse area given equation step-by-stepFree Hyperbola Eccentricity calculator - Calculate hyperbola eccentricity given equation step-by-stepTool to compute the mean of a function in order to find the average value of its integral over a given interval. Compounding of interest Compound interest is the concept of adding accumulated interest back to the principal sum, so that interest is earned on top of interest from that moment on. 
Interval Calculator. A confidence interval is an indicator of your measurement's precision. Enter mean, N and SD. Short QTc Interval: Less than 340ms is accepted as pathological. The average velocity calculator solves for the average velocity using the same method as finding the average of any two numbers. Independent Samples Confidence Interval Calculator. Definite integral could be represented as the signed area in the XY-plane bounded by the function graph as shown on the image below. The QTc calculator is aimed at determining the corrected QT interval. High School Math Solutions – Inequalities Calculator, Absolute Value Inequalities Part I. formula. Should you wish to calculate without compounding, give the simple interest calculator a try. splits), enter cumulative times in the format: Arbitrary zeros (and the inability to calculate ratios because of it) are one reason why the ratio scale — which does have meaningful zeros — is sometimes preferred. The Interval Calculator is a valuable tool for anyone studying music theory. Ch12 Multiple Integrals. Symbolab consists of most powerful calculators which includes Symbolab Math solver, Step by step calculator, Integral Calculator, Calculus, Derivative calculator Calculator Use. How to Find Open Intervals of A Function? | Math Math. pdf), Text File (. over the interval $$[a,b]$$ Example: Calculate the mean of the function $$f(x) = x$$ This trigonometric equations solver will find exact or approximate solutions on custom range. Author: Jake Binnema. For example, (a, b) is an interval whose notation can be given as a < x < b. Simple one-variable integral calculator. Search a tool on dCode by keywords: Go. Scientific notation calculator This calculator supports multiplication and division numbers in scientific notation. This calculator will walk you through approximating the area using Trapezoidal Rule. Derivative Calculator. 
Free integral calculator - solve indefinite, definite and multiple integrals with all the steps. Calculus II Calculators. interval calculator symbolab . Topic: Difference and Slope, Differential Calculus, Functions, Secant Line or Secant, Tangent Line or Tangent. To use a calculator, you still have to know the correct formulas and how to apply them. This confidence interval calculator is a tool that will help you find the confidence interval for a sample of given mean, standard deviation and size. High School Math Solutions – Radical Equation Calculator. Question 454329: Use a graphing calculator to find the intervals on which the function f(x)=x^3-2x^2 is increasing or decreasing and find any relative maxima or minima. 3. Just like running, it takes The calculator will find the radius and interval of convergence of the given power series. This is based on the use of the QT interval, as extracted from an ECG test and the pulse rate of …This interval calculator helps you determine the confidence interval for the population mean based on a sample of observations from that population. The online Outlier Calculator is used to calculate the outliers of a set of numbers. To use it, enter the observed proportion, sample size, and alpha (half of the desired confidence level; so . And if you don't know that, than removing a calculator from the equation (no pun intended) is only going to hinder your learning. Scientific Thinking For Better Design. Over the next few weeks, we'll be showing how Symbolab Read More. Related Symbolab blog posts. Download this app from Microsoft Store for Windows 10, Windows 8. Check my answers please if not right what should it be. The bisection method in mathematics is a root-finding method that repeatedly bisects an interval and then selects a subinterval in which a root must lie for further processing. See screenshots, read the latest customer reviews, and compare ratings for Binomial Confidence Interval Calculator. 
The sum of the initial and final velocity is divided by 2 to find the average. ProsEasy to enter math problems: The app comes Equation Calculator. Mainly just to test out how the widget creation process works. more. Intervals and Tests. Tool to compute the mean of a function in order to find the average value of its integral over a given interval. 1, Windows 10 Mobile, Windows 10 Team (Surface Hub), HoloLens, Xbox One. es. It is a interval notation solver that represent the interval as per the interval given. Related Symbolab blog posts Advanced Math Solutions – Integral Calculator, integration by parts, Part II In the previous post we covered integration by parts. advertisement In both cases, you can either use the formula to compute the interval by hand or use a graphing calculator (or other software). This confidence interval calculator estimates the margin of error/accuracy of a survey by considering its sample & population sizes and a given percentage of choosing specific choice. This app can be used to find the slopes of secants to the curve of (in blue). This calculator will compute the 99%, 95%, and 90% confidence intervals for the mean of a normal population, given the sample mean, the sample size, and the sample standard deviation. txt) or read online. Improper Integral Calculator Follow @symbolab. The calculator supports both one-sided and two-sided limits. x(t) = x 0 × (1 + r) t. This notation is my favorite for intervals. Advanced Math Solutions – Integral Calculator, the complete guide. The act of declaring interest to be principal is called compounding Independent T-test. The equation x = ½( v + u)t can be manipulated, as shown below, If is continuous on the interval and differentiable on , then at least one real number exists in the interval such that . Calculator Project. com is the ideal place to head to! Confidence Interval for the Mean Calculator. 
Example 1: Evaluate the integral of the given function, f(x) = 1/x 3 with the limits of integration [1, ∞). If the interval is closed then it is represented as: [x, y] and if it is open then we write it as: (x, y). Algebra Calculator is a calculator that gives step-by-step help on algebra problems. Limit calculator. com TopEvery function has a Domain which contains the values at which the function is defined. Calculus Calculator Calculate limits, integrals, derivatives and series step-by-step. The chance that the sample point estimate is the same as the unknown population completion rate is extremely unlikely. It is also an indicator of how stable your estimate is, which is the measure of how close your measurement will be to the original estimate if you repeat your experiment. FB's Interval Calculator Please enter a tonic (Ex: C, D#, Bb), then enter the desired interval (Ex: P12, m3, A6, d11). The equation x = ½( v + u)t can be manipulated, as shown below,The QT c calculator is designed to compute the corrected QT interval, which is an estimation of the QT interval when a heart is beating at a rate of 60 bpm. Tap for more steps Replace the variable with in the expression . 1. The maximum will occur at the highest value and the minimum will occur at the lowest value. Interval Variable An interval variable is a variable that falls on the interval scale. Free Statistics Calculators: Home researchers, students, or any other curious or interested party. The QT values can be obtained from the ECG test. Interval notation calculator - solve-variable. Documents Similar To Definite Integral Calculator - Symbolab. Enter how many in the sample, the mean and standard deviation, choose a confidence level, and the calculation is done live. Enter N Enter X Enter σ or s Enter Confidence Interval % Rounding Digits . Now, three cases may arise: f(c) = 0: c is the required root of the equation. Enter the value of b =. 
Free Circle calculator - Calculate circle area, center, radius and circumference step-by-stepIndependent Samples Confidence Interval Calculator This simple confidence interval calculator uses a t statistic and two sample means ( M 1 and M 2 ) to generate an interval estimate of the difference between two population means (μ 1 and μ 2 ). The average velocity calculator solves for the average velocity using the same method as finding the average of any two numbers. Probability of a Normal Distribution. This is based on the use of the QT interval, as extracted from an ECG test and the pulse rate of the patient, measured in beats per minute. pdf), Text File (. High School Math Solutions – Partial Fractions Calculator. Secant Slope Calculator. interval calculator | integral calculator | integral | integral definition | integral solver | integral meaning | integral synonym | integral of lnx | integral Re: Service Interval Calculator I have included an example of what I was given to work with It's no wonder you're having a problem, you were given pictures instead of a spreadsheet with data that can be manipulated. If the function is defined piecewise, enter the upper limit of the first interval in the field labeled "Sub-interval 1" and The calculator will find the domain, range, x-intercepts, y-intercepts, derivative, integral, asymptotes, intervals of increase and decrease, critical points, extrema (minimum and maximum, local, absolute, and global) points, intervals of concavity, inflection points, limit, Taylor polynomial, and graph of the single variable function. We maintain a lot of great reference tutorials on subject areas varying from mathematics content to formulas Indefinite Integral Calculator - Symbolab - Download as PDF File (. Input Results User Experience Salaries & Calculator (2018) How To Make Personas More Scientific. 
Formula to calculate average value of a function is given by: Enter the average value of f(x), value of interval a and b in the below online average value of a function calculator and then click calculate button to find the output with steps. Asymptotes · Critical Points · Inflection Points · Monotone Intervals · Extreme Points Free functions extreme points calculator - find functions extreme and saddle points step-by-step. This calculator will find either the equation of the hyperbola (standard form) from the given parameters or the center, vertices, co-vertices, foci, asymptotes, focal parameter, eccentricity, (semi)major axis length, (semi)minor axis length, x-intercepts, and y-intercepts of the entered hyperbola. com gives useful advice on interval notation calculator, rational numbers and arithmetic and other algebra subjects. Integral Calculator computes an indefinite integral (anti-derivative) of a function with respect to a given variable using analytical integration. Single-Sample Confidence Interval Calculator Using the Z Statistic. Homework 1. This simple confidence interval calculator uses a t statistic and sample mean (M) to generate an interval estimate of a …Polynomial graphing calculator This page help you to explore polynomials of degrees up to 4. Enter the function f(x), A and B values in the average rate of change calculator to know the f(a), f(b), f(a)-(b), (a-b), and the rate of change. Javascript interval calculator This tool will give you the rational approximations (and what are known as semi-convergents) of a given interval in the left box, and the equal-division of the octave approximations in the right box. A notation to express the interval as a pair of numbers is called as the interval notation. Enter the value of f(a) =. Riemann integral sums are used to calculate area under the curve. 
Related Symbolab blog posts Advanced Math Solutions – Integral Calculator, common functions In the previous post we covered the basic integration rules (click here). Read Confidence Intervals to learn more. com's Confidence Interval calculator , formulas & workout with steps to estimate the confidence limits for a unknown value of parameter about to lie between the intervals in statistical surveys or experiments. Multiple Integrals Calculator Follow @symbolab. Documents Similar To Indefinite Integral Calculator - Symbolab. Therefore you have to calculate in a different way. This calculator computes the output values of poisson and cumulative poisson distribution with respect to the input values of average rate of success and the random variables. Advanced Math Solutions – Laplace Calculator, Laplace Transform. A Poisson experiment examines the number of times an event occurs during a specified interval
# Search for neutral color-octet weak-triplet scalar particles in proton-proton collisions at √s = 8 TeV

The CMS Collaboration

## Abstract

A search for pair production of neutral color-octet weak-triplet scalar particles (Θ0) is performed in processes where one Θ0 decays to a pair of b quark jets and the other to a Z boson plus a jet, with the Z boson decaying to a pair of electrons or muons. The search is performed with data collected by the CMS experiment at the CERN LHC corresponding to an integrated luminosity of 19.7 fb−1 of proton-proton collisions at √s = 8 TeV. The number of observed events is found to be in agreement with the standard model predictions. The 95% confidence level upper limit on the product of the cross section and branching fraction is obtained as a function of the Θ0 mass. The 95% confidence level lower bounds on the Θ0 mass are found to be 623 and 426 GeV for two different octo-triplet theoretical scenarios. These are the first direct experimental bounds on particles predicted by the octo-triplet model.

Published in: Journal of High Energy Physics, vol. 2015, issue 9, article 201 (2015 Sep 1). https://doi.org/10.1007/JHEP09(2015)201

Keywords: Exotics
Last modified on 9 December 2014, at 03:18

# Section 1.2 - Orbital Mechanics

## Introduction

Astrodynamics, or Orbital Mechanics, is mainly concerned with motions under gravity, either purely as a single force, or in combination with forces like thrust, drag, lift, light pressure, and others. As a topic it has a long history: the motions of the planets, Moon, and Sun have been studied since ancient times, with a scientific base starting about 500 years ago. With the advent of human-built spacecraft it has shifted from merely observing the motions of natural bodies to planning and executing missions. The relevance to Space Systems engineering is, of course, the need to deliver hardware to a desired destination or orbit. We will present some of the key ideas here. A more detailed and advanced introduction can be found in the Wikibook Astrodynamics and a set of Wikipedia articles at Astrodynamics - A Compendium. A good introductory printed textbook is Fundamentals of Astrodynamics, and there is an MIT Astrodynamics open course with downloadable materials.

## Orbits

An Orbit is the path that an object will follow when only affected by gravity. Orbits around uniform single bodies are Conic Sections, which are shapes generated by slicing a cone. In order of eccentricity these are the circle, ellipse, parabola, and hyperbola. Circular and elliptical orbits are bound to the body being orbited and will repeat. Parabolic and hyperbolic orbits are not bound to the body, although influenced by its gravity. They will not repeat.

Simple orbit calculations only consider the nearest massive body. This is suitable when that body's attraction is much greater than that of other bodies, and for short time periods. More detailed and accurate calculations have to consider non-uniformity of the main body, and all other bodies with enough gravity to influence the accuracy of the result.
Since gravity varies as the inverse square of distance, it never falls to zero, and every object in the Universe attracts every other object. But for the purpose of a given calculation, only sufficiently near and massive objects will make enough of a difference to affect the result, and the rest of the Universe can be ignored.

First let's consider the ideal case of a single uniform massive object being orbited. Circular orbits have a constant velocity and distance from the center of mass of the body. This also means they have a constant Orbital Period, the time to complete one revolution around the body and return to the starting point. The circular orbit velocity, vo, for any body can be found from:

$v_o = \sqrt{GM/r}$

where G is the Gravitational constant (6.67 x 10^-11 N·m^2/kg^2), M is the mass of the body orbited (in kg), and r is the radius to the center of the body orbited (in meters). G is a universal constant, and the mass of the Earth is essentially constant (neglecting falling meteors and things we launch away from Earth), so often the product GM = K = 3.986 x 10^14 m^3/s^2 is used.

The orbital period $T\,$, or time to complete one orbit, of a small body orbiting a central body in a circular or elliptic orbit is

$T = 2\pi\sqrt{a^3/K}$

where a is the semi-major axis of the orbit, defined in the next subsection.

Escape velocity, the velocity required to escape from a body's gravity to infinity, or ve, is found by

$v_e = \sqrt{2GM/r}$

Since this formula is the same as that for circular orbit, except for a factor of 2 inside the square root, escape velocity is the square root of 2 (1.414+) times circular orbit velocity. Elliptical orbits will have a velocity at the nearest point to the body, or perigee, in between that of circular and escape.

### Orbit Parameters

Several parameters are required to describe the location and orientation of an orbit, the shape of the orbit, and the position of an orbiting body at a given time.

Axes - Periodic orbits are generally ellipses.
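The circular velocity, orbital period, and escape velocity formulas can be sketched in Python. This is a minimal sketch using the value of K given in the text; the 400 km low-Earth-orbit altitude and 6378 km Earth radius are assumed example values, not from the text.

```python
import math

K = 3.986e14  # GM for Earth, m^3/s^2 (value given in the text)

def circular_orbit_velocity(r):
    """Circular orbit velocity in m/s at radius r (m) from Earth's center."""
    return math.sqrt(K / r)

def orbital_period(a):
    """Period in seconds for a circular or elliptic orbit of semi-major axis a (m)."""
    return 2 * math.pi * math.sqrt(a**3 / K)

def escape_velocity(r):
    """Escape velocity in m/s: sqrt(2) times circular orbit velocity."""
    return math.sqrt(2 * K / r)

# Example: a circular orbit 400 km above Earth's ~6378 km radius (assumed values)
r_leo = 6_378_000 + 400_000
print(round(circular_orbit_velocity(r_leo)))  # 7669 m/s
print(round(orbital_period(r_leo) / 60, 1))   # 92.6 minutes
print(round(escape_velocity(r_leo)))          # 10845 m/s
```

Note that the escape velocity comes out exactly sqrt(2) times the circular velocity at the same radius, as the text states.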
An ellipse has a major and minor axis, which are the longest and shortest distances across the center of the ellipse. These axes are perpendicular to each other. Half of these axes, or the distances from center to edge of the ellipse, are called the Semi-major and Semi-minor axes respectively, with symbols a and b. The Semi-major axis is the value usually used to describe the overall size of an orbit.

Eccentricity - The foci of an ellipse are the points along the major axis such that the sum of the distances from the foci to any point on the ellipse is constant. An orbit of a small body around a more massive one will have the massive one located at one focus of the elliptical orbit. The Focal length, f, is the distance from the focus to the center of the ellipse, and the shape of the orbit is measured by Eccentricity, e, which is defined as:

$e = f/a$

The higher the eccentricity, the narrower the ellipse is relative to the semi-major axis, and the greater the difference between the nearest and farthest points of the body from the one it is orbiting.

Perigee and Apogee - The prefixes peri- and ap- refer to the nearest and farthest points of an orbit from the center of the body being orbited. Different suffixes are used to indicate which body is being orbited, but perigee and apogee mean the lowest and highest points of an Earth orbit, and generically those of any orbit. The general symbols are q for perigee and Q for apogee, which can be found from the formulas:

$q = a-f = a(1-e)$

$Q = a+f = a(1+e)$

$f = ae$

### Lagrange Points

Given two large bodies, such as the Sun and Jupiter, and a third small body, such as an asteroid, there are five points relative to the large bodies where the net forces keep the small body in the same position relative to the two larger ones. Three of these, named L1, L2, and L3, are unstable. If you move slightly away from the exact point, you will tend to move further away. The other two, L4 and L5, are stable.
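The perigee/apogee relations from the Orbit Parameters subsection can be sketched directly; the semi-major axis and eccentricity below are hypothetical example values.

```python
def perigee_apogee(a, e):
    """Return (q, Q, f): perigee, apogee, and focal length for an orbit
    with semi-major axis a and eccentricity e, using
    q = a(1-e), Q = a(1+e), f = ae."""
    return a * (1 - e), a * (1 + e), a * e

# Hypothetical orbit: a = 10,000 km, e = 0.25
q, Q, f = perigee_apogee(10_000.0, 0.25)
print(q, Q, f)  # 7500.0 12500.0 2500.0
```

As a sanity check, q + Q = 2a and Q - q = 2f for any eccentricity.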
Slight movements around these points will not cause the small body to drift away. L1, L2, and L3 are located between, behind, and opposite the second of the large bodies, respectively. L4 and L5 are located in the same orbit as the second large body, 60 degrees ahead of and behind it. As the largest planet, Jupiter has the largest collection of asteroids in its Lagrange points. Such natural objects are called "Trojans", since the first few found at Jupiter were named after characters in the Trojan War.

### Rotation

Nearly every natural body in orbit also rotates, so that a point on the surface of the body has a velocity greater than zero. This has several effects, discussed below:

Rotation Period - This is the time it takes the body to complete one rotation with respect to the stars, the Sun, or the planet if it is a satellite of one. The most obvious effect of the rotation period is the day-night cycle on Earth. Some objects become locked into a rotational resonance with the parent body they orbit. This means the rotation period is a simple fraction of the orbit period. When the resonance is 1:1, the body is called Tidally Locked, and the Moon is the most obvious example of that. The result is that one side always faces the Earth, with a small wobble.

Axial Tilt - Rotation defines an axis of rotation. The places where the axis meets the surface of the body are called poles, and the midpoint of the surface between the poles is called the Equator. On smaller bodies with irregular shape, the Equator may not be well defined. On larger bodies, which are more or less round, the Equator has the largest distance from the rotation axis. Axial Tilt is the angle the body's axis makes with the axis of the body's orbit. The rotational inertia of large bodies causes their rotation axis to remain relatively fixed with respect to the stars. As the body orbits, first one pole, then the other, points towards the Sun, causing seasonal changes.
Rotational Velocity - When you are on the surface of a rotating body, the circular motion about the axis produces an acceleration which reduces apparent gravity. The velocity and acceleration depend on the distance from the axis and the rotation period. For example, at the Earth's equator the rotation velocity is 465 m/s, which generates an acceleration of 0.0338 m/s², or about 0.3% of gravity. Thus the apparent weight is less at the equator than at the poles.

Large bodies, more than about 1000 km in diameter, have internal forces greater than the strength of their internal materials. Since rotation lowers effective gravity in some parts relative to others, the body flows into an ellipsoidal, or flattened, shape. This is called hydrostatic equilibrium.

The rotation of any body lowers the difference between orbit velocity and surface velocity when they are in the same direction. In the case of the Earth it is 5.9% less, but in the case of some asteroids, like 4 Vesta, it can be 36% or higher. Very small objects which do not have structural flaws can even rotate faster than orbital velocity around them, producing regions where you cannot remain on the surface without mechanical aid.

### Perturbations

Gravity forces extend to infinity. Therefore nearby bodies, such as the Moon and Sun for the Earth, also add an acceleration component to the gravity of the Earth. This varies over time as their direction and distance changes. The differences in gravity between the near and far sides of a body relative to nearby bodies are called Tidal Forces, because they are the source of ocean tides on Earth. They also distort the shape of the solid part of the body. The gravity forces of other bodies also affect the orbit of the body as a whole. The forces besides the strongest one are called Perturbations, because they perturb the orbit caused by the strongest gravity force. On long time scales perturbations can drastically affect an orbit. This is most obvious in the case of Jupiter and comets.
Comets are often near escape velocity, so small changes can easily change the orbit, and Jupiter has the most mass to cause such changes.

## Velocity Map

In space, physical distance does not matter as much as velocity, since space is mostly frictionless, and what costs you fuel is changing velocity. This graph shows the minimum ideal velocity relative to escape for the Sun's gravity well on the horizontal axis, and for planetary wells and some satellites and asteroids out to Jupiter on the vertical axis. There is no absolute reference frame against which to measure velocity. We choose escape as the zero point since it has the physical meaning of "to leave this gravity well, you must add this much velocity". Since you must add velocity to leave, the values are negative. If you have more than enough velocity to leave a gravity well, that is called excess velocity, and is measured infinitely far away.

Determining Total Velocity

Total mission velocity is the sum of vertical and horizontal velocity changes on the graph, both in km/s. To travel from Earth to Mars, for example, you first have to add velocity to climb out of the Earth's gravity well, add velocity to change orbit within the Sun's gravity well, then subtract velocity to go down Mars' gravity well. On the graph, that means taking the vertical axis velocity change from the Earth's surface to the top line, which is Solar System orbits (11.18), plus the horizontal segment to go from the Earth's orbit to Mars' orbit (2.3), plus the vertical change to go to the Martian surface (5.03). That gives a total mission velocity of 18.5 km/s, which has to be accounted for by various propulsion systems. To return to Earth, you then reverse the process. The graph shows theoretical values (single impulse to escape).
Real changes in velocity (delta-V) will be higher because (1) maneuvers are not perfectly efficient, (2) orbits are elliptical and inclined, and (3) propulsion systems incur losses in performing a given maneuver. The various losses are measured by the difference between ideal velocity, the velocity you would reach in a vacuum with no gravity well present, and the actual velocity you reach in a given circumstance. Therefore this chart is not an exact method for mission planning. It is intended to give a rough estimate as a starting point from which more detailed planning can proceed.

Velocity Bands

There are two velocity regions on the vertical axis for each planet or satellite. The lower sub-orbital region is where you have enough velocity to get off the object, but not to be in a stable orbit. Those orbits will intersect the body again. So they can be used to travel from point to point on the body's surface, but not to stay in motion for multiple orbits. The higher orbital band, shown with a thicker line, indicates enough velocity for a repeating orbit. The shape of the orbit matters too, but for circular orbits the lowest point in this band is an orbit just above the surface of the body, and the highest point is an orbit just fast enough to escape from its gravity well.

Since gravity varies as the inverse square of distance, relatively large velocity changes are needed for small altitude changes near the surface of a body. Conversely, near escape velocity, relatively small velocity changes can produce large changes in altitude, and at escape they produce a theoretically infinite change. In reality there are multiple gravity wells that overlap, so escape from Earth merely places you in the larger Solar gravity well, and escape from the Sun places you in the larger gravity well of the Galaxy.

Solar Orbits

The top blue line represents orbits around the Sun away from local gravity wells.
The surfaces of the two largest asteroids, 4 Vesta (-0.35) and 1 Ceres (-0.51), are marked, but the orbital bands for these objects, and the entire gravity wells of most smaller asteroids, are too small to show. Instead, the range of Solar velocities is shown for Near Earth Objects and the Main Belt between Mars and Jupiter. In reality the velocities of small objects in the Solar System are spread across the entire chart; the two marked ranges are just of particular interest. The surfaces of Jupiter and the Sun, and their sub-orbital ranges, are off the scale of this chart because of their very deep gravity wells.

## Powered Flight

### Ascent Trajectories

Circular orbit velocity at the Earth's surface is 7910 meters/sec. At the equator, the Earth rotates eastward at 465 meters/sec, so in theory a transportation system has to provide the difference, or 7445 meters/sec. The Earth's atmosphere causes losses that add to the theoretical velocity increment for many space transportation methods. In the case of chemical rockets, they normally fly straight up initially, so as to spend the least amount of time incurring aerodynamic drag. The vertical velocity thus achieved does not contribute to the circular orbit velocity (since they are perpendicular), so an optimized ascent trajectory rather quickly pitches down from vertical towards the horizontal. Just enough climb is used to clear the atmosphere and minimize aerodynamic drag. The rocket consumes fuel to climb vertically and to overcome drag, so it would achieve a higher final velocity in a drag- and gravity-free environment. The velocity it would achieve under these conditions is called the 'ideal velocity'. It is this value that the propulsion system is designed to meet. The 'real velocity' is what the rocket actually has left after the drag and gravity effects. These are called drag losses and gee losses respectively.
A real rocket has to provide about 9000 meters/sec to reach orbit, so the losses are about 1500 meters/sec, or a 20% penalty.

Boost From a Non-rotating Body

To go from a non-rotating body's surface to orbit requires that a rocket change its velocity from rest (zero) to a velocity that will keep the payload in orbit. If our rocket maintains a constant thrust during its ascent, then the total velocity change is

$\int_0^{t_{orbit}} a\,dt = \int_0^{t_{orbit}} \left({T \over m} - {D \over m} - g\right) dt$

where $a$ is the acceleration, $T$ is the thrust, $D$ is the drag, $m$ is the vehicle mass, and $g$ is the planet's gravitational pull.

Boost From a Rotating Body

### Mass Ratio: Tsiolkovsky Rocket Equation

For any rocket which expels part of its mass at high velocity to provide acceleration, the total change in velocity, delta-v, can be found from the exhaust velocity $v_e$ and the initial and final masses $m_0$ and $m_1$ by

$\Delta v = v_\text{e} \ln \frac {m_0} {m_1}$

The difference between the initial mass $m_0$ and the final mass $m_1$ represents the propellant or reaction mass used. The ratio of the initial and final masses is called the Mass Ratio. The final mass consists of the vehicle hardware plus cargo mass. If the cargo mass is set to zero, then a maximum delta-v is reached for the particular technology, and missions that require more than this are impossible.

### Staging

A certain fraction of a vehicle's loaded initial mass will be the vehicle's own hardware. Therefore from the above rocket equation there is a maximum velocity it can reach even with zero payload. When the required mission velocity is near or above this point, dropping some of the empty vehicle hardware allows continued flight with a new mass ratio range based on the smaller hardware mass. This is known as Staging, and the components of the vehicle are numbered in order of last use as first stage, second stage, etc. Last use is mentioned because stages can operate in parallel, so the one to be dropped first gets the lower stage number.
The velocity to reach Earth orbit is approximately twice the exhaust velocity of the best liquid fuel mixes in use, thus the rocket equation yields a mass ratio of $e^2$, or 7.39, and a final mass of 13.5%. This percentage is close to the hardware mass of typical designs, so staging has commonly been used with rockets going to Earth orbit.

We desire a rocket with a number of stages that optimizes the economic efficiency (cost per unit of payload mass). The economic efficiency depends on a number of factors, the mass efficiency being only one of them. Let us assume that we desire to launch a payload of weight P. The weight of each stage in the stack is

$W_i = P w_i$

where $w_i$ is a normalized weight for the stage. The total stack weight is thus

$W = P \left(1 + \sum_{i=1}^N w_i \right)$

The velocity change provided by the ith stage is

$\Delta v_i = I_{sp_i} \ln \mu_i$

where $\mu_i$ is the ratio of the weight before the burn of the ith stage to the weight after the burn of that stage. Thus, $\mu_i$ always has a value greater than 1. The total velocity change for all the stages is

$\Delta v = \sum_{i=1}^N I_{sp_i} \ln \mu_i$
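The rocket equation and the benefit of staging can be illustrated numerically. This sketch uses a hypothetical two-stage vehicle; the exhaust velocity and mass numbers are illustrative assumptions of mine, not values from the text:

```python
import math

def delta_v(v_exhaust, m0, m1):
    """Tsiolkovsky rocket equation: dv = v_e * ln(m0/m1), in m/s."""
    return v_exhaust * math.log(m0 / m1)

ve = 4500.0  # m/s, roughly an upper-end liquid-fuel exhaust velocity (assumed)

# Single stage: 100 t loaded, 10 t of hardware left at burnout, no payload.
single = delta_v(ve, 100.0, 10.0)

# Two stages: burn 70 t, drop 8 t of first-stage hardware, then burn 20 t more.
# Stage 1: 100 t -> 30 t.  Stage 2 (after dropping 8 t): 22 t -> 2 t.
staged = delta_v(ve, 100.0, 30.0) + delta_v(ve, 22.0, 2.0)

print(round(single))  # the single-stage ceiling for this hardware fraction
print(round(staged))  # the staged total exceeds it with the same propellant load
```

Dropping the spent hardware raises the effective mass ratio of the remaining burn, which is exactly why staging pays off when mission velocity approaches the single-stage limit.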
http://math.stackexchange.com/questions/136306/equilibrium-distance-formula-proof
# Equilibrium distance formula proof

Let $$d: \mathbb{R}^n \times \mathbb{R}^n \longrightarrow \mathbb{R}$$ be defined by $$d(x_i,x_j)=\frac{|x_i-x_j|}{\sqrt{M(i)M(j)}},$$ where $M(i)$ represents the average distance between $x_i$ and the other points, $M(j)$ represents the average distance between $x_j$ and the other points, and $|x_i-x_j|$ is the standard Euclidean distance. I need to prove that $d$ satisfies the triangle inequality. Thanks!

This is false. The triangle inequality would be $$\frac{|x_i-x_k|}{\sqrt{M(i)M(k)}}\le\frac{|x_i-x_j|}{\sqrt{M(i)M(j)}}+\frac{|x_j-x_k|}{\sqrt{M(j)M(k)}}\;.$$ Multiply through by $\sqrt{M(i)M(j)M(k)}$ to obtain $$|x_i-x_k|\sqrt{M(j)}\le|x_i-x_j|\sqrt{M(k)}+|x_j-x_k|\sqrt{M(i)}\;.$$ Now we can assume that there are an arbitrary number of points arbitrarily close to $x_k$, which would make $M(k)$ arbitrarily close to $0$, $M(i)$ arbitrarily close to $|x_i-x_k|$ and $M(j)$ arbitrarily close to $|x_j-x_k|$. Thus for the triangle inequality to hold in all cases, we would need to have $$|x_i-x_k|\sqrt{|x_j-x_k|}\le|x_j-x_k|\sqrt{|x_i-x_k|}$$ and thus $$|x_i-x_k|\le|x_j-x_k|\;,$$ which is obviously not always true.

thank you, joriki. –  user29860 Apr 26 '12 at 10:46
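The limiting argument above can be made concrete with a numerical counterexample. This is a sketch; the specific point positions (a 1-D configuration with a large cluster at $x_k$) are my own choice, not from the answer:

```python
# 1-D points: x_i = 0, x_j = 2, x_k = 3, plus a large cluster sitting at x_k.
pts = [0.0, 2.0, 3.0] + [3.0] * 1000

def M(idx):
    """Average distance from pts[idx] to every other point."""
    p = pts[idx]
    dists = [abs(p - pts[j]) for j in range(len(pts)) if j != idx]
    return sum(dists) / len(dists)

def d(i, j):
    """The candidate metric d(x_i, x_j) = |x_i - x_j| / sqrt(M(i) M(j))."""
    return abs(pts[i] - pts[j]) / (M(i) * M(j)) ** 0.5

# The cluster drives M(k) toward 0, inflating d(i, k) past d(i, j) + d(j, k):
print(d(0, 2))           # "direct" distance i -> k
print(d(0, 1) + d(1, 2)) # detour through j is shorter: triangle inequality fails
```

With $M(k) \approx 4/1002$, the left side is roughly 27 while the right side is roughly 17, so the triangle inequality is violated exactly as the answer predicts.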
https://www.physicsforums.com/threads/energy-eigenvalue-and-eigen-vector.183051/
# Energy eigenvalue and eigenvector

1. Sep 4, 2007

### dusrkeric

I have some questions on energy eigenvalues and eigenfunctions; help please. A particle, mass m, exists in 3 dimensions, confined in the region 0 < x < 2L, 0 < y < 3L, 0 < z < 3L. a) What are the energy eigenvalues and eigenfunctions of the particle? b) If the particle is a neutron which is confined in a volume with L = 10^-15 m, what are the three lowest energy eigenvalues, in MeV? What is the lowest energy eigenvalue which is degenerate?

2. Sep 5, 2007

### olgranpappy

I think this should be posted in the homework help section, eh?

3. Sep 5, 2007

### malawi_glenn

Yes, and also show work done, etc. Exactly what is it that you don't understand? If we don't know, then we cannot help you. It is against the policy of the forums to just hand out solutions to problems. Our teachers in real life do not do so either. Somebody will move this post eventually, so "dusrkeric", do not make a new one.

4. Sep 5, 2007

### olgranpappy

No, it's a mere homework issue.

5. Sep 5, 2007

### premagg

Simple formula, dude... use the equations for a particle in a three-dimensional box. But you don't mean that V outside the box is finite, I guess.

6. Sep 6, 2007

### Reshma

These have pretty straightforward solutions. Please post your work/formulae that you have used. Just a few hints...

a] Use the energy eigenvalue equation for a 3-dimensional box after normalizing the eigenfunctions.

b] This is even easier... use the particle in a 3D square box solution.

7. Sep 6, 2007

### Gokul43201

Staff Emeritus

The box is not a cube - it is a cuboid. You cannot use the square well energy eigenvalues.

8. Sep 7, 2007

### malawi_glenn

Why not? The solution is obtained via separation of variables. And the general solution in a 1-dim box of length a is

$$\sqrt{2/a} \sin \dfrac{n \pi x}{a}$$

Just substitute n, x, and a with the proper values; the solution for 3 dimensions is obtained by multiplying all these into one equation.
At least we have done so here in Sweden.

Last edited: Sep 7, 2007

9. Sep 7, 2007

### Gokul43201

Staff Emeritus

Yes, the eigenvalues of a 3D well will be sums of three 1D-well eigenvalues, but this does not make the box a "3D square box" (since, for example, $L_x \neq L_y$). I should have specified that "you cannot use the 3D square box energies."

10. Sep 7, 2007

### Reshma

Yes, it is a cuboid in the first case. However, in the second case, he has been given only one length. So it is a special case of a square box. I don't know if cases a] & b] are connected. If yes, then I am wrong.

11. Sep 7, 2007

### olgranpappy

Look. The problem just has a confusing wording. He is only given one length 'L' in part b. But since it is part "b", apparently this implies that the result of part "a" is to be applied to the specific case. There is only one length 'L' given in part "a" as well--but the region is not a cube in part a, it is a rectangular solid of sides L, 2L, and 3L. I'm sure that we all understand the elementary quantum mechanics, so the point now under discussion by Reshma and Gokul is just the slightly vague wording of part "b".

12. Sep 8, 2007

### Gokul43201

Staff Emeritus

That's right (gramps). Reshma, in part b, you are not (explicitly) given the length of the sides; you are given the value of L. The lengths of the sides are still (2L, 3L, 3L).

Last edited: Sep 8, 2007
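Following the separation-of-variables approach discussed in the thread, part (b) can be computed directly from the box energies $E = \frac{\hbar^2\pi^2}{2m}\left(\frac{n_x^2}{(2L)^2} + \frac{n_y^2}{(3L)^2} + \frac{n_z^2}{(3L)^2}\right)$. This is a sketch of my own; the physical constants and the resulting MeV values are my calculation, not from the thread:

```python
import math
from itertools import product

hbar = 1.054571817e-34    # reduced Planck constant, J*s
m_n  = 1.67492749804e-27  # neutron mass, kg
L    = 1e-15              # m; the box is 2L x 3L x 3L
MeV  = 1.602176634e-13    # J per MeV

def E(nx, ny, nz):
    """Energy eigenvalue of a particle in a 2L x 3L x 3L box, in MeV."""
    pref = hbar**2 * math.pi**2 / (2 * m_n)
    return pref * ((nx / (2 * L))**2 + (ny / (3 * L))**2 + (nz / (3 * L))**2) / MeV

# Enumerate low quantum numbers and sort by energy.
levels = sorted((E(*n), n) for n in product(range(1, 4), repeat=3))
for e, n in levels[:4]:
    print(round(e, 1), n)
# (1,1,2) and (1,2,1) come out with equal energy: the lowest degenerate level,
# since the two 3L directions are interchangeable.
```

The ground state (1,1,1) lands near 97 MeV with these constants; the degeneracy between (1,1,2) and (1,2,1) follows from the symmetry of the two equal sides.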
https://www.nbccomedyplayground.com/what-is-the-cut-off-frequency-of-a-waveguide/
# What is the cut-off frequency of a waveguide?

## What is the cut-off frequency of a waveguide?

The cutoff frequency of an electromagnetic waveguide is the lowest frequency for which a mode will propagate in it. In fiber optics, it is more common to consider the cutoff wavelength, the maximum wavelength that will propagate in an optical fiber or waveguide.

## What is the cutoff frequency of a rectangular waveguide?

The dimensions and operating frequencies of a rectangular waveguide are chosen to support only one propagating mode. An example set of mode cutoffs for a practical rectangular waveguide:

| Mode | Cut-off frequency |
|------|-------------------|
| TE10 | 21.07 GHz |
| TE01 | 42.15 GHz |
| TE11 | 47.13 GHz |
| TM10 | not supported |

## What is WR in waveguide?

The "WR" designation stands for Rectangular Waveguide. The number that follows "WR" is the width of the waveguide opening in mils, divided by 10.

## How do you calculate cutoff frequency?

How do I determine the cutoff frequency of a low-pass filter?

1. Multiply the value of resistance (R), capacitance (C), and 2π.
2. Divide 1 by the value obtained in the previous step.
3. Congrats! You have calculated the cutoff frequency of a low-pass RC filter.

## How do the dimensions of a waveguide relate to the cutoff frequency?

It is worth noting that the cut-off frequency is independent of the other dimension of the waveguide. This is because the major dimension governs the lowest frequency at which the waveguide can propagate a signal.

## What is the stable bandwidth of WR90 waveguide?

For WR-90, the cutoff is 6.557 GHz, and the accepted band of operation is 8.2 to 12.4 GHz.

## How do you calculate upper and lower cutoff frequency?

The point of maximum output gain is generally the geometric mean of the two -3 dB values between the lower and upper cut-off points and is called the "Centre Frequency" or "Resonant Peak" value ƒr. This geometric mean value is calculated as ƒr² = ƒ(UPPER) × ƒ(LOWER).

## How do you calculate the cutoff frequency of a low pass filter?
The cut-off frequency, or -3 dB point, can be found using the standard formula ƒc = 1/(2πRC). The phase angle of the output signal at ƒc is -45° for a low-pass filter.

## How do you calculate wavelength from cutoff frequency?

Details of the calculation: hfc = Φ, so fc = Φ/h = (4.2 eV)(1.6×10⁻¹⁹ J/eV)/(6.626×10⁻³⁴ J·s) = 1.01×10¹⁵ Hz is the cutoff frequency. λc = c/fc = 296 nm is the cutoff wavelength.

## How is the waveguide width related to the lower cutoff frequency?

The waveguide width determines the lower cutoff frequency and is equal (ideally) to ½ wavelength of the lower cutoff frequency.

## What is the cutoff frequency of an electromagnetic wave? In a waveguide, the cutoff frequency is the lowest frequency above which an EM mode will propagate easily. Below this frequency the waveguide will attenuate the EM mode and will not be suitable for electromagnetic wave propagation. The following equation or formula is used for the rectangular waveguide cutoff frequency calculator.

## Is there a waveguide calculator for a rectangular wall?

Waveguide Calculator (Rectangular): Pasternack's Waveguide Calculator provides the cutoff frequency, operating frequency range and closest waveguide size for a rectangular waveguide based on the custom inputted broad wall width.

## What is the width of a waveguide in mils?

The tables below will give you details on the various waveguide sizes and their properties. The number that follows "WR" is the width of the waveguide opening in mils, divided by 10. For example, WR-650 means a waveguide whose cross-section width is 6500 mils.
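Both cutoff formulas quoted above are easy to check numerically. A sketch combining the standard rectangular-waveguide cutoff, fc = (c/2)·sqrt((m/a)² + (n/b)²), with the RC low-pass cutoff; the WR-90 dimensions (0.900 in × 0.400 in) are standard values stated here as an assumption:

```python
import math

c = 299_792_458.0  # speed of light, m/s

def waveguide_cutoff(a, b, m=1, n=0):
    """Cutoff frequency (Hz) of the TE_mn mode in an a x b rectangular guide."""
    return (c / 2) * math.sqrt((m / a)**2 + (n / b)**2)

def rc_cutoff(r, cap):
    """-3 dB cutoff of a first-order RC low-pass filter: fc = 1/(2*pi*R*C)."""
    return 1 / (2 * math.pi * r * cap)

# WR-90: broad wall 22.86 mm, narrow wall 10.16 mm
fc = waveguide_cutoff(0.02286, 0.01016)       # TE10, dominant mode
print(fc / 1e9)            # ~6.56 GHz, matching the 6.557 GHz quoted above
print(rc_cutoff(1e3, 159e-9))  # ~1 kHz for 1 kOhm and 159 nF
```

For the dominant TE10 mode (n = 0), the formula reduces to fc = c/(2a), which is why the cutoff is independent of the narrow-wall dimension, as noted above.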
http://math.stackexchange.com/questions/377041/how-do-i-change-the-order-of-this-triple-integral-so-i-can-integrate-it
# How do I change the order of this triple integral so I can integrate it?

How do I change the order of this triple integral so I can integrate it? $$\int_{0}^9\int_{y=\sqrt{z}}^3\int_{x=0}^y z\cdot \cos(y^6)\,dx\,dy\,dz$$

What are the integration limits? And please write your integral using MathJax. –  Ron Gordon Apr 30 '13 at 8:36

$$\int_0^9 dz \, z \: \int_{\sqrt{z}}^3 dy \, \cos{y^6} \: \int_0^y dx = \int_0^9 dz \, z \: \int_{\sqrt{z}}^3 dy \, y \, \cos{y^6}$$ Draw a picture. You can see from that picture how to switch the order of integration and get the following $$\int_0^9 dz \, z \: \int_{\sqrt{z}}^3 dy \, y \, \cos{y^6} = \int_0^3 dy \, y \, \cos{y^6} \: \int_0^{y^2} dz \, z$$ You will find that the integral on the RHS may be done in closed form.

Thank you so much Ron. Is there a way I can reward you points? –  amanda Apr 30 '13 at 9:13

You are welcome. If you can, click the up arrow as well if you want. –  Ron Gordon Apr 30 '13 at 9:14

You have to change the order according to the dependencies between integration variables:

• $x$ depends on $y$
• $y$ depends on $z$
• $z$ has fixed bounds

So you actually just have to first integrate with respect to $x$, then integrate what you get with respect to $y$, and finally, integrate with respect to $z$: $$I=\displaystyle\int_{z=0}^9 z\left(\int_{y=\sqrt{z}}^3\cos(y^6)\left(\int_{x=0}^ydx\right)dy\right)dz$$

Hi Dolma, this re-states what the question is asking... –  amanda Apr 30 '13 at 9:05

It restates the question because that triple integral is already in the right order to integrate. –  in_wolframAlpha_we_trust Apr 30 '13 at 9:12

@in_wolfram_we_trust no, it cannot simply be integrated in this form –  amanda Apr 30 '13 at 9:14

Fair enough, I was a bit hasty in my first comment. There is no nice way to integrate $y\cos(y^6)$. –  in_wolframAlpha_we_trust Apr 30 '13 at 9:19

Oh ok, my bad. I thought you were asking a method to see in which order you had to integrate such multiple integrals. –  Dolma Apr 30 '13 at 9:32
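Carrying the accepted answer one step further (my own calculation, not stated in the thread): the inner integral gives $\int_0^{y^2} z\,dz = y^4/2$, so the whole thing reduces to $\int_0^3 \tfrac{y^5}{2}\cos(y^6)\,dy = \sin(y^6)/12\,\big|_0^3 = \sin(729)/12$. A sketch verifying the antiderivative numerically:

```python
import math

def integrand(y):
    # outer integrand after integrating over x and then z: (y^5 / 2) * cos(y^6)
    return 0.5 * y**5 * math.cos(y**6)

def F(y):
    # claimed antiderivative: d/dy [sin(y^6)/12] = (y^5 / 2) * cos(y^6)
    return math.sin(y**6) / 12

# spot-check F' == integrand by central difference at a few points
h = 1e-6
for y in (0.5, 1.0, 1.3):
    approx = (F(y + h) - F(y - h)) / (2 * h)
    assert abs(approx - integrand(y)) < 1e-5

print(F(3) - F(0))   # sin(729)/12, roughly 0.0125
```

Direct numerical quadrature of the original integrand is awkward here because $\cos(y^6)$ oscillates rapidly near $y = 3$, which is why checking the antiderivative is the more reliable test.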
http://www.popflock.com/learn?s=Isolated_point
# Isolated Point

"0" is an isolated point of A = {0} ∪ [1, 2]

In mathematics, a point x is called an isolated point of a subset S (in a topological space X) if x is an element of S but there exists a neighborhood of x which does not contain any other points of S. This is equivalent to saying that the singleton {x} is an open set in the topological space S (considered as a subspace of X). If the space X is a Euclidean space (or any other metric space), then x is an isolated point of S if there exists an open ball around x which contains no other points of S. (Introducing the notion of sequences and limits, one can say equivalently that an element x of S is an isolated point of S if and only if it is not a limit point of S.)

## Discrete set

A set that is made up only of isolated points is called a discrete set (see also discrete space). Any discrete subset S of Euclidean space must be countable, since the isolation of each of its points together with the fact that the rationals are dense in the reals means that the points of S may be mapped into a set of points with rational coordinates, of which there are only countably many. However, not every countable set is discrete; the rational numbers under the usual Euclidean metric are the canonical example.

A set with no isolated point is said to be dense-in-itself (every neighbourhood of a point contains other points of the set). A closed set with no isolated point is called a perfect set (it has all its limit points and none of them are isolated from it).

The number of isolated points is a topological invariant, i.e. if two topological spaces $X$ and $Y$ are homeomorphic, the number of isolated points in each is equal.
## Standard examples

Topological spaces in the following examples are considered as subspaces of the real line with the standard topology.

• For the set $S=\{0\}\cup [1,2]$, the point 0 is an isolated point.
• For the set $S=\{0\}\cup \{1,1/2,1/3,\dots \}$, each of the points 1/k is an isolated point, but 0 is not an isolated point because there are other points in S as close to 0 as desired.
• The set $\mathbb{N}=\{0,1,2,\ldots \}$ of natural numbers is a discrete set.
• The Morse lemma states that non-degenerate critical points of certain functions are isolated.

## A counter-intuitive example

Consider the set $F$ of points $x$ in the real interval $(0,1)$ such that every digit $x_i$ of their binary representation fulfills the following conditions:

• Either $x_i=0$ or $x_i=1$.
• $x_i=1$ only for finitely many indices $i$.
• If $m$ denotes the biggest index such that $x_m=1$, then $x_{m-1}=0$.
• If $x_i=1$ and $i<m$, then exactly one of the following two conditions holds: $x_{i-1}=1$ or $x_{i+1}=1$.

Informally, these conditions mean that every digit of the binary representation of $x$ which equals 1 belongs to a pair ...0110..., except for ...010... at the very end.

Now, $F$ is an explicit set consisting entirely of isolated points[1] which has the counter-intuitive property that its closure is an uncountable set.[2]

Another set $F$ with the same properties can be obtained as follows.
Let $C$ be the middle-thirds Cantor set, let $I_{1},I_{2},I_{3},\ldots$ be the component intervals of $[0,1]-C$, and let $F$ be a set consisting of one point from each $I_{k}$. Since each $I_{k}$ contains only one point from $F$, every point of $F$ is an isolated point. However, if $p$ is any point in the Cantor set, then every neighborhood of $p$ contains at least one $I_{k}$, and hence at least one point of $F$. It follows that each point of the Cantor set lies in the closure of $F$, and therefore $F$ has uncountable closure.
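The second standard example above ($S=\{0\}\cup\{1,1/2,1/3,\dots\}$) can be checked numerically. A minimal Python sketch (the infinite set is truncated at $N$ terms, and the helper names are ours, not from the article):

```python
# Numeric check for S = {0} ∪ {1, 1/2, 1/3, ...}, truncated at N terms.
# Each 1/k is isolated: its nearest neighbour stays at a fixed positive
# distance. 0 is not isolated: its nearest neighbour 1/N shrinks with N.

def nearest_gap(point, others):
    """Distance from `point` to the closest *other* element."""
    return min(abs(point - y) for y in others if y != point)

def gaps(N):
    S = [0.0] + [1.0 / k for k in range(1, N + 1)]
    return nearest_gap(1.0 / 3.0, S), nearest_gap(0.0, S)

for N in (10, 10_000):
    g_third, g_zero = gaps(N)
    # gap around 1/3 is always 1/3 - 1/4 = 1/12; gap around 0 is 1/N
    print(N, g_third, g_zero)
```

However large $N$ grows, the gap around $1/3$ stays at $1/12$, while the gap around $0$ is $1/N$ and vanishes in the limit, matching the statement that $0$ is not an isolated point of the full infinite set.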
https://stats.stackexchange.com/questions/117622/weight-decay-in-neural-neural-networks-weight-update-and-convergence
# Weight Decay in Neural Networks: Weight Update and Convergence

I have a neural network (that I created using Java) for a class assignment that works when I do not use any weight decay value, but when I use a value greater than or equal to 0.001, my accuracy drops greatly. The data is normalized. I am not sure if it is how I am calculating the convergence condition, or if my weight update with weight decay is incorrect.

I am using a sigmoid activation function. My classifier is binary (0 or 1); when classifying, if my output is > 0.5 the example is 1, and if it is <= 0.5 the example is 0. In my test I am using 5 hidden neurons + 1 bias, 11 input neurons + 1 bias, and 1 output neuron. When running with 0 weight decay I am getting 99% accuracy; however, when I use a value of 0.001 I am getting 56% accuracy. The accuracy I am using is (TP + TN) / (TP + TN + FP + FN).

My weight update right now is

Weight = Weight - LearningRate * WeightChange - Weight * WeightDecay

My convergence test: if the absolute difference between the sum of the current weights and the sum of the previous weights is < 0.00001, I say that the network has converged. Is this correct in thinking so? Let me know if there is any more information needed.

## 1 Answer

It is not surprising that weight decay will hurt performance of your neural network at some point. Let the prediction loss of your net be $\mathcal{L}$ and the weight decay loss $\mathcal{R}$. Given a coefficient $\lambda$ that establishes a tradeoff between the two, one optimises $$\mathcal{L} + \lambda \mathcal{R}.$$ At the optimum of this loss, the gradients of the two terms will have to sum to zero: $$\nabla \mathcal{L} = -\lambda \nabla \mathcal{R}.$$ This makes clear that we will not be at an optimum of the training loss. Even more so, the higher $\lambda$, the steeper the gradient of $\mathcal{L}$, which in the case of convex loss functions implies a greater distance from the optimum.
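For concreteness, the asker's update rule can be written as a few lines of NumPy (a sketch in Python rather than the original Java; the array values are illustrative):

```python
import numpy as np

def update_weights(weights, grad, learning_rate, weight_decay):
    """One step of the rule from the question:
    w <- w - lr * grad - weight_decay * w.
    The decay term shrinks every weight toward zero each step."""
    return weights - learning_rate * grad - weight_decay * weights

w = np.array([0.5, -1.2, 0.8])
g = np.array([0.1, -0.2, 0.05])

print(update_weights(w, g, learning_rate=0.1, weight_decay=0.0))
print(update_weights(w, g, learning_rate=0.1, weight_decay=0.001))
```

With decay 0.001 every weight is additionally pulled 0.1% of the way toward zero on each step, which is exactly the tradeoff the answer formalizes: the minimum of $\mathcal{L} + \lambda\mathcal{R}$ is not the minimum of the training loss $\mathcal{L}$, so some drop in training accuracy is expected.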
http://www.akademikidea.org/a-ders/mod/page/view.php?id=82
## Ethics in Three Rules

Rule 1: Don't express any idea that is not yours to someone else,

• neither in written form nor orally
• as if it were your own idea.

Rule 2: Be sure that your information source

• obeys Rule 1.

Rule 3: Share the information and knowledge you have with those

• who obey Rules 1 and 2.
https://math.stackexchange.com/questions/1172893/solve-int-cos-sqrt-x-dx-using-a-combination-of-substitution-and-integrati
# Solve $\int \cos\sqrt{x} \, dx$ using a combination of substitution and integration by parts

My textbook says I should solve the following integral by first making a substitution, and then using integration by parts: $$\int \cos\sqrt{x} \, dx$$ The problem is, after staring at it for a while I'm still not sure what substitution I should make, and hence I'm stuck at the first step. I thought about doing something with the $\sqrt{x}$, but that doesn't seem to lead anywhere as far as I can tell. Same with the $\cos$. Any hints?

• Don't just think about it, actually try doing it. The substitution $t = \sqrt{x}$ is correct. – Dylan Mar 6 '15 at 3:59

## 3 Answers

Make the substitution $u = \sqrt{x}$, so $x = u^2$ and $dx = 2u \, du$. Now the integral $\int \cos \sqrt{x} \, dx$ is transformed into $$2\int u \cos u \, du = 2 \int u \, d(\sin u) = 2\left( u\sin u - \int \sin u \, du\right) = 2\left( u\sin u + \cos u\right) + C.$$ Substituting back $u = \sqrt{x}$ gives $2\left(\sqrt{x}\sin\sqrt{x} + \cos\sqrt{x}\right) + C$.

• That's quite a hint. :) – John Hughes Mar 3 '15 at 4:06
• @Asker, that was an error. fixed it. – abel Mar 3 '15 at 4:29

Try $x = u^2$, and $dx = 2u \, du$.

Let $x=t^2$; then $dx=2t\,dt$, so $$\int\cos\sqrt{x}\,dx=\int 2t\cos t\,dt.$$ Then use integration by parts.
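The answers can be sanity-checked symbolically: differentiating the antiderivative $2(\sqrt{x}\sin\sqrt{x}+\cos\sqrt{x})+C$ should recover the integrand. A quick SymPy sketch (assuming SymPy is available):

```python
import sympy as sp

x = sp.symbols('x', positive=True)

# Antiderivative from the substitution u = sqrt(x) plus integration
# by parts, written back in terms of x:
F = 2 * (sp.sqrt(x) * sp.sin(sp.sqrt(x)) + sp.cos(sp.sqrt(x)))

# d/dx F must equal cos(sqrt(x)); the difference simplifies to 0.
print(sp.simplify(sp.diff(F, x) - sp.cos(sp.sqrt(x))))  # 0
```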
https://byjus.com/multiple-angle-formulas/
# Multiple Angle Formulas

Multiple angle formulas express trigonometric functions of multiples of an angle ($\sin n\theta$, $\cos n\theta$, $\tan n\theta$) in terms of functions of $\theta$. The double and triple angle formulas are the most common special cases. Sine, cosine and tangent are the functions for which multiple angle formulas are usually stated.

The sine formula for a multiple angle is:

$\large \sin n\theta = \sum_{k=0}^{n} \binom{n}{k} \cos^{k}\theta \; \sin^{n-k}\theta \; \sin\left[\frac{1}{2}\left(n-k\right)\pi\right]$

where $n = 1, 2, 3, \ldots$ The first cases are:

$\large \sin 2\theta = 2 \cos\theta \, \sin\theta$

$\large \sin 3\theta = 3 \cos^{2}\theta \, \sin\theta - \sin^{3}\theta$

The multiple angle cosine formula is given below:

$\large \cos n\theta = \sum_{k=0}^{n} \binom{n}{k} \cos^{k}\theta \, \sin^{n-k}\theta \; \cos\left[\frac{1}{2}\left(n-k\right)\pi\right]$

where $n = 1, 2, 3, \ldots$ The first cases are:

$\large \cos 2\theta = \cos^{2}\theta - \sin^{2}\theta$

$\large \cos 3\theta = \cos^{3}\theta - 3\cos\theta \, \sin^{2}\theta$

Tangent multiple angle formula:

$\large \tan n\theta = \frac{\sin n\theta}{\cos n\theta}$

### Solved Examples

Question 1: Prove that $\frac{\sin x+\sin 2x}{1+\cos x+\cos 2x}=\tan x$

Solution: Using the identities and formulas above:

$\frac{\sin x+\sin 2x}{1+\cos x+\cos 2x} =\frac{\sin x+2\sin x\cos x}{1+\cos x+2\cos^{2}x-1} =\frac{\sin x\left(1+2\cos x\right)}{\cos x\left(1+2\cos x\right)}=\tan x$
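The double- and triple-angle identities $\sin 2\theta = 2\cos\theta\sin\theta$, $\sin 3\theta = 3\cos^2\theta\sin\theta - \sin^3\theta$, $\cos 2\theta = \cos^2\theta - \sin^2\theta$, $\cos 3\theta = \cos^3\theta - 3\cos\theta\sin^2\theta$, and the worked example can all be verified with SymPy (a quick sketch, assuming SymPy is available):

```python
import sympy as sp

t = sp.symbols('theta')

# Double- and triple-angle identities as (lhs, rhs) pairs:
identities = [
    (sp.sin(2*t), 2*sp.cos(t)*sp.sin(t)),
    (sp.sin(3*t), 3*sp.cos(t)**2*sp.sin(t) - sp.sin(t)**3),
    (sp.cos(2*t), sp.cos(t)**2 - sp.sin(t)**2),
    (sp.cos(3*t), sp.cos(t)**3 - 3*sp.cos(t)*sp.sin(t)**2),
]

for lhs, rhs in identities:
    # expand_trig rewrites sin(n*t), cos(n*t) in powers of sin(t),
    # cos(t); the difference must then simplify to zero.
    assert sp.simplify(sp.expand_trig(lhs) - rhs) == 0

# Numeric spot-check of the worked example at x = 0.7:
xs = sp.symbols('x')
expr = (sp.sin(xs) + sp.sin(2*xs)) / (1 + sp.cos(xs) + sp.cos(2*xs))
assert abs(float(expr.subs(xs, 0.7) - sp.tan(0.7))) < 1e-12
print("all identities verified")
```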
https://byjus.com/questions/what-are-the-3-types-of-thermodynamic-systems/
# What are the 3 types of thermodynamic systems?

A thermodynamic system can be defined as a part of the universe upon which certain observations are made. The surroundings of a thermodynamic system are the rest of the universe (everything except the system). Therefore, the relationship between a thermodynamic system, its surroundings, and the universe can be expressed as follows:

Universe = System + Surroundings

## Types of Thermodynamic Systems

A thermodynamic system can be classified as an open system, a closed system, or an isolated system depending on its properties. The term surroundings refers to everything outside the system that has a direct impact on the system's behaviour. The three types of thermodynamic systems are:

### Open system

If the system exchanges both energy and matter with its surroundings, it is called an open system.

### Closed system

If the system can exchange only energy with its surroundings and cannot exchange matter with them, it is known as a closed system.

### Isolated system

An isolated system is a thermodynamic system that can exchange neither energy nor matter with its surroundings.
http://mathoverflow.net/questions/89240/minimal-prime-devisorminass-r
# Minimal prime divisors (MinAss R)

Hello all, is this conclusion true? If $(R,m)$ is a local ring and $\operatorname{MinAss} R = \operatorname{Ass} R$, can we conclude that $\operatorname{MinAss} \hat{R} = \operatorname{Ass} \hat{R}$? (Here $\hat{R}$ is the $m$-adic completion of $R$.)

$\operatorname{MinAss}$ means the minimal primes in $\operatorname{Ass}(R)$, so "$\operatorname{MinAss} R = \operatorname{Ass} R$" means $R$ has no embedded prime ideals. In other words: if every associated prime ideal of $R$ is minimal, is every associated prime ideal of $\hat{R}$ minimal?

What does the equality "Min Ass R = Ass R" exactly mean? – Ralph Feb 23 '12 at 3:42

It would be nice to have a definition of MinAss here... – darij grinberg Feb 23 '12 at 3:43

MinAss means minimal primes in Ass(R); "Min Ass R = Ass R" means R has no embedded prime ideals. – Mahdi Majidi-Zolbanin Feb 23 '12 at 4:04

The answer is no in general. In the paper "Fibres formelles d'un anneau local noethérien", D. Ferrand and M. Raynaud give an example of a two-dimensional local domain whose $\mathfrak{m}$-adic completion has embedded prime ideals. In the same paper, they mention that the answer is yes in certain special cases, such as when $R$ is a quotient of a Cohen-Macaulay ring, or when $R$ is universally Japanese.
http://mathhelpforum.com/calculus/69535-differentiation.html
1. ## Differentiation

A wee bit stuck. Given f(x) = 5 + 7x - x^4, find f'(x). Use your result to find f'(2). I'm not sure what to do at all.

2. There is a rule you must follow: $\frac{d}{dx} x^n = nx^{n - 1}$, where $\frac{d}{dx}$ means the derivative with respect to $x$.

Example: $f(x) = x^3 + 2x^2 + 3x + 1 \Rightarrow f'(x) = 3x^2 + 4x + 3$ (did you see how the rule was applied?)

Just plug in x = 2 once you have found $f'(x)$. Can you finish?

3. I knew the rule (it becomes 7 - 4x^3), but what I'm unclear on is what f'(x) means, and I don't know what to do to find it.

4. What?? You found it! As you found, $f'(x) = 7 - 4x^3$. Here $f'(x)$ denotes the "derivative of $f(x)$".

5. Ooooh, I had a blonde moment. So now I just plug in x = 2.
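The whole thread can be double-checked with SymPy (assuming it is available): the power rule applied to each term gives $f'(x) = 7 - 4x^3$, and plugging in $x = 2$ gives $f'(2) = 7 - 32 = -25$.

```python
import sympy as sp

x = sp.symbols('x')
f = 5 + 7*x - x**4

f_prime = sp.diff(f, x)      # power rule term by term: 7 - 4x^3
print(f_prime.subs(x, 2))    # -25, since 7 - 4*2**3 = 7 - 32
```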
https://ptreview.sublinear.info/?cat=2
# News for September 2018

Last month was slower than usual, with only one property testing paper.*

Property testing and expansion in cubical complexes, by David Garber, Uzi Vishne (arXiv). Consider the question of testing if an arbitrary function $$f\colon V\times V \to\{-1,1\}$$ is of the form $$f(x,y) = h(x)h(y)$$ for some $$h\colon V\to\{-1,1\}$$. An intuitive one-sided test, shown to work by Lubotzky and Kaufman (2014), is to pick uniformly random $$x,y,z\in V$$ and check that $$f(x,y)f(y,z)f(z,x)=1$$. This paper considers the high-dimensional generalization of testing the property that a function $$f\colon V\times V \times V\times V \to\{-1,1\}$$ is of the form $$f(w,x,y,z) = \alpha\cdot h(w,x)h(y,x)h(y,z) h(w,z)$$, for some $$h\colon V\times V\to\{-1,1\}$$ and sign $$\alpha\in\{-1,1\}$$. The authors derive necessary and sufficient conditions for testability of this property, by formulating it in the language of incidence geometry and exploiting this connection.

* If we missed any, please let us know in the comments.

# News for August 2018

Three papers this month close out Summer 2018.

Test without Trust: Optimal Locally Private Distribution Testing, by Jayadev Acharya, Clément L. Canonne, Cody Freitag, and Himanshu Tyagi (arXiv). This work studies distribution testing in the local privacy model. While private distribution testing has recently been studied, requiring that the algorithm’s output is differentially private with respect to the input dataset, local privacy has this requirement for each individual datapoint. The authors prove optimal upper and lower bounds for identity and independence testing, using a novel public-coin protocol named RAPTOR which can outperform any private-key protocol.

Testing Graph Clusterability: Algorithms and Lower Bounds, by Ashish Chiplunkar, Michael Kapralov, Sanjeev Khanna, Aida Mousavifar, and Yuval Peres (arXiv).
This paper studies the problem of testing whether a graph is $$k$$-clusterable (based on the conductance of each cluster), or if it is far from all such graphs — this is a generalization of the classical problem of testing whether a graph is an expander. It manages to solve this problem under weaker assumptions than previously considered. Technically, prior work embedded a subset of the graph into Euclidean space and clustered based on distances between vertices. This work uses richer geometric structure, including angles between the points, in order to obtain stronger results.

Near log-convexity of measured heat in (discrete) time and consequences, by Mert Saglam (ECCC). Glancing at the title, it might not be clear how this paper relates to property testing. The primary problem of study is the quantity $$m_t = uS^tv$$, where $$u, v$$ are positive unit vectors and $$S$$ is a symmetric substochastic matrix. This quantity can be viewed as a measurement of the heat measured at vector $$v$$, after letting the initial configuration of $$u$$ evolve according to $$S$$ for $$t$$ time steps. The author proves an inequality which roughly states $$m_{t+2} \geq t^{1 - \varepsilon} m_t^{1 + 2/t}$$, which can be used as a type of log-convexity statement. Surprisingly, this leads to lower bounds for the communication complexity of the $$k$$-Hamming problem, which in turn leads to optimal lower bounds for the complexity of testing $$k$$-linearity and $$k$$-juntas.

# News for July 2018

~~Three~~ Four papers for July. New sublinear graph algorithms, distribution testing under new models, and sublinear matrix algorithms. Onward ho… (Sorry Amit, for missing your paper on sublinear matrix algorithms.)

Metric Sublinear Algorithms via Linear Sampling, by Hossein Esfandiari and Michael Mitzenmacher (arXiv). Consider a weighted clique $$G = (V,E)$$ where $$V$$ is a set of points in a metric space and edge weights are metric distances.
In this setting, sublinear algorithms are those that make $$o(n^2)$$ edge queries. This paper studies problems like densest subgraph and maxcut in this setting. The key method is a sparsifying algorithm that achieves the following. (I paraphrase their language.) Consider a positive parameter $$\alpha$$, and let $$w(e)$$ denote the weight of edge $$e$$. The aim is to get a subgraph $$H$$ that contains every edge $$e$$ (in $$G$$) with independent probability $$\min(w(e)/\alpha, 1)$$. Furthermore, this subgraph should be obtained in time linear in the number of edges in $$H$$ (hence the title of the paper). For problems like 1/2-approximating the densest subgraph and PTASes for maxcut, the results show that for a carefully chosen $$\alpha$$, approximate solutions on $$H$$ give solutions of comparable quality on $$G$$. These results cleanly generalize to settings where edge weights satisfy the triangle inequality with some multiplicative penalty.

Sublinear Algorithms for ($$\Delta$$ + 1) Vertex Coloring, by Sepehr Assadi, Yu Chen, and Sanjeev Khanna (arXiv). Arguably, the first thing you learn about vertex coloring is that a graph with maximum degree $$\Delta$$ admits a $$(\Delta+1)$$-coloring that can be found in linear time. But what about sublinear time/space? I like this! You take a simple classical fact, throw in sublinear constraints, and it opens up a rich theory. This paper shows a non-adaptive $$O(n^{3/2})$$-time algorithm for this problem, and gives a nearly matching lower bound. There are also results for streaming and parallel computation, but let’s focus on the sublinear result. It is remarkable that there is no loss in colors in going to sublinear time. (In contrast, the paper shows an $$\Omega(n^2)$$ lower bound for constructing a maximal matching.) The main technical tool is a list coloring result, where each vertex is given a list of colors and must choose its own from that list. Obviously, if each list is $$[\Delta + 1]$$, such a coloring is possible.
The paper proves that even if each list is an independent $$O(\log n)$$-sized sample of $$[\Delta+1]$$, a valid coloring is still possible. The final algorithm is pretty involved, and uses this meta-algorithm as a building block.

Anaconda: A Non-Adaptive Conditional Sampling Algorithm for Distribution Testing, by Gautam Kamath and Christos Tzamos (ECCC). The standard model for distribution testing is access to samples from the unknown distribution $$\mathcal{D}$$ with support $$[n]$$. This has attracted much attention with a rich set of results, and the complexity of the classic problems of uniformity, identity, and equivalence is well understood. But there are alternate models, such as the model of conditional samples (Chakraborty-Fischer-Goldhirsh-Matsliah ’13 and Canonne-Ron-Servedio ’14). For any subset $$S \subseteq [n]$$, we can get a random sample from $$\mathcal{D}$$ restricted to $$S$$. This adds an algorithmic dimension to distribution testing. This paper studies the power of non-adaptive conditional (NACOND) queries. The main result is that uniformity, identity, and equivalence are testable with $$\mathrm{poly}(\log n)$$ queries. (There are existing $$\Omega(\log n)$$ lower bounds for all these problems.) The heart of these algorithms is a procedure ANACONDA that tries to find a set $$S$$ where some element has a high probability, relative to the mass of $$S$$.

Sublinear-Time Quadratic Minimization via Spectral Decomposition of Matrices, by Amit Levi and Yuichi Yoshida (arXiv). When it comes to fundamental problems, it’s hard to beat quadratic minimization. Given a matrix $$A \in \mathbb{R}^{n\times n}$$, we wish to find $$v \in \mathbb{R}^n$$ that minimizes $$v^TAv$$. (This is basically a singular value/vector problem.) One may have additional terms in the objective, depending on $$v^Tv$$ or $$v^Tb$$ (for fixed vector $$b$$). This paper gives sublinear algorithms for this problem.
A natural approach is to simply subsample $$k$$ rows and columns to get a submatrix $$B$$, solve the problem for $$B$$, and hope for the best. This idea has a rich history from the seminal work of Frieze-Kannan. Recently, Hayashi-Yoshida showed that constant $$k$$ (only depending on error parameters) suffices for getting a non-trivial approximation for this problem. Unfortunately, the solution error depends on the $$\ell_\infty$$-norm of the solution. This paper shows that for polylogarithmic $$k$$, one can get an error depending on the $$\ell_2$$-norm of the solution. This is a significant improvement, especially for sparse solution vectors. The main technical workhorse is a new matrix decomposition theorem, which shows how any matrix can be written as a sum of a few block matrices and a low-norm “error” matrix. Admirably, the paper shows a number of experiments, demonstrating the effectiveness of this technique for eigenvalue computations. It’s very nice to see how ideas from sublinear algorithms might have a practical impact.

# News for June 2018

The summer gets off to a flying start, with three property testing papers, spanning differential privacy, distribution testing, and juntas in Gaussian space!

On closeness to $$k$$-wise uniformity, by Ryan O’Donnell and Yu Zhao (arXiv). In this paper, the authors consider the following structural question about probability distributions over the Boolean hypercube $$\{-1,1\}^n$$: “what is the relation between the total variation distance $$\delta$$ to $$k$$-wise independence, and a bound $$\varepsilon$$ on the Fourier coefficients of the distribution on degrees up to $$k$$?” While this question might seem a bit esoteric at first glance, it has direct and natural applications to derandomization, and of course to distribution testing (namely, to test $$k$$-wise independence and its generalization, $$(\varepsilon, k)$$-wise independence of distributions over the hypercube).
The main contribution here is to improve (by a $$(\log n)^{O(k)}$$ factor) the bounds on $$\delta(n,k,\varepsilon)$$ over the previous work by Alon et al. [AAK+07], making them either tight (for $$k$$ even) or near-tight. To do so, the authors introduce a new hammer to the game, using linear programming duality in the proof of both their upper and lower bounds.

Property Testing for Differential Privacy, by Anna Gilbert and Audra McMillan (arXiv). Differential privacy, as introduced by Dwork et al., needs no introduction. Property testing, especially on this website, needs even less. What about a combination of the two? Namely, given black-box access to an algorithm claiming to perform a differentially private computation, how to test whether this is indeed the case? Introducing and considering this quite natural question for the first time, this work shows, roughly speaking, that testing differential privacy is hard. Specifically, they show that for many notions of differential privacy (pure, approximate, and their distributional counterparts), testing is either impossible or possible but not with a sublinear number of queries (even when the tester is provided with side information about the black-box). In other terms, as the authors put it: trusting the privacy of an algorithm “requires compromise by either the verifier or algorithm owner” (and, in the latter case, even then it’s not a simple matter).

Is your data low-dimensional?, by Anindya De, Elchanan Mossel, and Joe Neeman (arXiv). (Well, is it?) To state it upfront, I am biased here, as it is a problem I was very eager to see investigated to begin with.
To recap, the question is as follows: “given query access to some unknown Boolean-valued function $$f\colon \mathbb{R}^n \to \{-1,1\}$$ over the high-dimensional space $$\mathbb{R}^n$$ endowed with the Gaussian measure, how can one check whether $$f$$ only depends on “few” (i.e., $$k \ll n$$) variables?” This is the continuous, Gaussian version of the (quite famous) junta testing problem, which has gathered significant attention over the past years (the Gaussian version has, to the best of my knowledge, never been investigated).

Now, the above formulation has a major flaw: specifically, it is uninteresting. In Gaussian space*, who cares about the particular basis I expressed my input vector in? So a more relevant question, and the one that the authors tackle, is the more robust and natural one: “given query access to some unknown Boolean-valued function $$f\colon \mathbb{R}^n \to \{-1,1\}$$ over the high-dimensional space $$\mathbb{R}^n$$ endowed with the Gaussian measure, how can one check whether $$f$$ only depends on a low-dimensional linear combination of the variables?” Or, put differently, does all the relevant information for $$f$$ live in a low-dimensional subspace?

De, Mossel, and Neeman show how one can do this, non-adaptively, with a query complexity independent of the dimension $$n$$ (hurray!), but instead polynomial in $$k$$, the distance parameter $$\varepsilon$$, and the surface area $$s$$ of $$f$$. And since this last parameter may seem quite arbitrary, they also proceed to show that a polynomial dependence on $$s$$ is indeed required.

*”In Gaussian space, no one can hear you change basis?”

# News for May 2018

Six papers for May, including new models, hierarchy theorems, separation results, resolution of conjectures, and a lot more fun stuff. A lot of things to read this month!

Lower Bounds for Tolerant Junta and Unateness Testing via Rejection Sampling of Graphs, by Amit Levi and Erik Waingarten (ECCC).
This paper proves a number of new lower bounds for tolerant testing of Boolean functions, including non-adaptive $$k$$-junta testing and adaptive and non-adaptive unateness testing. Combined with upper bounds for these and related problems, these results establish a separation between the complexity of tolerant and non-tolerant testing for natural properties of Boolean functions, which has so far been elusive. As a technical tool, the authors introduce a new model for testing graph properties, termed the rejection sampling model. In this model, the algorithm queries a subset $$L$$ of the vertices, and the oracle will sample an edge uniformly at random and output the intersection of the edge endpoints with the query set $$L$$. The cost of an algorithm is measured as the sum of the query sizes. In order to prove the above lower bounds (in the standard model), they show a non-adaptive lower bound for testing bipartiteness (in their new model).

Hierarchy Theorems for Testing Properties in Size-Oblivious Query Complexity, by Oded Goldreich (ECCC). This work proves a hierarchy theorem for properties whose query complexity is independent of the size of the object, and depends only on the proximity parameter $$\varepsilon$$. Roughly, for essentially every function $$q : (0,1] \rightarrow \mathbb{N}$$, there exists a property for which the query complexity is $$\Theta(q(\varepsilon))$$. Such results are proven for Boolean functions, dense graphs, and bounded-degree graphs. This complements hierarchy theorems by Goldreich, Krivelevich, Newman, and Rozenberg, which give a hierarchy which depends on the object size.

Finding forbidden minors in sublinear time: a $$O(n^{1/2+o(1)})$$-query one-sided tester for minor closed properties on bounded degree graphs, by Akash Kumar, C. Seshadhri, and Andrew Stolman (ECCC). At the core of this paper is a sublinear algorithm for the following problem: given a graph which is $$\varepsilon$$-far from being $$H$$-minor free, find an $$H$$-minor in the graph.
The authors provide a (roughly) $$O(\sqrt{n})$$ time algorithm for such a task. As a concrete example, given a graph which is far from being planar, one can efficiently find an instance of a $$K_{3,3}$$ or $$K_5$$ minor. Using the graph minor theorem, this implies analogous results for any minor-closed property, nearly resolving a conjecture of Benjamini, Schramm and Shapira. Learning and Testing Causal Models with Interventions, by Jayadev Acharya, Arnab Bhattacharyya, Constantinos Daskalakis, and Saravanan Kandasamy (arXiv). This paper considers the problem of learning and testing on causal Bayesian networks. Bayesian networks are a type of graphical model defined on a DAG, where each node has a distribution defined based on the value of its parents. A causal Bayesian network further allows “interventions,” where one may set nodes to have certain values. This paper gives efficient algorithms for learning and testing the distribution of these models, with $$O(\log n)$$ interventions and $$\tilde O(n/\varepsilon^2)$$ samples per intervention. Property Testing of Planarity in the CONGEST model, by Reut Levi, Moti Medina, and Dana Ron (arXiv). It is known that, in the CONGEST model of distributed computation, deciding whether a graph is planar requires a linear number of rounds. This paper considers the natural property testing relaxation, where we wish to determine whether a graph is planar, or $$\varepsilon$$-far from being planar. The authors show that this relaxation allows one to bypass this linear lower bound, obtaining a $$O(\log n \cdot \mathrm{poly}(1/\varepsilon))$$ algorithm, complemented by an $$\Omega(\log n)$$ lower bound. Flexible models for testing graph properties, by Oded Goldreich (ECCC). Usually when testing graph properties, we assume that the vertex set is $$[n]$$, implying that we can randomly sample nodes from the graph. However, this assumes that the tester knows the value of $$n$$, the number of nodes.
This note suggests more “flexible” models, in which the number of nodes may be unknown, and we are only given random sampling access. While possible definitions are suggested, this note contains few results, leaving the area ripe for investigation of the power of these models. # News for April 2018 ~~Three~~ Four papers for April: a new take on linearity testing, results on sublinear algorithms with advice, histogram testing, and distributed inference problems. (Edit: Sorry Clément, for missing your paper on distributed inference!) Testing Linearity against Non-Signaling Strategies, by Alessandro Chiesa, Peter Manohar, and Igor Shinkar (ECCC). This paper gives a new model for property testing, through the notion of non-signaling strategies. The exact definitions are quite subtle, but here’s a condensed view. For $$S \subseteq \{0,1\}^n$$, let an $$S$$-partial function be one that is only defined on $$S$$. Fix a “consistency” parameter $$k$$. Think of the “input” as a collection of distributions, $$\{\mathcal{F}_S\}$$, where each $$|S| \leq k$$ and $$\mathcal{F}_S$$ is a distribution of $$S$$-partial functions. We have a local consistency requirement: $$\{\mathcal{F}_S\}$$ and $$\{\mathcal{F}_T\}$$ must agree (as distributions) on restrictions to $$S \cap T$$. In some sense, if we only view pairs of these distributions of partial functions, it appears as if they come from a single distribution of total functions. Let us focus on the classic linearity tester of Blum-Luby-Rubinfeld in this setting. We pick random $$x, y, x+y \in \{0,1\}^n$$ as before, and query these values on a function $$f \sim \mathcal{F}_{\{x,y,x+y\}}$$. The main question addressed is what one can say about $$\{\mathcal{F}_S\}$$ if this linearity test passes with high probability. Intuitively (though technically imprecisely), the main result is that $$\{\mathcal{F}_S\}$$ is approximated by a “quasi-distribution” of linear functions.
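For reference, the classical (single-function) BLR test is easy to state in code. The sketch below is a minimal Python illustration of the standard test, not of the non-signaling variant studied in the paper; the function names and the bitmask representation are my own.

```python
import random

def blr_linearity_test(f, n, trials=200, rng=None):
    """Classical BLR test for f: {0,1}^n -> {0,1} (inputs as n-bit ints).
    Accepts iff f(x) ^ f(y) == f(x ^ y) on every sampled pair; a linear
    (parity) function always passes, while a function far from linear is
    caught with probability growing with its distance from linear."""
    rng = rng or random.Random()
    for _ in range(trials):
        x = rng.getrandbits(n)
        y = rng.getrandbits(n)
        if f(x) ^ f(y) != f(x ^ y):
            return False  # a violated triple certifies non-linearity
    return True

def parity_02(x):
    """A linear function: parity of bits 0 and 2."""
    return bin(x & 0b101).count("1") % 2

def bit_and(x):
    """AND of bits 0 and 1, which is 1/4-far from every linear function."""
    return (x & 1) & ((x >> 1) & 1)
```

Each round catches the AND example with probability 3/8 (a direct enumeration over its four relevant input bits), so a few hundred rounds reject it essentially surely.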
An Exponential Separation Between MA and AM Proofs of Proximity, by Tom Gur, Yang P. Liu, and Ron D. Rothblum (ECCC). This result follows a line of work on understanding sublinear algorithms in proof systems. Think of the standard property testing setting. There is a property $$\mathcal{P}$$ of $$n$$-bit strings, an input $$x \in \{0,1\}^n$$, and a proximity parameter $$\epsilon > 0$$. We add a proof $$\Pi$$ that the tester (or the verifier) is allowed to use, and we define soundness and completeness in the usual sense of Arthur-Merlin protocols. For an $$\mathbb{MA}$$-proof of proximity, the proof $$\Pi$$ can only depend on the string $$x$$. In an $$\mathbb{AM}$$-proof of proximity, the proof can additionally depend on the random coins of the tester (which determine the indices of $$x$$ queried). Classic complexity results can be used to show that the latter subsumes the former, and this paper gives a strong separation. Namely, there is a property $$\mathcal{P}$$ for which any $$\mathbb{MA}$$-proof of proximity protocol (or tester) requires $$\Omega(n^{1/4})$$ queries of the input $$x$$, but there exists an $$\mathbb{AM}$$-proof of proximity protocol making $$O(\log n)$$ queries. Moreover, this property is quite natural; it is simply the encoding of permutations. Testing Identity of Multidimensional Histograms, by Ilias Diakonikolas, Daniel M. Kane, and John Peebles (arXiv). A distribution over $$[0,1]^d$$ is a $$k$$-histogram if the domain can be partitioned into $$k$$ axis-aligned cuboids where the probability density function is constant. Recent results show that such histograms can be learned with $$k \log^{O(d)}k$$ samples (ignoring dependencies on accuracy/error parameters). Can we do any better for identity testing? This paper gives an affirmative answer. Given a known $$k$$-histogram $$p$$, one can test (in $$\ell_1$$-distance) whether an unknown $$k$$-histogram $$q$$ is equal to $$p$$ with (essentially) $$\sqrt{k} \log^{O(d)}(dk)$$ samples.
There is a nearly matching lower bound when $$k = \exp(d)$$. Distributed Simulation and Distributed Inference, by Jayadev Acharya, Clément L. Canonne, and Himanshu Tyagi (arXiv, ECCC). This paper introduces a model of distributed simulation, which generalizes distribution testing and distributed density estimation. Consider some unknown distribution $$\mathcal{D}$$ with support $$[k]$$, and a “referee” who wishes to generate a single sample from $$\mathcal{D}$$ (alternately, she may wish to determine if $$\mathcal{D}$$ has some desired property). The referee can communicate with “players”, each of whom can generate a single independent sample from $$\mathcal{D}$$. The catch is that each player can communicate at most $$\ell < \log_2 k$$ bits (otherwise, the player can simply communicate the sampled element). How many players are needed for the referee to generate a single sample? The paper first proves that this task is basically impossible with a (worst-case) finite number of players, but can be done with expected $$O(k/2^\ell)$$ players (and this is optimal). This can be plugged into standard distribution testing results, to get inference results in this distributed, low-communication setting. For example, the paper shows that identity testing can be done with $$O(k^{3/2}/2^\ell)$$ players. # News for March 2018 March has been a relatively slow month for property testing, with 3 works appearing online. (If you notice we missed some, please leave a comment pointing it out.) Edge correlations in random regular hypergraphs and applications to subgraph testing, by Alberto Espuny Díaz, Felix Joos, Daniela Kühn, and Deryk Osthus (arXiv). While testing subgraph-freeness in the dense graph model is now well-understood, after a series of works culminating in a complete characterization of the testing problems which admit constant-query testers, the corresponding question for hypergraphs is far from resolved.
In this paper, the authors develop new techniques for the study of random regular hypergraphs, which imply new results for subhypergraph-freeness testing, improving on the state-of-the-art in some parameter regimes (e.g., when the input graph satisfies some average-degree condition). Back from hypergraphs to graphs, we also have: The Subgraph Testing Model, by Oded Goldreich and Dana Ron (ECCC). Here, the authors introduce a new model for property testing of graphs, where the goal is to test whether an unknown subgraph $$F$$ of an explicitly given graph $$G=(V,E)$$ satisfies the desired property. The testing algorithm is provided access to $$F$$ via membership queries, i.e., through query access to the indicator function $$\mathbf{1}_F\colon E \to \{0,1\}$$. (In some very hazy sense, this is reminiscent of the active learning or testing frameworks, where one gets more or less free access to unlabeled data but pays to see their labels.) As a sample of the results obtained, the paper establishes that this new model and the bounded-degree graph model are incomparable: there exist properties easier to test in one model than the other, and vice-versa — and some properties equally easy to test in both. And finally, to drive home the point that “models matter a lot,” we have our third paper: Every set in P is strongly testable under a suitable encoding, by Irit Dinur, Oded Goldreich, and Tom Gur (ECCC). It is known that the choice of representation of the objects has a large impact in property testing: for instance, the complexity of testing a given property can change drastically between the dense and bounded-degree graph models.
This work provides another example of such a strong dependence on the representation: while membership in some sets in $$P$$ is known to be hard to test, the authors here prove that, for every set $$S\in P$$, there exists a (polynomial-time, invertible) encoding $$E_S\colon \{0,1\}^\ast\to \{0,1\}^\ast$$ such that testing membership in $$S$$ under this encoding is easy. (They actually show an even stronger statement: namely, that under this encoding the set admits a “proximity-oblivious tester,” that is, a constant-query testing algorithm whose rejection probability is a function of the distance to $$S$$.) Finally, on a non-property testing note: Edith Cohen, Vitaly Feldman, Omer Reingold, and Ronitt Rubinfeld recently wrote a pledge for inclusiveness in the TCS community, available here: https://www.gopetition.com/petitions/a-pledge-for-inclusiveness-in-toc.html If you haven’t seen it already, we encourage you to read it. Update: Fixed a mistake in the overview of the second paper; as pointed out by Oded in the comments, the main comparison was between the new model and the bounded-degree graph model, not the dense graph one. # News for February 2018 February had a flurry of conference deadlines, and they seem to have produced six papers for us to enjoy, including three on estimating symmetric properties of distributions. Locally Private Hypothesis Testing, by Or Sheffet (arXiv). We now have a very mature understanding of the sample complexity of distributional identity testing — given samples from a distribution $$p$$, is it equal to, or far from, some model hypothesis $$q$$? Recently, several papers have studied this problem under the additional constraint of differential privacy. This paper strengthens the privacy constraint to local privacy, where each sample is locally noised before being provided to the testing algorithm. Distribution-free Junta Testing, by Xi Chen, Zhengyang Liu, Rocco A. Servedio, Ying Sheng, and Jinyu Xie (arXiv).
Testing whether a function is a $$k$$-junta is very well understood — when done with respect to the uniform distribution. In particular, the adaptive complexity of this problem is $$\tilde \Theta(k)$$, while the non-adaptive complexity is $$\tilde \Theta(k^{3/2})$$. This paper studies the more challenging task of distribution-free testing, where the distance between functions is measured with respect to some unknown distribution. The authors show that, while the adaptive complexity of this problem is still polynomial (at $$\tilde O(k^2)$$), the non-adaptive complexity becomes exponential: $$2^{\Omega(k/3)}$$. In other words, there’s a qualitative gap between the adaptive and non-adaptive complexity, which does not appear when testing with respect to the uniform distribution. The Vertex Sample Complexity of Free Energy is Polynomial, by Vishesh Jain, Frederic Koehler, and Elchanan Mossel (arXiv). This paper studies the classic question of estimating (the logarithm of) the partition function of a Markov Random Field, a highly-studied topic in theoretical computer science and statistical physics. As the title suggests, the authors show that the vertex sample complexity of this quantity is polynomial. In other words, randomly subsampling a $$\mathrm{poly}(1/\varepsilon)$$-size graph and computing its free energy gives a good approximation to the free energy of the overall graph. This is in contrast to more general graph properties, for which the vertex sample complexity is super-exponential in $$1/\varepsilon$$. Entropy Rate Estimation for Markov Chains with Large State Space, by Yanjun Han, Jiantao Jiao, Chuan-Zheng Lee, Tsachy Weissman, Yihong Wu, and Tiancheng Yu (arXiv). Entropy estimation is now quite well-understood when one observes independent samples from a discrete distribution — we can get by with a barely-sublinear sample complexity, saving a logarithmic factor compared to the support size.
This paper shows that these savings can also be enjoyed in the case where we observe a sample path of observations from a Markov chain. Local moment matching: A unified methodology for symmetric functional estimation and distribution estimation under Wasserstein distance, by Yanjun Han, Jiantao Jiao, and Tsachy Weissman (arXiv). Speaking more generally of the above problem: there has been significant work on estimating symmetric properties of distributions, i.e., those which do not change when the distribution is permuted. One natural method for estimating such properties is to estimate the sorted distribution, then apply the plug-in estimator for the quantity of interest. The authors give an improved estimator for the sorted distribution, improving on the results of Valiant and Valiant. INSPECTRE: Privately Estimating the Unseen, by Jayadev Acharya, Gautam Kamath, Ziteng Sun, and Huanyu Zhang (arXiv). One final work in this area — this paper studies the estimation of symmetric distribution properties (including entropy, support size, and support coverage), but this time while maintaining differential privacy of the sample. By using estimators for these tasks with low sensitivity, one can additionally obtain privacy at little or no additional cost over the non-private sample complexity. # News for January 2018 And now, for the first papers of 2018! It’s a slow start with only four papers (or, technically, three “standard property testing” papers and one non-standard paper). Adaptive Boolean Monotonicity Testing in Total Influence Time, by Deeparnab Chakrabarty and C. Seshadhri (arXiv ECCC). The problem of testing monotonicity of Boolean functions $$f:\{0,1\}^n \to \{0,1\}$$ has seen a lot of progress recently. After the breakthrough results of Khot-Minzer-Safra giving a $$\widetilde{O}(\sqrt{n})$$ non-adaptive tester, Blais-Belovs proved the first polynomial lower bound for adaptive testers, recently improved to $$\widetilde{\Omega}(n^{1/3})$$ by Chen, Waingarten, and Xi.
The burning question: does adaptivity help? This result gives an adaptive tester that runs in $$O(\mathbf{I}(f))$$ time, where $$\mathbf{I}(f)$$ is the total influence of $$f$$. Thus, we can beat these lower bounds (and the non-adaptive complexity) for low-influence functions. Adaptive Lower Bound for Testing Monotonicity on the Line, by Aleksandrs Belovs (arXiv). More monotonicity testing! But this time on functions $$f:[n] \to [r]$$. Classic results on property testing show that monotonicity can be tested in $$O(\varepsilon^{-1}\log n)$$ time. A recent extension of these ideas by Pallavoor-Raskhodnikova-Varma replaces the $$\log n$$ with $$\log r$$, an improvement for small ranges. This paper proves an almost matching lower bound of $$\Omega((\log r)/(\log\log r))$$. The main construction can be used to give a substantially simpler proof of an $$\Omega(d\log n)$$ lower bound for monotonicity testing on hypergrids $$f:[n]^d \to \mathbb{N}$$. The primary contribution is giving explicit lower bound constructions and avoiding the Ramsey-theoretic arguments previously used for monotonicity lower bounds. Earthmover Resilience and Testing in Ordered Structures, by Omri Ben-Eliezer and Eldar Fischer (arXiv). While there has been much progress on understanding the constant-time testability of graphs, the picture is not so clear for ordered structures (such as strings/matrices). There are a number of roadblocks (unlike the graph setting): there are no canonical testers for, say, string properties, there are testable properties that are not tolerantly testable, and Szemerédi-type regular partitions may not exist for such properties. The main contribution of this paper is to find a natural, useful condition on ordered properties such that the above roadblocks disappear, and thus we have strong testability results. The paper introduces the notion of Earthmover Resilient (ER) properties.
Basically, a graph property is a property of symmetric matrices that is invariant under permutation of base elements (rows/columns). An ER property is one that is invariant under mild perturbations of the base elements. The natural special cases of ER properties are those over strings and matrices, and the class includes all graph properties as well as the image properties studied in this context. There are a number of characterization results. Most interestingly, for ER properties of images (binary matrices) and edge-colored ordered graphs, the following are equivalent: existence of canonical testers, tolerant testability, and regular reducibility. Nondeterministic Sublinear Time Has Measure 0 in P, by John Hitchcock and Adewale Sekoni (arXiv). Not your usual property testing paper, but on sublinear (non-deterministic) time nonetheless. Consider the complexity class $$NTIME(n^\delta)$$, for $$\delta < 1$$. This paper shows that this complexity class is a "negligible" fraction of $$P$$. (The analogous result was known for $$\delta < 1/11$$ by Cai-Sivakumar-Strauss.) This requires a technical concept of measure for languages and complexity classes. While I don’t claim to understand the details, the math boils down to understanding the following process. Consider some language $$\mathcal{L}$$ and a martingale betting process that repeatedly tries to guess the membership of strings $$x_1, x_2, \ldots$$ in a well-defined order. If one can define such a betting process, with a limited computational resource, that has unbounded gains, then $$\mathcal{L}$$ has measure 0 with respect to that (limited) resource. # News for December 2017 December 2017 concluded the year in style, with seven property testing papers spanning quite the range. Let’s hope 2018 will keep up the trend! (And, of course, if we missed any, please point it out in the comments. The blame would be on our past-year selves.)
We begin with graphs: High Dimensional Expanders, by Alexander Lubotzky (arXiv). This paper surveys the recent developments in studying high-dimensional expander graphs, a recent generalization of expanders which has become quite active in the past years and has intimate connections to property testing. Generalized Turán problems for even cycles, by Dániel Gerbner, Ervin Győri, Abhishek Methuku, and Máté Vizer (arXiv). A Generalized Turán Problem and its Applications, by Lior Gishboliner and Asaf Shapira (arXiv). In these two independent works, the authors study questions of the following flavor: given two subgraphs (patterns) $$H,H'$$, what is the maximum number of copies of $$H$$ which can exist in a graph $$G$$ promised to be $$H'$$-free? They consider the case where the said patterns are cycles on $$\ell,k$$ vertices respectively, and obtain asymptotic bounds on the above quantity (the two papers obtain somewhat incomparable bounds, and the first focuses on the case where both $$\ell,k$$ are even). These estimates, in turn, have applications to graph removal lemmata, as discussed in the second work (Section 1.2): specifically, they imply the existence of a removal lemma with a tight super-polynomial bound, a question which was previously open. Approximating the Spectrum of a Graph, by David Cohen-Steiner, Weihao Kong, Christian Sohler, and Gregory Valiant (arXiv). The authors obtain a constant-time, constant-query algorithm for the task of approximating (in $$\ell_1$$ norm) the spectrum of a graph $$G$$, i.e. the eigenvalues of its Laplacian, given random query access to the nodes of $$G$$ and to the neighbors of any given node. They also study the applications of this result to property testing in the bounded-degree model, showing that a large class of spectral properties of high-girth graphs is testable.
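To get a feel for why local queries carry spectral information: for a $$d$$-regular graph, the probability that a $$k$$-step random walk returns to a uniformly random start node is exactly the $$k$$-th moment $$\frac{1}{n}\sum_i (\lambda_i/d)^k$$ of the normalized adjacency spectrum. The Python sketch below illustrates this moment connection only; it is not the paper's actual algorithm, and the function names are mine.

```python
import random

def spectral_moment_estimate(neighbors, k, samples=20000, rng=None):
    """Estimate (1/n) * tr((A/d)^k) for a regular graph, given only its
    adjacency lists: this moment equals the chance that a k-step random
    walk from a uniformly random node ends back where it began."""
    rng = rng or random.Random()
    n = len(neighbors)
    hits = 0
    for _ in range(samples):
        start = rng.randrange(n)
        v = start
        for _ in range(k):
            v = rng.choice(neighbors[v])
        hits += (v == start)
    return hits / samples

# On the n-cycle the exact 2-step return probability is 1/2
# (the walk must reverse its first step).
n = 60
cycle = [[(i - 1) % n, (i + 1) % n] for i in range(n)]
```

With 20000 walks the empirical estimate on the cycle concentrates tightly around 1/2.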
Then, we go quantum: Schur-Weyl Duality for the Clifford Group with Applications: Property Testing, a Robust Hudson Theorem, and de Finetti Representations, by David Gross, Sepehr Nezami, and Michael Walter (arXiv). Introducing and studying a duality theory for the Clifford group, the authors are able (among other results) to resolve an open question in quantum property testing, establishing a constant-query tester (indeed, making 6 queries) for testing whether an unknown quantum state is a stabilizer state. The previous best upper bound was linear in the number of qubits, as it proceeded by learning the state (“testing by learning”). Quantum Lower Bound for a Tripartite Version of the Hidden Shift Problem, by Aleksandrs Belovs (arXiv). This work introduces and studies a generalization of (both) the hidden shift and 3-sum problems, and shows an $$\Omega(n^{1/3})$$ lower bound on its quantum query complexity. The author also considers a property testing version of the problem, for which he proves a similar lower bound—interestingly, this polynomial lower bound is shown using the adversary method, evading the “property testing barrier” which states that (a restricted version of) this method cannot yield better than a constant-query lower bound. And to conclude, a distribution testing paper: Approximate Profile Maximum Likelihood, by Dmitri S. Pavlichin, Jiantao Jiao, and Tsachy Weissman (arXiv). This paper proposes an efficient (linear-time) algorithm to approximate the profile maximum likelihood of a sequence of i.i.d. samples from an unknown distribution, i.e. the probability distribution which, ignoring the labels of the samples and keeping only the collision counts, maximizes the likelihood of the sequence observed. This provides a candidate solution to an open problem suggested by Orlitsky in a FOCS’17 workshop (see also Open problem 84), and one which would have direct implications for tolerant testing and estimation of symmetric properties of distributions.
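The “profile” mentioned above is cheap to compute from a sample: discard the symbol labels and record, for each multiplicity, how many distinct symbols occurred that many times. A small Python sketch (the function name is mine):

```python
from collections import Counter

def sample_profile(sample):
    """Return {multiplicity: number of distinct symbols seen that many
    times}.  Any label-invariant (symmetric) quantity -- entropy, support
    size, the PML distribution -- depends on the sample only through
    this profile."""
    multiplicities = Counter(sample)               # symbol -> count
    return dict(Counter(multiplicities.values()))  # count -> #symbols

# In "abcab", 'a' and 'b' each appear twice and 'c' once,
# so the profile is {2: 2, 1: 1}.
```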
# Homework Help: Trying to figure out how to solve a formula for the energy of light

1. Jan 23, 2010

### kmnghoover

1. The problem statement, all variables and given/known data

Find the energy of light with a frequency of 4.3 × 10^14 s^-1.

2. Relevant equations

3. The attempt at a solution

2. Jan 23, 2010

### rock.freak667

I think you should know E = hf
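Working that hint through numerically (a quick sketch, with rounded constants):

```python
h = 6.626e-34             # Planck's constant, J*s
f = 4.3e14                # given frequency, s^-1 (i.e. Hz)

E = h * f                 # E = hf, in joules
E_eV = E / 1.602e-19      # same energy in electron-volts

print(E)                  # about 2.85e-19 J
print(E_eV)               # about 1.8 eV (red visible light)
```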
Jan 30

# Atoms On Dark Matter Threads?

### B.U. Rodionov *

“Remote View” is a technology, philosophy and commentary newsletter and podcast by Bob Greenyer, where he ‘Looks back to the future through insight and critical fiction’.

NOTE: This is a machine-assisted translation of a Russian paper by B.U. Rodionov *, prepared by myself, Bob W. Greenyer, in the week beginning 23rd January 2023. Although every care has been taken, including making it readable, there may be errors in translation; if anything critical is identified, please inform me so that I may edit accordingly. Because I have observed and published physical evidence in line with several of the theoretical predictions made by Rodionov, including publishing matching conclusions, I have added some illustrations and in-line links to empirical evidence that were not part of the original paper. More examples will be added in time.

*Moscow Engineering and Physics Institute (State University) 115409, Moscow, Kashirskoye Shosse 31, Department 7, tel. 323-90-39

## 1. Basic form of matter

As we know, the Universe is filled with mysterious "dark matter", which astrophysicists estimate makes up at least 90% of the universe's total mass [1]. Consequently, it is dark matter (and not the atoms and molecules we know about) that is the basic form of matter in our Universe. Let us imagine dark matter as consisting not of "clouds" of hypothetical "point" particles unknown to science (the whole "zoo" of which can be found in [1]), but of well-known particles existing in nature as compactly bound long (including macroscopic, "infinite") threads of almost nuclear diameter. Such hypothetical threads we call fluxes (from the Latin fluo, to flow, hence "fluid" and "flux"). What might be the structure of such cosmic, geophysical or planetary threads claiming to be the dark matter of the universe? Let us consider the simplest version of the thread: a "cylindrical atom".
A typical "cylindrical atom", "clad" by ordinary atomic nuclei interacting with it (depicted as black circles), is shown in Fig. 1, reproduced from our work [2]. The properties of fluxes as "cylindrical atoms" of quarks and electrons can be calculated by the usual formulas of physics, in the same way as the properties of ordinary spherical atoms are calculated [3]. The results of the calculations are given in Table 1 [2].

#### Table 1. Properties of fluxes

• Flux length unlimited
• Weight per metre of thread ~ 1 ng/m
• Electron shell diameter ~ 60 fm
• Filament fracture energy ~ 5 GeV
• Electron binding energy ~ 5 MeV
• Thread breaking force ~ 10 tonnes
• Quark vortex diameter ~ 10 fm
• Information density ~ 10^14 bits/m
• Vortex electrical charge ≤ 0.5 C/km
• Nuclear activity
• Magnetic induction in vortex ~ 3·10^13 Tesla
• Superconductivity
• Magnetic field energy ~ 30 kJ/m
• Waveguide, including for photons

## 2. Commentary on the properties of fluxes

### Invisibility

The electric charge of the electron shell of a flux compensates the charge of its quark nucleus, and the magnetic fluxes are usually closed (see below). All this, plus the small (almost nuclear) diameter of fluxes, explains their "invisibility" and almost free passage through ordinary atomic-molecular solids and liquids. Therefore, fluxes are still undiscovered, or rather "hidden" from us.

### Fluxons and fluons

Like DNA molecules, fluxes can exist in the form of open or closed filaments (rings). The ends of an unclosed quark filament can be considered as its magnetic poles, with equivalent magnetic charges that are multiples of the Dirac monopole charge (or half of it [3]). The magnetic flux of a quark vortex can be short-circuited by the oppositely directed magnetic flux of its electron shell. Such a flux with two ends, but with the magnetic flux "hidden" inside it, we call a fluxon. Another way to reliably hide a flux's magnetic flux is to coil it into a ring by closing its opposite magnetic poles.
We call a flux ring a fluon. Diverse in properties (composition and mass), fluons may be what we call composite elementary particles (hadrons).

### Long-range action

Representing threads of dark matter as classical objects localized in space, like thin fibres of ordinary cotton wool, is somewhat naive. It follows directly from quantum mechanics that the localisation, the "pointness", of quarks and electrons is conditional: these particles (like any others) exist as "clouds" blurred (over the whole Universe!) with a characteristic "point" size of the order of the de Broglie wavelength λ = h/p, where h is Planck's constant and p is the particle's momentum. Any localisation of particles alongside the flux, on the flux or inside the flux "at a point", as well as of the flux itself "on a geometric line", is prevented by the well-known Heisenberg uncertainty relation, whereby a small uncertainty Δp in the momentum projections of any object on the coordinate axes necessitates a large uncertainty in the coordinates of the object, of order h/Δp. Therefore, all flux particles, quarks and electrons, are "smeared" along the flux (usually non-uniformly), and the flux itself is "smeared" in volume. A "smeared" single flux in a dense substance may simultaneously interact with a large number of atoms of this substance (and, in principle, even with all its atoms). That is, fluxes are capable of long-range action. For example, an excited atom can transfer its energy resonantly, without emitting a photon, to another atom located at a macroscopic distance from the first one (say, a kilometre or a kiloparsec). But only in the zone of the cylindrical flux cloud that binds them.

### Superluminal velocities

The speed of light in models of a fibrous ether (of the Bernoulli sponge type) is the speed of propagation of transverse mechanical vibrations along filaments (etheric vortices). In our case fluxes play the role of such "light guides" (see Fig. 1).
And from quantum-mechanical long-range action there immediately follows the possibility of realising superluminal speeds (even infinite ones), which has long been predicted by theorists and in recent years has been confirmed in direct experiments with photons [4]. Instantaneous transmission of signals, energy and information by fluxes makes all parts of the universe rigidly interconnected: something cannot happen in one part of the universe without changing the state of all other parts of the Universe. The electrons in a piece of metal, for example in the wire of a telegraph line between Moscow and St. Petersburg, are just as tightly interconnected. Each electron of the countless conduction electrons of this line instantly "knows" the state of all the others; otherwise the exclusion principle for equal states of fermions (the Pauli principle) would not hold. The electromagnetic signal that carries information for us (at least, the signals we know how to use so far), however, travels along the telegraph line at a speed somewhat below the speed of light.

### Localization barrier

Due to the Heisenberg relations, an atomic nucleus of mass M can localise in the electron shell of a flux (diameter D = 60 fm) if the kinetic energy of the nucleus

$$T \geq \frac{2}{M}\left(\frac{h}{D}\right)^2.$$

For many nuclei, T is of the order of 10 keV. To overcome this "localisation barrier" (which is a thousand times lower than the Coulomb repulsion barrier of nuclei that prevents nuclear fusion reactions from taking place), relatively little energy needs to be expended. The energy necessary to localise atomic nuclei on fluxes can be transferred to atoms during electric discharges or liquid cavitation, when crushing materials, or during their irradiation by light, neutrons or other particles. For localisation on fluxes, carbon, iron and uranium nuclei should have velocities (relative to the flux) of about 90, 20 and 5 km/s respectively.
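The quoted velocities can be checked from the uncertainty relation: confining a nucleus of mass M within the shell diameter D ≈ 60 fm requires a momentum of at least roughly ħ/D, i.e. a speed v ≈ ħ/(MD). (The text's formula uses h and a different numerical prefactor; the order of magnitude is the same.) A quick check in Python:

```python
hbar = 1.055e-34   # reduced Planck constant, J*s
amu = 1.661e-27    # atomic mass unit, kg
D = 60e-15         # flux electron-shell diameter, m

def localisation_speed(mass_amu):
    """Minimum speed v ~ hbar/(M*D) for a nucleus of the given mass
    (in atomic mass units) to be localised within the diameter D."""
    return hbar / (mass_amu * amu * D)

for name, A in [("carbon", 12), ("iron", 56), ("uranium", 238)]:
    print(name, localisation_speed(A) / 1e3, "km/s")
```

This reproduces the quoted figures to within rounding: roughly 88, 19 and 4.5 km/s for carbon, iron and uranium.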
After localisation of nuclei on fluxes, various transformations of those nuclei become possible: flux transmutations (see Fig. 1). Hence the possibility of alchemy, both ancient and modern, modestly referred to nowadays as "cold transmutation" of nuclei. Below we use the term "flux transmutation", which points directly to the flux mechanism of the observed nuclear transformations.

### Electrical properties

The electron shell and the charged quark vortex make the flux superconducting, so fluxes of this type can be thought of as superconducting wires. If the energy of the atoms surrounding the flux is below the localisation barrier, then a flux located in ordinary matter (for example, in a metal) is effectively surrounded by an ideal "vacuum insulator". With an electric potential difference between the flux and the substance of the order of 10 kV, easily realisable in experiments (or in nature, in thunderclouds), the electrostatic field of the resulting "coaxial line" ** is strong enough to capture and move droplets and dust. This can lead to noticeable deformations of a liquid surface and to the excitation of vibrations in strings crossed by the flux. All these effects can be used for the electromechanical registration of fluxes.

** The inner conductor is the flux itself, about 60 fm in diameter; the outer conductor is formed by the electrons of the substance at a distance of about 10^-8 cm from the flux.

### Flux contact

Localisation of heavy nuclei of some substance on fluxes can create electrical contact between that substance and geo- or cosmic fluxes. It would be interesting to observe experimentally a kind of "anomalous conductivity of the vacuum": the leakage of predominantly positive charge from a body electrically isolated from the Earth, in which flux transmutations of nuclei are induced in one way or another.
Such uncontrolled charge leakage can produce a high potential on satellites relative to the Earth, which may be one cause of their observed failures. The contact effect may also explain the powerful electrical effects sometimes produced on Earth by meteors, the so-called electrophonic bolides. It is known that auroras arise when clusters of solar plasma (solar "plasma bolides") invade the Earth. Localisation of energetic solar-wind protons on Earth-bound fluxes (geofluxes), as well as localisation of heavy nuclei on fluxes in the Earth's interior, may cause electrification and lightning phenomena in the atmosphere, as well as the negative charge of the Earth (~0.6 MC) as a result of flux contact. Moreover, nuclear transmutations in the atmosphere (during thunderstorms, auroras or meteor fly-bys) will cause a predominant leakage of negative charge (electrons) from the Earth's surface into the atmosphere and will locally charge the surface positively. Thus these processes are capable of reversing the Earth's "good weather" electric field (dominated by flux transmutations of nuclei in the interior) into the opposite, "bad weather" field (dominated by flux transmutations in the atmosphere).

### Elasticity

To deform a flux ring (a fluon) carrying a single magnetic flux quantum and having ring radius r, a force F of about 1 GeV/r must be applied [3]. At r of the order of the radius of an atom, F is about 0.4 kgf; that is, an annular flux "spring" of atomic diameter can withstand, without deforming, a pressure of

$$P \approx \frac{F}{4r^{2}} \approx 10^{15}\ \text{atm}.$$

This pressure is five thousand times higher than the pressure at the centre of the Sun.
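The two numbers above can be checked directly. A sketch, assuming the Bohr radius (0.53 Å) for the atomic radius:

```python
# Order-of-magnitude check of the ring-tension estimates F ~ 1 GeV / r
# and P ~ F / (4 r^2), taking r to be the Bohr radius.
GEV = 1.602176634e-10   # 1 GeV in joules
G_FORCE = 9.80665e-3    # 1 gram-force in newtons
ATM = 101325.0          # 1 atm in pascals
r = 0.529e-10           # Bohr radius, m

F = GEV / r             # newtons
P = F / (4 * r * r)     # pascals

print(f"F = {F:.1f} N = {F / G_FORCE / 1000:.2f} kgf")
print(f"P = {P / ATM:.1e} atm")
# -> F about 3 N (about 0.3 kgf), P about 3e15 atm
```

Both values agree with the quoted 0.4 kgf and 10^15 atm to within the rounding used in the text.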
Since the flux ionisation temperature, corresponding to the 5 MeV binding energy of the shell electrons, is five thousand times higher than the temperature at the centre of the Sun, and considering the strength of quark threads (Table 1), we conclude that fluxes are capable of forming a strong, quasi-stationary, three-dimensional spatial "lace" of the Universe: flux frames and invisible shells of planets, stars, comets, asteroids, plants, animals, tornadoes and whirlwinds.

### Flux "anchors"

If the cell size of a flux network is close to the diameter of an atom, such fluxes can be firmly embedded in atomic and molecular solids. Flux threads with microscopic knots, which we call "anchors", with loop sizes of the order of an atom, are capable of lifting and moving heavy bodies (up to 10 tonnes) or of destroying them with thunder and rattle. Giant flux structures stretching for millions of kilometres (e.g. comet tails) may be attached to planets and comets by these flux "anchors".

### Nuclear processes (flux transmutations)

Ordinary "spherical" nuclei with a non-zero magnetic dipole moment are captured by fluxes (but not by fluons or fluxons) especially intensively at the magnetic poles at their free ends. In this case a power of about 10 kW is released in the condensed matter, mainly in the form of soft X-rays [3]. Less intensively, capture, collapse and fusion of ordinary spherical nuclei can occur on the surface of fluxes (and on the surface of fluons and fluxons). Fusion of ordinary nuclei on the surface of fluxons is possible because the charge of these nuclei is screened by the electron "liquid" of the fluxon shell, while collapse of nuclei is possible through e-capture (the capture of fluxon-shell electrons by protons). In the electron Bose-liquid of the flux, an "extra" portion of neutrons is likely to be ejected (to leak out) from the captured nucleus. Free neutrons on the quark-gluon core of the flux, inside its electron shell, can form a neutron Bose condensate (Fig.
1). When the cylindrical flux core ruptures, the neutron condensate can be captured by it and elongate the core, transforming into the quark-vortex state typical of the flux core. A quark vortex may be electrically neutral, in which case the flux (the whole of it, or part of its length) may have no electron shell. The energy released during all transformations inside the vortex is carried away mainly through the vortex itself, which explains the absence of radiation during transmutations of atomic nuclei. Given sufficient energy, the flux may not only rupture but even "spray", producing hadrons, leptons and photons. In the early hot universe, the electrons, protons and neutrons formed by rupturing (and spraying) fluxes could have formed the observed atomic-molecular matter, whose mass, as is well known, is only a few percent of the mass of the dark matter (of the fluxes).

### Neutrino rocket

Consider an e-capture (p + e → n + νe) occurring in an external magnetic field so strong that the proton and electron (before capture) and the neutron (after capture) are lined up, i.e. their spins are oriented in the direction corresponding to the minimum energy of these particles in the field. Under these conditions the neutrino (which has zero magnetic moment) also turns out to be lined up: because of its inherent negative helicity (its spin is always directed against its momentum), conservation of the angular momentum of the original system requires the neutrino to escape preferentially in the direction of the spin of the neutron formed by the e-capture. From the known properties of the particles involved, it follows that the neutrino must escape preferentially in the direction opposite to the magnetic induction vector of the external field. In this case the neutrino takes away from the source system not only energy and momentum, but also angular momentum (equal to the neutrino spin).
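The per-capture recoil in this picture is easy to quantify. An illustrative sketch, assuming each neutrino carries about 1 MeV (i.e. a momentum of 1 MeV/c):

```python
# Reactive thrust of a directed neutrino jet: each e-capture emits one
# neutrino of momentum E/c, so an activity of 1 curie (3.7e10 decays/s)
# produces a thrust F = 3.7e10 * E/c (rough estimate).
E = 1.602176634e-13      # 1 MeV in joules (assumed neutrino energy)
C = 2.99792458e8         # speed of light, m/s
CURIE = 3.7e10           # decays per second
G_FORCE = 9.80665e-3     # 1 gram-force in newtons

p_nu = E / C                     # momentum per neutrino, kg*m/s
F_per_curie = CURIE * p_nu       # thrust per curie, newtons

print(f"thrust per curie: {F_per_curie:.1e} N "
      f"= {F_per_curie / G_FORCE * 1e9:.1f} ngf")
# -> about 2e-11 N, i.e. about 2 nanogram-force per curie
```

This reproduces the rule F ≈ 2A ngf used in the estimate below; for A ≈ 10^15 Ci it gives a thrust of the order of tonnes.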
Thus, e-captures in a strong magnetic field create, for the matter bound to that field, a reactive neutrino jet (carrying momentum p and angular momentum J), due to which the matter must move in the direction of the magnetic induction vector and twist counterclockwise when viewed along that direction. Assuming that in each e-capture the neutrino takes away an energy of the order of 1 MeV, and writing A for the e-capture activity in curies, we find the reactive force F = dp/dt ≈ 2A ngf and the torque J ≈ 2A × 10^-17 dyn·cm ***. If 1 kg of air per second is involved in the nuclear reactions, then

• A ≈ 10^15 Ci,
• F ≈ 2 tonne-force,
• J ≈ 0.02 dyn·cm (the torque J is neglected below).

*** NOTE: A dyne-centimetre (dyn·cm) is the centimetre-gram-second unit of torque (moment of force).

The quantised magnetic flux is directed inside and along the fluxes, so a cloud of fluxes in air, for example, can move as a whole due to the reactive force F and/or spin up (if the total force F acting on the cloud is displaced relative to its centre of gravity). The nuclear activity of a flux cloud manifests itself mainly in neutrinos and is therefore difficult (though possible!) to register, even if the activity is much higher than in the example above. The indicated mechanism explains the strangeness of the flight of possible flux structures such as UFOs and ball and linear lightning, the rotation of tornadoes and whirlwinds, and, additionally, "perpetual motion machines", if any are actually in operation anywhere. One should bear in mind that the energy of these "perpetual motors" is not drawn from the "vacuum" or the "gravitational field", as inventors usually claim, but arises from flux transmutations of atomic nuclei; that is, it is nuclear energy.

### Thermal properties

If a cosmic "cotton wool" exists, physics has to deal with fundamentally open systems, in which energy, matter and information can be delivered to (or withdrawn from) any point of space along the threads.
Rapid heating of matter is usually explained by its internal energy sources, but the sharp temperature drops sometimes observed (e.g. in poltergeist phenomena) seem quite mysterious. When the thermal activity of the fluxes is negative, i.e. when energy is withdrawn from bodies through them, the substance can "freeze" near the filaments, dramatically changing its properties. For example, the radiation spectrum of "frozen" atoms and molecules may change: the spectral lines of such matter become narrower, and molecular bands (continua of radiation) "break up" into separate lines (as in rarefied gases). Surprising phenomena of this kind in dense media at room temperature are sometimes observed by biologists (the mitogenetic Gurwitsch effect).

## 3. Some observable consequences

The global threaded structure of matter may be related to the ability of fluxes to "string" atoms and molecules on themselves (like beads on the thread of a necklace). Indeed, we find threads in the cells of organisms: DNA, RNA, proteins, filaments, microfilaments and microtubules. In crystals (including biological ones), atoms and molecules are arranged on lines, the crystallographic axes, which are assembled into three-dimensional lattices. When separating out of solutions or the gas phase (and plasma), the most diverse substances form complexly structured filament systems (like polymers), sometimes "instantly", for example in pulsed processes. Such microscopic filaments have been found in abundance by the author in samples taken at the sites of natural disasters (the Tunguska explosion of 1908, the tornadoes near Voskresensk and Sochi in 2001, and lightning strike sites). According to our investigations, glassy minerals of bolide origin (E. V. Dmitriev's pseudotektites) turned out to be "balls" of quartz threads about 10 microns in diameter. Amazing threads of rare-earth metals, the so-called "angel hair", are found at "UFO landing sites" (in samples by V. A. Chernobrov and others).
The true kingdom of threads is the world of natural minerals. Here we find nanotubes (in zeolites), the molecular "lace" of jade, and the fibres of cat's, tiger's and other "eyes", visible even to the naked eye. Ulexite consists of natural light guides through which images can be transmitted, earning it the nickname "TV stone". No less mysterious are cymophane, with its parallel channels in chrysoberyl, or noble opals, made of strictly ordered identical silica spheres. We are no longer surprised by the long fibres of asbestos, by "wire" metal nuggets, or by the grand chains of mountain ranges. The linear relief on some planets (for example on Europa, a satellite of Jupiter) is similar to the micro-relief of minerals; the difference lies only in the material (on Europa it is ice) and in the diameter of the threads: about 10 microns in minerals and about 100 m on Europa.

### The flux genesis of crystals

An important argument in favour of the possible existence in nature of a material framework of invisible threads, on which atomic and molecular matter "settles", is the existence of very large crystals, the record holder being a beryl of 380 tonnes. The volume and faces of natural crystals always contain such a large number of assorted defects that the ability of crystals to grow to macroscopic size by the textbook "attaching atom to atom" seems mysterious. And how does a noble opal grow "sphere to sphere"? In addition, large crystals are found in geologically active regions, where nature allows little time for atom-by-atom accretion.
The formation of "giants", and of so-called skeletal crystals with complex spatial architecture (snowflakes, dendrites, halite built of sodium chloride), can be explained by assuming that primary microcrystals, growing atom by atom on the threads of the invisible framework, first form knots of thread clusters oriented preferentially along their crystallographic axes, and thereby set the angles and shape of both the microscopic and the macroscopic cells of the framework itself. Subsequent microcrystals line up on the framework, filling its volume, so that even sizeable inclusions and inhomogeneities disturb the architecture (the shape) of the emerging mineral only slightly. In the case of large inhomogeneities, however, it is they that determine the shape of the mineral. Apparently this is how unique pseudomorphs were formed, for example the Australian opalised snake skeleton, or the trunks of giant sequoias in Arizona turned into agate and jasper. Fluxes can also explain the striking phenomenon of epitaxy on substrates, when the structure of a parent crystal remotely determines the structure of a daughter crystal through a thick (of the order of hundreds of angstroms) amorphous substrate.

### Flux turbines

Through nuclear processes (for example transmutations, fusion of nuclei, and capture of electrons by nuclei), filamentary dark matter made of fluxes can acquire energy, momentum and angular momentum. Interacting with ordinary matter, it can therefore act as an engine in natural processes (for example, spinning up a tornado) or as an electric generator (the Earth's dynamo). Since the "cotton wool" is capable of capturing water droplets and dust particles, flux clouds may be visible to the naked eye. It is possible that this is what tornadoes (whirlwinds) are made of.
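For comparison with ordinary suction: atmospheric pressure alone can support a water column of only about 10 m, the familiar barometric limit. A quick check, assuming standard atmospheric pressure and the density of water:

```python
# Maximum height to which pure suction (a "vacuum cleaner" mechanism) can
# lift water: the column is held up by atmospheric pressure, h = P / (rho*g).
P_ATM = 101325.0   # Pa, standard atmosphere
RHO = 1000.0       # kg/m^3, water
G = 9.80665        # m/s^2

h = P_ATM / (RHO * G)
print(f"barometric lifting limit: {h:.1f} m")
# -> about 10.3 m
```

This is the 10-metre figure invoked in the tornado discussion; lifting water higher requires pushing from below rather than suction.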
A tornado's cylindrical column, rotating about its vertical axis, is a neutrino "turbine" of dense dark matter (the column holds about 100 kg/cm³ of it, with a distance between threads of the order of an atom) acting as an Archimedean screw. Such a screw is capable of lifting water (and any heavy objects) to a height of kilometres. If a tornado were merely an analogue of a vacuum cleaner, it could lift water no higher than about 10 metres. Naturally, such hypothetical whirlwinds are capable of lifting terrestrial rock into the air and of leaving round craters (astroblemes) in the ground. Giant vortices, such as cyclones, hurricanes, sunspots and even the spirals of galaxies, may have a similar nature. An electric current flowing in a vortex creates a magnetic field. The movement of other parts of the same helix relative to this field can generate an electric current that amplifies it (as in a unipolar Faraday dynamo). An electric generator (dynamo) of this kind can operate in the Earth's interior and create the geomagnetic field. Recall that, because of the localisation barrier, even fluxes in the iron core of the Earth behave as insulated conductors.

### Vertical currents in the Earth's atmosphere

These currents, which noticeably alter the horizontal component of the geomagnetic field (and are many orders of magnitude greater than the known atmospheric ion currents), were discovered over a century ago, but their nature has still not been established. The density of the vertical current at any point is found from the known configuration of the geomagnetic field; it varies from place to place. Usually it amounts to fractions of a μA/m², but there are places on Earth where the vertical current density exceeds 1 mA/m² [5]. From our point of view, the vertical currents are currents flowing along Earth fluxes (geofluxes).
This can be verified experimentally: since geofluxes carrying the Earth's dynamo current can penetrate a magnetic screen, rotating a ring of magnetic material inside such a screen, so that the ring crosses the fluxes, should generate an alternating electric current in a coil wound toroidally on the ring.

### The similarity of bolides, comets and ball lightning

Some bolides, like the Tunguska bolide of 1908 or the Chulym bolide (Tomsk) of 1984, cause sound, seismic and electrophysical effects (such as the burning-out of electric bulbs and radio interference) which precede their appearance. Such bolides are called electrophonic. The Vitim electrophonic bolide, which crossed the Siberian night sky on the night of 25-26 September 2002, caused St. Elmo's fires on the fence of the local airport and even lit up lamps in the de-energised village of Mama (Irkutsk region). A perfect example of a brilliant (literally) flux of light!

The glow and shape of bolides are usually explained by the formation of plasma as air flows around them and their matter is blown away (ablation). However, in many cases bolides leave no smoke trail in the atmosphere, and at the sites of the most powerful explosions of the first two bolides mentioned above, no "meteoritic" matter was found (the site of the Vitim bolide explosion has not yet been studied). The luminescence of comets is explained by the scattering of sunlight on the dust particles carried away by gas jets when the Sun heats and evaporates the solid matter of the cometary nucleus. Since cometary nuclei measure of the order of a kilometre, whereas the luminous head is 5, and the tail 6-7, orders of magnitude larger, the shape of a comet is determined not so much by the features of its nucleus as by the interaction of the particles and gases emitted by the nucleus with the solar wind and the circumsolar magnetic field.
But then how can we explain the observed variety of forms of the luminous regions near bolides and comets, and the sometimes striking external similarity of these seemingly so different objects to each other and... to ball lightning? Simply by assuming that fluxes are part of all these bodies and determine the properties that have so far remained mysterious to us. For example, the energy of the solar wind is orders of magnitude too small to explain the increases in brightness sometimes observed in comets. The reasons for the appearance of concentric bright regions (halos) spreading from cometary nuclei are unknown. The spectra of comets are puzzling, and the mechanisms of the rapid ionisation of molecules from the head to the tip of the comet tail are also unclear. What determines the variety of tail shapes, their disappearance, separation or rotation (sometimes even towards the Sun)? What causes several tails to appear on one comet, or the comet's nucleus to divide into parts and then, when the comet returns to the Sun, the comet to regain its original form? Let us assume that near bolides, comets and other celestial bodies (including the Sun and the Earth), as well as near ball lightning, there are flux shells much larger than the apparent dimensions of these bodies. Cometary nuclei can be connected by fluxes to their head and tail, and possibly to distant planets and the Sun. Bolides may glow mostly not because of ablation but, like comets and ball lightning, because of nuclear reactions (flux transmutations). Bolides and ball lightning can interact "ahead of time" with the surface of the Earth, causing it to shake, and can break trees and carry heavy objects from place to place (recall the flux "anchors"). Nuclear reactions on fluxes can cause sharp changes in the brightness of comets, the appearance of halos and the break-up of cometary nuclei, as well as explosions of bolides and ball lightning.
The glow of oscillating fluxes and their changing spatial configuration can determine the shape, the emission spectrum and the dynamics of the optical and electrophysical processes of bolides, comets and ball lightning, as well as the features of their interaction with terrestrial matter.

### Ophthalmology in space

After dark adaptation, astronauts on orbital missions observe luminous flashes in their own eyes, which in 90% of cases take the form of luminous lines, solid or broken. Less frequently observed are glowing spots (sometimes with a bright core) and glowing concentric circles [6]. Sometimes an astronaut can even see "from where to where" the spot of light forming a luminous line is moving. These facts, in our opinion, speak against the interpretation based on the registration in the eye of ordinary cosmic particles (atomic nuclei [6]). To form a clear luminous line, for example, an ionising particle would have to pass through (or in close proximity to) the light-sensitive elements of the retina, which is unlikely if only because of the curvature of the retina. If a retina-exciting flux filament passes through the eye, the eye can perceive:

1. a solid luminous line: the projection of the thread onto the retina;
2. a broken line, if the trace of the thread on the retina crosses the blind spot of the eye;
3. a running light spot, if the point of intersection of the thread with the retina moves relatively slowly along the retinal surface;
4. a circle or concentric circles, if the thread has the shape of a cylindrical spiral whose axis is aligned with its motion and approximately perpendicular to the retina;
5. a shapeless light spot, if the moving filament is a poorly ordered spatial curve;
6. a spot with a bright core, if the spatial curve is concentrated mainly around an axis approximately perpendicular to the retina and coinciding with the direction of the filament's motion.
Consequently, at a spacecraft's orbital velocity of 8 km/s it is possible for heavy nuclei contained in elements of its construction to localise on fluxes, and the electron shells of the fluxes can be excited by the nuclear transmutations occurring on them. Excited electron shells of fluxes can emit real photons wherever the shell vibrates, including far from the point of capture of the nucleus that excited those vibrations. The vibrations travelling along the filaments may be perceived by the photosensitive elements of the eye through the real photons arising along the excited fluxes, and/or directly, through virtual photons (electromagnetic contact).

### Images carried by fluxes

Excitation of fluxes by nuclear transmutations on their surface (flux transmutations), the emission of photons by the excited filaments, and their effect on ordinary atomic-molecular matter (including the human eye) are possible not only under spaceflight conditions. For instance, images of surrounding objects (leaves of trees, or coins lying in a pocket) are sometimes found on the bodies of people struck by lightning. Sometimes a person looking in the direction of a linear lightning strike sees a luminous ball (ball lightning), though the place of the strike itself may be hidden from the observer by trees or the wall of a building. In our opinion, the first two examples may demonstrate the transmission of energy by the hypothetical threads (as light is carried by light guides), with subsequent fixation of the image on human skin. The third example is a direct analogue of the ophthalmological process in an astronaut's eye (a combination of cases 4 and 5, or case 6). Note that "flux ophthalmology" does not negate the reality of ball lightning (on this, see the author's article "Fire from ball lightning" in the same collection). Fluxes may also be responsible for the exposure of film in cameras, including the formation of mysterious images (the experiments of A. F. Okhatrin).
Complex and dynamic gas-discharge patterns near various objects (organic and inorganic) placed on dielectric surfaces in a high-frequency electric field, the Kirlian effect, are widely known. In these cases too, the luminous gas-discharge plasma can develop along fluxes.

## References

1. H. V. Klapdor-Kleingrothaus, K. Zuber, "Astrophysics of Elementary Particles", Moscow, UFN journal editorial office, 2000, p. 496. [Г. В. Клапдор-Клайнгротхаус, К. Цюбер, "Астрофизика элементарных частиц", М., Ред. журнала УФН, 2000, с. 496.]
2. B. U. Rodionov, "Thready (Linear) Dark Matter Possible Displays", Gravitation and Cosmology (2002), Supplement, v. 8, pp. 214-216.
3. A. Olkhovatov and B. Rodionov, "Tunguska Lights", Laboratory of Basic Knowledge, Moscow, 1999, p. 240. [А. Ольховатов и Б. Родионов, "Тунгусское сияние", изд. "Лаборатория базовых знаний", М., 1999, с. 240.]
4. G. Nimtz, "New Knowledge of Tunneling from Photonic Experiments", Proc. of the Adriatico Research Conf. "Tunneling and its Implications", 30 July - 2 August 1996, World Scientific Publishing Company, pp. 1-15.
5. N. V. Kulanin, B. U. Rodionov, "Geomagnetic dark matter as a source of geomagnetic anomalies", Mashinostroitel, No. 4, 2000, pp. 56-59. [Н. В. Куланин, Б. У. Родионов, "Геомагнитная темная материя как источник геомагнитных аномалий", Машиностроитель, №4, 2000, с. 56-59.]
6. S. V. Avdeev et al., "Study of the characteristics of particles that cause ophthalmic phenomena in space flight", MEPhI-2001 Scientific Session, Vol. 7, Moscow, MEPhI, 2001, pp. 53-54. [С. В. Авдеев и др., "Исследование характеристик частиц, вызывающих офтальмологические явления в космическом полете", Научная сессия МИФИ-2001, том 7, М., МИФИ, 2001, с. 53-54.]
Wedge product $d(u\, dz)= \bar{\partial}u \wedge dz$. How does one show that if $u \in C_0^\infty(\mathbb{C})$, then $d(u\, dz)= \bar{\partial}u \wedge dz$? Note that $$d(u\,dz)=du\wedge dz+(-1)^0u\,d(dz)=du\wedge dz=(\partial u+\bar{\partial} u)\wedge dz.$$ Since $$\partial u=\frac{\partial u}{\partial z}dz\hspace{2mm}\mbox{ and }\hspace{2mm}\bar{\partial} u=\frac{\partial u}{\partial \bar{z}}d\bar{z},$$ we have $$d(u\,dz)=\frac{\partial u}{\partial z}dz\wedge dz+\frac{\partial u}{\partial \bar{z}}d\bar{z}\wedge dz=\frac{\partial u}{\partial \bar{z}}d\bar{z}\wedge dz=\bar{\partial} u\wedge dz,$$ where we have used $dz\wedge dz=0$.
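The computation above can also be verified symbolically in real coordinates $z = x + iy$. An illustrative check with SymPy, where $u$ is treated as a generic smooth function of $x$, $y$ and each 2-form is represented by its $dx\wedge dy$ coefficient:

```python
# Verify d(u dz) = (d-bar u) ^ dz by comparing dx^dy coefficients in the
# real coordinates z = x + i*y. A 2-form c dx^dy is represented by c alone.
import sympy as sp

x, y = sp.symbols('x y', real=True)
u = sp.Function('u')(x, y)
u_x, u_y = sp.diff(u, x), sp.diff(u, y)

# du ^ dz with du = u_x dx + u_y dy and dz = dx + i dy, using
# (a1 dx + b1 dy) ^ (a2 dx + b2 dy) = (a1*b2 - b1*a2) dx^dy.
lhs = u_x * sp.I - u_y

# (d-bar u) ^ dz = u_zbar dzbar ^ dz, with u_zbar = (u_x + i u_y)/2
# and dzbar ^ dz = (dx - i dy) ^ (dx + i dy) = 2i dx^dy.
rhs = (u_x + sp.I * u_y) / 2 * (2 * sp.I)

assert sp.simplify(lhs - rhs) == 0
print("d(u dz) = (d-bar u) ^ dz: coefficients agree")
```

Both sides reduce to $(i\,u_x - u_y)\,dx\wedge dy$, confirming the identity.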
# $8.0575\times 10^{-2}\;Kg$ of Glauber's salt is dissolved in water to obtain $1\;dm^3$ of a solution of density $1077.2\;Kg\,m^{-3}$. Calculate the molarity, molality and mole fraction of $Na_2SO_4$ in the solution.

$\begin{array}{1 1}(a)\;0.1M,0.15m,2.3\times 10^{-4}\\(b)\;0.2M,0.24m,4.3\times 10^{-3}\\(c)\;0.325M,0.3m,2.3\times 10^{-3}\\(d)\;0.432M,0.123m,1.3\times 10^{-5}\end{array}$

Glauber's salt is $Na_2SO_4.10H_2O$, with molar mass 322. $\therefore$ the weight of $Na_2SO_4$ in $8.0575\times 10^{-2}\;Kg$ of Glauber's salt is $\large\frac{142\times 8.0575\times 10^{-2}}{322}$ $\Rightarrow 3.5533\times 10^{-2}\;Kg$. Molarity 'M' of $Na_2SO_4=\large\frac{3.5533\times 10^{-2}}{142\times 10^{-3}\times 1}$ $\Rightarrow 0.2502\;M$. The mass of water in the solution is $1077.2-35.533=1041.667\;g$. Molality of $Na_2SO_4=\large\frac{Mole\;of\;Na_2SO_4}{wt\;of\;water\;in\;Kg}$ $\Rightarrow \large\frac{3.5533\times 10^{-2}}{142\times 10^{-3}\times \Large\frac{1041.667}{10^3}}$ $\Rightarrow 0.24\;m$. Mole fraction of $Na_2SO_4=\large\frac{Mole\;of\;Na_2SO_4}{Mole\;of\;Na_2SO_4+mole\;of\;H_2O}$ $\Rightarrow \large\frac{\Large\frac{3.5533\times 10^{-2}}{142\times 10^{-3}}}{\Large\frac{3.5533\times 10^{-2}}{142\times 10^{-3}}+\frac{1041.667}{18}}$ $\Rightarrow 4.3\times 10^{-3}$. Hence (b) is the correct answer.
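The three results can be reproduced numerically (standard molar masses assumed: Na2SO4 = 142 g/mol, Na2SO4·10H2O = 322 g/mol, H2O = 18 g/mol):

```python
# Check of the worked solution: molarity, molality and mole fraction of
# Na2SO4 in 1 dm^3 of solution made from 80.575 g of Glauber's salt.
m_salt = 80.575          # g of Glauber's salt dissolved
M_hydrate, M_anh, M_water = 322.0, 142.0, 18.0
V = 1.0                  # dm^3 of solution
rho = 1077.2             # g/dm^3 (i.e. 1077.2 kg/m^3)

n = m_salt / M_hydrate           # mol of Na2SO4 (one per formula unit)
m_anh = n * M_anh                # g of anhydrous Na2SO4
m_water = rho * V - m_anh        # g of water in the solution

molarity = n / V                       # mol/dm^3
molality = n / (m_water / 1000.0)      # mol per kg of water
x = n / (n + m_water / M_water)        # mole fraction of Na2SO4

print(round(molarity, 4), round(molality, 2), round(x, 4))
# -> 0.2502 0.24 0.0043
```

This confirms the molality and mole fraction of answer (b); the computed molarity, 0.2502 M, matches the worked step rather than the rounded 0.2 M in the option.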
http://www.thespectrumofriemannium.com/category/mathematics/algebra/
## LOG#185. Geometricobjects.

I have the power! I have a power BETTER than Marvel's tesseract. It is called physmatics. Hi, there! We are back to school. This time, I am going to give you a tour with some geometrical objects, or geometricobjects, … Continue reading

## LOG#158. Ramanujan's equation.

Hi, everyone! I am back, again! And I have some new toys in order to post faster (a new powerful plugin). Topic today: Ramanujan! Why did Ramanujan like the next equation?

(1)

This equation can be rewritten as follows

(2)

… Continue reading

## LOG#099. Group theory (XIX).

Final post of this series! The topics are the composition of different angular momenta and something called irreducible tensor operators (ITO). Imagine some system with two “components”, e.g., two non-identical particles. The corresponding angular momentum operators are: … Continue reading

## LOG#098. Group theory (XVIII).

This and my next blog post are going to be the final posts in this group theory series. I will be covering some applications of group theory in Quantum Mechanics. More advanced applications of group theory, extra group theory stuff … Continue reading

## LOG#097. Group theory (XVII).

The case of Poincaré symmetry. There is an important symmetry group in (relativistic, quantum) Physics. This is the Poincaré group! What is the Poincaré group definition? There are some different equivalent definitions: i) The Poincaré group is the isometry group … Continue reading

## LOG#096. Group theory (XVI).

Given any physical system, we can perform certain “operations” or “transformations” with it. Some examples are well known: rotations, translations, scale transformations, conformal transformations, Lorentz transformations, … The ultimate quest of physics is to find the most general “symmetry group” leaving … Continue reading

## LOG#095. Group theory (XV).

The topic today in this group theory thread is “sixtors and representations of the Lorentz group”. Consider the group of proper orthochronous Lorentz transformations and the transformation law of the electromagnetic tensor. The components of this antisymmetric tensor can … Continue reading

## LOG#094. Group theory (XIV).

Group theory and the issue of mass: Majorana fermions in 2D spacetime. We have studied in the previous posts that a mass term is “forbidden” in the bivector/sixtor approach and the Dirac-like equation due to the gauge invariance. In fact, … Continue reading

## LOG#093. Group theory (XIII).

The sixtor or 6D Riemann-Silberstein vector is a complex-valued quantity up to one multiplicative constant and it can be understood as a bivector field in Clifford algebras/geometric calculus/geometric algebra. But we are not going to go so far in this … Continue reading
http://repository.bilkent.edu.tr/browse?type=subject&value=Iterative%20Gaussian%20mixture%20estimation
Now showing items 1-1 of 1

#### Iterative estimation of Robust Gaussian mixture models in heterogeneous data sets (Bilkent University, 2014-07)

Density estimation is the process of estimating the parameters of a probability density function from data. The Gaussian mixture model (GMM) is one of the most preferred density families. We study the estimation of a ...
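The setting in the abstract, estimating the parameters of a Gaussian mixture from data, can be sketched with a plain EM loop in one dimension. This is textbook EM, not the robust heterogeneous-data estimator studied in the cited thesis, and all names below are mine:

```python
import math
import random

def em_gmm_1d(xs, iters=50):
    """Fit a two-component 1-D Gaussian mixture by plain EM.

    Returns (weights, means, variances). Means are initialised at the
    data extremes: a crude but deterministic choice for a sketch.
    """
    k = 2
    mu = [min(xs), max(xs)]
    var = [1.0] * k
    w = [1.0 / k] * k
    for _ in range(iters):
        # E-step: responsibility of each component for each point
        resp = []
        for x in xs:
            ps = [w[j] / math.sqrt(2 * math.pi * var[j])
                  * math.exp(-(x - mu[j]) ** 2 / (2 * var[j]))
                  for j in range(k)]
            s = sum(ps) or 1e-300  # guard against total underflow
            resp.append([p / s for p in ps])
        # M-step: weighted maximum-likelihood updates
        for j in range(k):
            nj = sum(r[j] for r in resp)  # effective count (positive here)
            w[j] = nj / len(xs)
            mu[j] = sum(r[j] * x for r, x in zip(resp, xs)) / nj
            # variance floor keeps a component from collapsing on one point
            var[j] = max(sum(r[j] * (x - mu[j]) ** 2
                             for r, x in zip(resp, xs)) / nj, 1e-6)
    return w, mu, var

rng = random.Random(0)
data = [rng.gauss(0.0, 1.0) for _ in range(200)] + \
       [rng.gauss(5.0, 1.0) for _ in range(200)]
print(em_gmm_1d(data))
```

On well-separated synthetic data like this, the recovered means land near the true cluster centres 0 and 5; the robustness issues the thesis addresses arise precisely when the data are heterogeneous and such plain EM breaks down.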
http://mathoverflow.net/questions/16393/finding-a-cycle-of-fixed-length?answertab=active
# Finding a cycle of fixed length

Is there any result about the time complexity of finding a cycle of fixed length k in a general graph? All I know is that Noga Alon et al. use the technique called "color-coding", which has a running time $O(M(n))$, where $M(n)$ is the time of multiplying two $n \times n$ matrices. Is there any better result?

- I suppose your n is the number of nodes in the graph. – HenrikRüping Feb 25 '10 at 15:21
- Yes, it is. Thanks for the supplement. – Hsien-Chih Chang 張顯之 Feb 26 '10 at 7:27

Finding a cycle of any even length can be done in $O(n^2)$ time, which is less than any known bound on $O(M(n))$. For example, a cycle of length four can be found in $O(n^2)$ time via the following simple procedure: Assume the vertex set is $\{1,...,n\}$. Prepare an $n \times n$ matrix $A$ which is initially all zeroes. For all vertices $i$ and all pairs of vertices $j, k$ which are neighbors of $i$, check $A[j,k]$ for a $1$. If it has a $1$, output four-cycle, otherwise set $A[j,k]$ to be $1$. When this loop finishes, output no four-cycle. The above algorithm runs in at most $O(n^2)$ time, since for every triple $i,j,k$ we either flip a $0$ to $1$ in $A$, or we stop. (We assume the graph is in adjacency list representation, so it is easy to select pairs of neighbors of a vertex.)

The general case is treated by Raphy Yuster and Uri Zwick in the paper: Raphael Yuster, Uri Zwick: Finding Even Cycles Even Faster. SIAM J. Discrete Math. 10(2): 209-222 (1997).

As for finding cycles of odd length, it's just as David Eppstein says: nothing better is known than $O(M(n))$, including the case where $k=3$. However, if you wished to detect paths of length $k$ instead of cycles, you can indeed get $O(m+n)$ time, where $m$ is the number of edges. I am not sure if the original color-coding paper can provide this time bound, but I do know that the following paper by some random self-citing nerd gets it: Ryan Williams: Finding paths of length k in O*(2^k) time. Inf. Process. Lett. 109(6): 315-318 (2009).

- Thank you for mentioning the result about detecting paths. It's new to me and it is very useful. Thanks!! – Hsien-Chih Chang 張顯之 Feb 26 '10 at 7:20
- A linear time algorithm (i.e., $O(m+n)$) for detecting paths of length k was mentioned in one of Alon et al.'s papers. It just involves choosing a random ordering of the vertices, and making the graph a DAG using this ordering. Since longest path on DAGs can be solved in linear time, a directed path of length k can be found in linear time, if the chosen random ordering works. Repeat the previous step exponentially many times (in k) to get the desired randomized algorithm. – Rune Mar 1 '10 at 23:59
- Rune is absolutely right; the random ordering algorithm runs in $O(k!\,(m + n))$ time. – Ryan Williams Mar 2 '10 at 6:11
- Good stuff. How can I detect every cycle of length 4 though? The algorithms you reference seem to only tell you if such a cycle exists or not. – SchighSchagh Nov 5 '13 at 1:10
- Not sure what you're asking. There can be $\Omega(n^4)$ cycles of length 4 in a graph, so $O(n^4)$ time is the best you can hope for asymptotically if you want to list all 4-cycles. If you're asking how to get the above algorithm to produce a 4-cycle in $O(n^2)$ time when one exists, that's also pretty obvious... – Ryan Williams Nov 5 '13 at 9:33

Basically everyone above hit the nail on its head. I want to add an algorithm for counting every cycle of length 4 in an undirected graph, which runs in $O(n^3)$ instead of $O(n^4)$. Every cycle of length four includes four nodes 'a', 'b', 'c' and 'd'; thus for every pair of nodes u and v, you count the number of nodes that are neighbors to both u and v. The problem of counting every cycle of length four then transforms into the problem of selecting 2 such nodes that are adjacent to both u and v.

If we restrict to the class of planar graphs, then there is a linear time algorithm due to Eppstein.
It is also linear for graphs of bounded tree-width, since the problem of finding a cycle of fixed length can easily be encoded as a monadic second-order logic formula, and we can then appeal to Courcelle's theorem. I suspect that the answer for general graphs is actually polynomial. Edit. The related problem of finding a cycle of length $a$ (mod $k$) has not been proven to be polynomial (except in the case $a=0$).

- A polynomial bound for general graphs (for any fixed k) is given by the Alon et al color-coding paper that the original question cites. – David Eppstein Feb 25 '10 at 21:22
- Finding a cycle of length being a multiple of k is a pretty interesting question. Thanks for the information! – Hsien-Chih Chang 張顯之 Feb 26 '10 at 7:25
- If we consider graphs with edges labelled from a finite abelian group, then we can define the group-value of a cycle as the sum of its edge labels. For a fixed element g of the group, we can then ask if there is a cycle with group-value g. Cycles of length 0 (mod k) are a special instance of this problem, where the group is $Z_k$, g=0, and all edge labels are 1. – Tony Huynh Feb 26 '10 at 15:51

If there's a deterministic or randomized algorithm with better dependence on $n$ than $M(n)$ even for the first nontrivial case, $k=3$ (that is, testing whether the given graph contains a triangle), then I don't know about it. Nothing better is listed on the Wikipedia article on triangle-free graphs, for instance. There do exist quantum algorithms for finding 3-cycles that are faster, however: see arXiv:quant-ph/0310134. It's also possible to find bounds that are better than $M(n)$ for graphs that are not dense (number of edges sufficiently smaller than quadratic). For instance, even fairly naive algorithms can find triangles in time $O(m^{3/2})$ where $m$ is the number of edges in the graph.

- Thanks for the useful information. I also noticed that for planar graphs you gave a linear time algorithm for detecting any subgraph of bounded size. Thank you very much!! – Hsien-Chih Chang 張顯之 Feb 26 '10 at 7:18
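As a footnote to the accepted answer, the $O(n^2)$ four-cycle test is short enough to write out. A Python sketch (my naming; it assumes a simple undirected graph given as adjacency lists):

```python
def has_four_cycle(adj):
    """Detect a 4-cycle in a simple undirected graph in O(n^2) time.

    adj[i] is the list of neighbours of vertex i. For each vertex i we
    mark every pair (j, k) of its neighbours; a pair marked twice means
    j and k have two distinct common neighbours, i.e. a 4-cycle.
    """
    n = len(adj)
    seen = [[False] * n for _ in range(n)]
    for i in range(n):
        nbrs = adj[i]
        for a in range(len(nbrs)):
            for b in range(a + 1, len(nbrs)):
                j, k = nbrs[a], nbrs[b]
                if seen[j][k]:
                    return True  # second common neighbour found
                seen[j][k] = seen[k][j] = True
    return False
```

Every pass through the inner loop either marks a previously unmarked pair or returns, so at most $\binom{n}{2}+1$ constant-time iterations run in total, which is where the answer's $O(n^2)$ bound comes from.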
https://amathew.wordpress.com/2010/07/14/the-concept-of-topological-entropy/
In the theory of dynamical systems, it is of interest to have invariants to tell us when two dynamical systems are qualitatively “different.” Today, I want to talk about one particularly important one: topological entropy. We will be in the setting of discrete dynamical systems: here a discrete dynamical system is just a pair ${(T,X)}$ for ${X}$ a compact metric space and ${T: X \rightarrow X}$ a continuous map. Recall that two such pairs ${(T,X), (S,Y)}$ are called topologically conjugate if there is a homeomorphism ${h: X \rightarrow Y}$ such that ${T = h^{-1}Sh}$. This is a natural enough definition, and it is clearly an equivalence relation. For instance, it follows that there is a one-to-one correspondence between the orbits of ${T}$ and those of ${S}$. In particular, if ${T}$ has a fixed point, so does ${S}$. Admittedly this necessary criterion for determining whether ${T,S}$ are topologically conjugate is rather trivial. Note incidentally that topological conjugacy needs to be considered even when one is studying smooth dynamical systems—in many cases, one can construct a homeomorphism ${h}$ as above but not a diffeomorphism. This is the case in the Hartman-Grobman theorem, which states that if ${f: M \rightarrow M}$ is a smooth map with a fixed point where the derivative is a hyperbolic endomorphism of the tangent space, then it is locally conjugate to the derivative (that is, the corresponding linear map).

1. Definition of topological entropy

Anyway, we need new invariants. One extremely important one is topological entropy, which measures in some sense the “complexity” of ${T}$. Consider the following problem. For a natural number ${n}$, consider segments ${x, Tx, \dots, T^{n-1}x}$ for all ${x \in X}$. How many of them are there? Clearly, the answer will be infinite in general. But we can count how many we need to get a dense packing in the space of all such segments.
To be precise, for ${n \in \mathbb{N}, \epsilon>0}$, define the number ${S(n, \epsilon,T)}$ to be the minimal natural number ${m}$ such that there exist points ${y_1, \dots, y_m}$ such that for every ${x \in X}$, there is some ${j}$ such that $\displaystyle d(T^ix , T^i y_j) < \epsilon, \ \ 0 \leq i \leq n-1.$ Here ${d}$ is the metric on ${X}$. The topological entropy ${h_{top}(T)}$ is defined as $\displaystyle \boxed{ h_{top}(T)} = \lim_{\epsilon \rightarrow 0} \limsup_{n \rightarrow \infty} \frac{1}{n} \log S(n, \epsilon, T).$

This is a rather complex definition, so it will be useful to pause to think again about it. Another way to do this is to introduce a new metric on ${X}$. Namely, we define the metric ${d_n}$ via ${d_n(x,y) = \max_{0 \leq i < n} d(T^ix, T^iy)}$. Then, in any metric space ${A, \delta}$, we can call a subset ${B \subset A}$ ${\epsilon}$-dense if every point of ${A}$ is of distance ${<\epsilon}$ from some point of ${B}$. The selection of points ${y_1, \dots, y_m}$ as above was made so that ${\{y_1, \dots, y_m\}}$ is an ${\epsilon}$-dense set—indeed, the smallest such—in ${X}$ endowed with the metric ${d_n}$. This provides some motivation for the definition.

There is a variation on the idea of ${\epsilon}$-dense: namely, ${\epsilon}$-separated. This means that any two distinct points in the given subset (which we call ${\epsilon}$-separated) have distance ${\geq \epsilon}$. The problem of finding a maximal ${\epsilon}$-separated set (“to pack the points such that they are far away from each other”) is related to the problem of finding a minimal ${\epsilon}$-dense set. Namely, one can check that a minimal ${2\epsilon}$-dense set is ${\epsilon}$-separated, and similarly a maximal ${\epsilon}$-separated set is ${2\epsilon}$-dense. This provides another way of thinking of topological entropy. Let ${S'(n,\epsilon,T)}$ denote the cardinality of a maximal ${\epsilon}$-separated subset of ${(X, d_n)}$.
Then $\displaystyle h_{top}(T) = \lim_{\epsilon \rightarrow 0} \limsup_{n \rightarrow \infty} \frac{1}{n} \log S'(n, \epsilon, T).$

2. A more natural definition

I personally find this definition a little strange. For one thing, it appears superficially to depend on the metric ${d}$, while we supposedly just care about the topological structure. In addition, the formula is rather complicated. We have yet to show that it is invariant under topological conjugacy, in fact. The original definition of Adler, Konheim, and McAndrew is simpler and seems more natural to me; it is defined very explicitly in terms of coverings. It does not even use the metric structure of ${X}$. So, fix a compact space ${X}$, and let ${T: X \rightarrow X}$ be a continuous map, as before. Now an open covering will be denoted ${\mathfrak{A}}$. The refinement ${\mathfrak{A} \vee \mathfrak{B}}$ of two open coverings ${\mathfrak{A}, \mathfrak{B}}$ is just the covering ${\{ U \cap V, U \in \mathfrak{A}, V \in \mathfrak{B}\}}$. We define the size ${\mathcal{N}(\mathfrak{A})}$ of the cover ${\mathfrak{A}}$ to be the cardinality of the minimal subcover; obviously ${\mathcal{N}(\mathfrak{A} \vee \mathfrak{B}) \leq \mathcal{N}(\mathfrak{A}) \mathcal{N}(\mathfrak{B})}$. Given an open cover ${\mathfrak{A}}$, we define the inverse image ${T^{-1}(\mathfrak{A})}$ via ${\{ T^{-1}(U), U \in \mathfrak{A}\}}$; it is clear that ${\mathcal{N}(T^{-1}(\mathfrak{A})) \leq \mathcal{N}(\mathfrak{A})}$. The following theorem gives another definition of topological entropy (which is how Walters introduces it).

Theorem 1 The topological entropy is the supremum of $\displaystyle \lim_{n \rightarrow \infty} \frac{1}{n} \log \mathcal{N}( \mathfrak{A} \vee T ^{-1}( \mathfrak{A}) \vee \dots \vee T^{-(n-1)}(\mathfrak{A}))$ over all open covers ${\mathfrak{A}}$.

This result actually follows rather easily from the definitions. Note that the limit actually exists, because if ${c_n = \log \mathcal{N}( \mathfrak{A} \vee T ^{-1}( \mathfrak{A}) \vee \dots \vee T^{-(n-1)}(\mathfrak{A}))}$, then the properties of ${\mathcal{N}}$ mentioned above imply that ${c_{n+m} \leq c_n +c_m}$, from which it is a straightforward exercise in analysis that ${\lim \frac{c_n}{n}}$ exists and equals the infimum ${\inf_n \frac{c_n}{n}}$. Indeed, suppose ${\mathfrak{A}}$ is the cover by all ${\epsilon}$-balls. Take the metric ${d_n}$ as above. Any set ${\bigcap_{i=0}^{n-1} T^{-i}(U_i)}$ for ${U_i \in \mathfrak{A}}$ has ${d_n}$-diameter at most ${\epsilon}$. Then if ${S(n, \epsilon, T)}$ is the size of a minimal ${\epsilon}$-spanning set with respect to the metric ${d_n}$ as above, clearly $\displaystyle S(n, \epsilon, T) = \mathcal{N}(\mathfrak{A} \vee T^{-1} (\mathfrak{A}) \vee \dots \vee T^{-(n-1)}(\mathfrak{A})),$ because taking one point from each set in the covering ${\mathfrak{A} \vee T^{-1}(\mathfrak{A}) \vee \dots \vee T^{-(n-1)}(\mathfrak{A})}$ gives an ${\epsilon}$-spanning set for ${d_n}$, and this spanning set is minimal if and only if the cover is minimal. It follows that $\displaystyle \lim_{n \rightarrow \infty} \frac{1}{n} \log \mathcal{N}( \mathfrak{A} \vee T ^{-1}( \mathfrak{A}) \vee \dots \vee T^{-(n-1)}(\mathfrak{A})) = \lim_{n \rightarrow \infty} \frac{1}{n} \log S(n, \epsilon, T).$ In particular, the topological entropy is less than or equal to $\displaystyle \lim_{n \rightarrow \infty} \frac{1}{n} \log \mathcal{N}( \mathfrak{A} \vee T ^{-1}( \mathfrak{A}) \vee \dots \vee T^{-(n-1)}(\mathfrak{A}))$ and is equal to the limit as ${\epsilon \rightarrow 0}$ of that quantity for ${\mathfrak{A}}$ the cover by ${\epsilon}$-balls. Now, if ${\mathfrak{A}}$ is any cover, I claim that the limit exists and is at most ${h_{top}(T)}$. This will prove the theorem. This is because there is a Lebesgue number ${\epsilon>0}$ for ${\mathfrak{A}}$, and the limit is at most that of the limit with ${\mathfrak{A}}$ replaced by the cover of ${\epsilon}$-balls.
So we can reduce to this case, which was already handled above. Incidentally, it clearly suffices to take ${\mathfrak{A}}$ finite (because shrinking ${\mathfrak{A}}$ only increases the limit). This equation is evidently invariant under topological conjugacy, so the theorem implies: Corollary 2 Topological entropy is invariant under topological conjugacy. Next time, we’re going to compute some explicit examples of what this actually means, as well as proving a few more elementary properties.
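The metric definition above is concrete enough to experiment with numerically. Below is a rough Python sketch (all names mine) that greedily builds an $\epsilon$-separated set for the Bowen metric $d_n$ on a finite grid of the circle, applied to the doubling map $T(x)=2x \bmod 1$, whose topological entropy is $\log 2$. This is only an illustration: the greedy grid count merely lower-bounds $S'(n,\epsilon,T)$ up to the grid resolution, and the $\frac{1}{n}\log S'$ values approach $\log 2$ slowly because of the $\epsilon$-dependent constant.

```python
import math

def circle_dist(x, y):
    # metric d on the circle R/Z
    d = abs(x - y) % 1.0
    return min(d, 1.0 - d)

def bowen_dist(x, y, n, T):
    # d_n(x, y) = max over 0 <= i < n of d(T^i x, T^i y)
    best = 0.0
    for _ in range(n):
        best = max(best, circle_dist(x, y))
        x, y = T(x), T(y)
    return best

def separated_count(n, eps, T, grid=2000):
    """Greedily build an eps-separated set (for d_n) from grid points.

    The greedy set is maximal among grid points, so this lower-bounds
    the true S'(n, eps, T) up to the grid resolution.
    """
    chosen = []
    for k in range(grid):
        x = k / grid
        if all(bowen_dist(x, y, n, T) >= eps for y in chosen):
            chosen.append(x)
    return len(chosen)

doubling = lambda x: (2.0 * x) % 1.0
for n in (1, 3, 5):
    c = separated_count(n, 0.1, doubling)
    # the estimates decrease toward log 2 ~ 0.693 as n grows
    print(n, c, math.log(c) / n)
```

For the doubling map the count roughly doubles with each extra step of the orbit, which is exactly the exponential growth rate that the entropy measures.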
https://www.ias.edu/video/k-motives-and-koszul-duality-geometric-representation-theory
# K-Motives and Koszul Duality in Geometric Representation Theory

Perverse sheaves and intersection cohomology are central objects in geometric representation theory. This talk is about their long-lost K-theoretic cousins, called K-motives. We will discuss definitions and basic properties of K-motives and explore potential applications to geometric representation theory. For example, K-motives shed a new light on Beilinson-Ginzburg-Soergel's Koszul duality — a remarkable symmetry in the representation theory and geometry of two Langlands dual reductive groups. We will see that this new form of Koszul duality does not involve any gradings or mixed geometry, which are as essential as mysterious in the classical approaches.

### Affiliation

Max Planck Institute

### Speaker

Jens Eberhardt
https://www.physicsforums.com/threads/question-about-the-derivation-of-the-gravitational-law.789541/
# Question about the Derivation of the Gravitational Law

• #1

## Main Question or Discussion Point

The derivation of the law has been put up in the forums but I have a question regarding its derivation. I understood everything from the assumptions to the application of Newton's Third Law, but I got stuck at this step: $$\frac{m}{k} = \frac{M}{k'}$$. This is similar to $$\frac{C}{M} = \frac{c}{m} = \frac{k}{4 \pi^2}$$ at this site, http://www.relativitycalculator.com/Newton_Universal_Gravity_Law.shtml. According to the same site, the next step requires the force to be squared. Why is this so? Is it merely to acquire the force $F$ between the two bodies? Aren't there any other ways to calculate the force other than multiplication?

• #2 Nugatory (Mentor)

> According to the same site, the next step requires the force to be squared. Why is this so? Is it merely to acquire the force $F$ between the two bodies? Aren't there any other ways to calculate the force other than multiplication?

It's just a convenient algebra trick to get both $m$ and $M$ into the equation for $f$. We have $f=f'$ so we can multiply both sides of that equation by $f$ to get one equation that can be solved for $f$ in terms of $k$, $m$, and $M$.

• #3

Alright. Thank you for your help. :D
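For what it's worth, Nugatory's algebra trick can be spelled out. The arrangement of the constants below is mine (the linked page writes them as $c$, $C$); with it, Newton's third law plus one multiplication recovers the inverse-square law:

```latex
% Force of M on m, and of m on M (k' collects the M-dependence, k the m-dependence):
f = \frac{k' m}{r^2}, \qquad f' = \frac{k M}{r^2},
\qquad f = f' \;\Longrightarrow\; \frac{m}{k} = \frac{M}{k'}.
% Call the common ratio 1/G, i.e. k = Gm and k' = GM. Multiplying the two
% expressions for the (equal) forces is what the "squaring" achieves:
f^2 = f\,f' = \frac{k k'\, mM}{r^4} = \frac{(Gm)(GM)\,mM}{r^4}
    = \left(\frac{GmM}{r^2}\right)^2
\;\Longrightarrow\; f = \frac{GmM}{r^2}.
```

The multiplication is just the symmetric way to get both masses into a single equation for $f$; taking the square root at the end undoes it.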
https://cdsweb.cern.ch/collection/ATLAS%20PUB%20Notes?ln=sk&as=1
# ATLAS PUB Notes

Recently added:

- 2021-07-30 18:58: *A precise interpretation for the top quark mass parameter in ATLAS Monte Carlo simulation.* This note relates the top quark mass parameter in simulated pp collisions with a 13 TeV centre-of-mass energy produced by ATLAS to a well-defined field-theoretical mass scheme. [...] ATL-PHYS-PUB-2021-034. - 2021. Original Communication (restricted to ATLAS) - Full text
- 2021-07-27 20:24: *Summary Plots for Heavy Particle Searches and Long-lived Particle Searches - July 2021.* The results of searches for heavy particles from the Exotics and HDBS physics groups and long-lived particles from the Exotics and SUSY physics groups are summarized in plots for a representative set of models. [...] ATL-PHYS-PUB-2021-033. - 2021. - 5 p. Original Communication (restricted to ATLAS) - Full text
- 2021-07-27 20:23: *Standard Model Summary Plots June 2021.* This note presents cross section summary plots for ATLAS cross section measurements as of June 2021. ATL-PHYS-PUB-2021-032. - 2021. - 19 p. Original Communication (restricted to ATLAS) - Full text
- 2021-07-27 20:22: *Summary of non-resonant and resonant Higgs boson pair searches from the ATLAS experiment.* This note presents a summary of results from the most recent ATLAS Higgs boson pair ($HH$) searches. [...] ATL-PHYS-PUB-2021-031. - 2021. Original Communication (restricted to ATLAS) - Full text
- 2021-07-27 20:20: *hMSSM summary plots from direct and indirect searches.* This note presents an update of the plots that summarize the interpretation of various searches for additional Higgs bosons beyond the Standard Model, as well as the Higgs boson coupling combination, in the hMSSM. [...] ATL-PHYS-PUB-2021-030. - 2021. Original Communication (restricted to ATLAS) - Full text
- 2021-07-27 19:29: *Performance of $W$/$Z$ taggers using UFO jets in ATLAS.* The identification of boosted hadronic decays of $W$ and $Z$ bosons is a key technique for a variety of searches and measurements at the Large Hadron Collider. [...] ATL-PHYS-PUB-2021-029. - 2021. Original Communication (restricted to ATLAS) - Full text
- 2021-07-25 22:40: *Identification of hadronically-decaying top quarks using UFO jets with ATLAS in Run 2.* The identification of hadronically-decaying top quarks with large transverse momenta plays an important role for the ATLAS physics programme at the Large Hadron Collider. [...] ATL-PHYS-PUB-2021-028. - 2021. Original Communication (restricted to ATLAS) - Full text
- 2021-07-25 22:30: *Digluon Tagging using $\sqrt{s}=13$ TeV $pp$ Collisions in the ATLAS Detector.* Jet substructure has played a key role in the development of two-prong taggers designed to identify Lorentz-boosted massive particles. [...] ATL-PHYS-PUB-2021-027. - 2021. Original Communication (restricted to ATLAS) - Full text
- 2021-07-25 18:41: *Sensitivity to exclusive $WW$ production in photon scattering at the High Luminosity LHC.* The prospects of measuring the process $\gamma\gamma\rightarrow W^+W^-$ with the ATLAS detector during the high luminosity phase of the Large Hadron Collider (HL-LHC) with a centre-of-mass energy of 14 TeV and an integrated luminosity of 3000 fb$^{-1}$ are studied. [...] ATL-PHYS-PUB-2021-026. - 2021. - 19 p. Original Communication (restricted to ATLAS) - Full text
- 2021-07-23 12:41: *ATLAS Computing Acknowledgements.* Document listing the centres providing major contributions to ATLAS computing in terms of CPU resources in 2020 and 2021. [...] ATL-SOFT-PUB-2021-003. - 2021. Original Communication (restricted to ATLAS) - Full text
http://mathoverflow.net/questions/17128/group-completion-theorem?sort=votes
# Group completion theorem

Let $M$ be a topological monoid. How does the homology-formulation of the group completion theorem, namely (see McDuff, Segal: Homology Fibrations and the "Group-Completion" Theorem)

If $\pi_0$ is in the centre of $H_*(M)$ then $H_*(M)[\pi_0^{-1}]\cong H_*(\Omega BM)$

imply that $M\to \Omega BM$ is a weak homotopy equivalence if $\pi_0(M)$ is already a group? I don't see the connection to homology. Can one prove the latter (perhaps weaker) statement more easily than the whole group completion theorem? A topological group completion $G(M)$ of $M$ should transform the monoid $\pi_0(M)$ into its (standard algebraic) group completion. But a space with this property is not unique. Why is $\Omega BM$ the "right" choice? Perhaps this is clear when I see the connection to the homology-formulation above.

The statement that $M \to \Omega BM$ is a weak equivalence when $M$ is a group-like topological monoid is indeed easier: the map $EM = B(M \wr M) \to BM$ is then a quasi-fibration, has geometric fibre $M$ over the basepoint and homotopy fibre $\Omega BM$. However the homological group-completion theorem also implies this: if $M$ is group-like then $\pi_0(M)$ already consists of units in $H_*(M)$, so it just says that $M \to \Omega BM$ is a homology equivalence. Each of these spaces has homotopy equivalent path components, so it is then enough to observe that the map of 0 components is a homology equivalence between simple spaces, so a weak homotopy equivalence. However it is perverse to prove the "$M \simeq \Omega BM$" result this way.

- Thank you, Oscar. Where can I find statements about $EM$ and $BM$ with $M$ a topological monoid (and not a topological group)? The constructions are the same for monoids as for groups, I think, but how to prove that $EM\to BM$ is a quasi-fibration (is this a Serre-fibration?)? Is there something like an "$M$-principal bundle" where $M$ is a monoid?
– veit79 Mar 5 '10 at 12:55 That $EM \to BM$ is a quasi-fibration when $M$ is group like is proved in e.g. J.P. May "Classifying Spaces and Fibrations" Memoirs AMS 155, Theorem 7.6. – Oscar Randal-Williams Mar 5 '10 at 13:21 Do you know a free reference? I am not able to get this book without a lot of effort. – veit79 Mar 5 '10 at 15:45 It is available on May's homepage: math.uchicago.edu/~may/BOOKS/Classifying.pdf – Oscar Randal-Williams Mar 5 '10 at 18:29 Tank you, Oscar. I suppose $M\wr M$ is a wreath product. Do you know where I can read about "topological" wreath products? – veit79 Mar 14 '10 at 16:24 Well, if $\pi_0=\pi_0(M)$ is already a group, then $H_*(M)\approx H_*(M)[\pi_0^{-1}]$. So $M$ and $\Omega B M$ have the same homology in this case. This isn't quite enough on its own, but if you can produce a map $M\to \Omega BM$ which induces this homology isomorphism, then the result follows using the Hurewicz theorem. What McDuff-Segal actually do is show that if $M$ is a topological monoid which acts on a space $X$, in such a way that every $m\in M$ induces a homology equivalence $x\mapsto mx\colon X\to X$, then you can produce a "homology fibration" $f:X_M\to BM$ with fiber $X$. "Homology fibration" means that the fibers of $f$ are homology-equivalent to the homotopy fibers of $f$. If $\pi_0M$ is an abelian group, you can find an $X$ such that $X_M$ is contractible, and the fiber of $f:X_M\to BM$ is $X$. This gives the homology equivalence you want, since the homotopy fibers of $f$ look like $\Omega BM$. Take a look at McDuff and Segal's paper, it's nice. There is a also a treatment in terms of simplicial sets in Goerss-Jardine, *Simplicial Homotopy Theory". Added: The functor $M\mapsto \Omega BM$ is the "total derived functor of group completion". The only convincing explanation of why this is so (that I'm aware of) is in Dwyer-Kan, Simplicial Localizations of Categories, JPAA (17) 267-283. 
Though they work simplicially, and work more generally (with categories in place of monoids), they show that if $M$ is a cofibrant simplicial monoid, then the simplicial monoid $M[M^{-1}]$ is weakly equivalent to the space $\Omega |BM|$.

Thank you, Charles. Why is a map inducing homology isomorphisms a homotopy equivalence if $M$ is not simply connected? – veit79 Mar 5 '10 at 13:15

This is not true in general, but is true when you have a map of grouplike H-spaces - in this case, the fundamental groups act trivially on the higher homotopy groups, and the relative Whitehead theorem tells you that the first relative homology group coincides with the first relative homotopy group. – Tyler Lawson Mar 5 '10 at 14:51

Ah, I see. Thank you. – veit79 Mar 5 '10 at 15:06

To elaborate on Tyler's comment: One more step is needed to go from trivial action of the fundamental group on the higher homotopy groups of the two spaces to trivial action on the relative homotopy groups of the mapping cylinder. This can be done by an obstruction theory argument as in Proposition 4.74 of my book. – Allen Hatcher Mar 5 '10 at 15:35
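The quasi-fibration argument from the accepted answer can be recorded compactly. The following LaTeX fragment is a sketch only, writing out the long exact sequence step under the stated hypothesis that $M$ is group-like:

```latex
% For group-like M, the quasi-fibration  M -> EM -> BM  has EM contractible,
% so its long exact sequence of homotopy groups reads
\[
  \cdots \longrightarrow \pi_{n+1}(EM) \longrightarrow \pi_{n+1}(BM)
  \stackrel{\partial}{\longrightarrow} \pi_n(M)
  \longrightarrow \pi_n(EM) \longrightarrow \cdots
\]
% With \pi_*(EM) = 0, the boundary map is an isomorphism
% \pi_{n+1}(BM) \cong \pi_n(M), i.e. \pi_n(\Omega BM) \cong \pi_n(M),
% which is the content of "M -> \Omega BM is a weak equivalence".
```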
https://en.m.wikisource.org/wiki/Translation:On_the_Apparent_Mass_of_the_Ions
On the Apparent Mass of the Ions (1900) by Hendrik Lorentz, translated from German by Wikisource. In German: Über die scheinbare Masse der Ionen, Physikalische Zeitschrift 2, 1900/01, pp. 78-80.

H. A. Lorentz (Leiden): On the apparent mass of the ions.

It is known that from observations of cathode rays one can derive the ratio $\tfrac{e}{m}$, i.e. the ratio between the charge $e$ of an ion and its mass $m$. The question arises what is meant by that mass. In any case we must attribute an apparent mass to the ion, as it generates a certain energy in the ether by virtue of its motion. This apparent mass will be denoted by $m_0$. It is possible that the ion also possesses a real mass in the ordinary sense of the word; in this case $m_0 < m$. If this is not the case, then $m_0 = m$. So we have the inequality

$$\frac{e}{m_0} > \frac{e}{m}$$

when there is still a real mass besides the apparent mass; otherwise

$$\frac{e}{m_0} = \frac{e}{m}.$$

So we want to write

$$\frac{e}{m_0} \geqq \frac{e}{m},$$

where $\tfrac{e}{m} = 10^7$. Now

$$m_0 = \frac{8}{3}\pi R \sigma e$$

if we conceive the ion as a sphere; here $R$ is the radius of this sphere and $\sigma$ is the surface density of the charge. This formula allows an interesting conclusion about the radius of the ions: if we substitute this value for $m_0$ into the inequality, we obtain an inequality for the radius.
We have

$$4\pi R^2 \cdot \sigma = e,$$

thus

$$m_0 = \frac{8}{3}\pi R \sigma e = \frac{8}{3}\pi R e \cdot \frac{e}{4\pi R^2} = \frac{2e^2}{3R}$$

and thus

$$\frac{e}{m_0} = \frac{3R}{2e},$$

so that

$$\frac{3R}{2e} \geq 10^7 \quad \text{and} \quad R > 10^7 \cdot \frac{2}{3} e.$$

The magnitude $e$ is unfortunately not known. If we take the charge of an ion in a cathode ray to be as great as that of an electrolytic hydrogen ion, and presuppose the size of a hydrogen molecule, we obtain for $R$ a magnitude of order $10^{-12}$ cm; that is certainly not an arbitrarily small magnitude, but a lower limit.

The question of whether or not a real mass exists besides the apparent mass of an ion is extremely important, for with it we touch the question of the relation of ponderable matter to ether and electricity. I am far from announcing a decision, but I would like to mention a few questions whose resolution could potentially bring us further. The first question is whether an ion rotates in a magnetic field. Actually, we should expect that: if an ion is present and a magnetic field is produced, a rotation arises, as can easily be derived from the formation of induced currents. Of course this is also the case when the ion flies into an already existing magnetic field. The velocity of rotation will depend on the magnitude of the mass; if only apparent mass is present, and hence only a corresponding moment of inertia, the rotation velocity has a certain value. If, however, a real moment of inertia is added, the rotation slows down. Unfortunately I cannot find any phenomenon from which we could conclude anything about this rotation.
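Returning to the algebra above: Lorentz's reduction of $m_0$ is easy to verify numerically, since the factor $\pi$ cancels between the two expressions. A minimal check with a few arbitrary positive values of $e$ and $R$ (values are illustrative only):

```python
import math

def m0(e, R):
    """Lorentz's apparent mass m0 = (8/3)*pi*R*sigma*e for a charged sphere."""
    sigma = e / (4 * math.pi * R**2)        # from 4*pi*R^2 * sigma = e
    return (8 / 3) * math.pi * R * sigma * e

# Check the two identities m0 = 2e^2/(3R) and e/m0 = 3R/(2e)
for e, R in [(1.0, 1.0), (4.8e-10, 2.0e-13), (2.0, 5.0)]:
    assert math.isclose(m0(e, R), 2 * e**2 / (3 * R))
    assert math.isclose(e / m0(e, R), 3 * R / (2 * e))
```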
A second means by which we might decide the question of the relationship between the apparent and real mass is the following: the value for the apparent mass was given above only in first approximation. If the velocity is comparable to the velocity of light, then additional terms are added. For a straight path of the ion we can calculate the intensity of the field and the magnitude of the energy, and deduce from these the mass factor. In general the trajectory will be curved through the influence of the magnetic field, e.g. circular; the calculation of the mass factor then becomes more complicated, but it can be carried out. If we denote by $m_0$ the expression above and define $q$ as the ratio of the ion velocity to that of light, it follows in second approximation for the apparent mass of the ion in linear motion:

$$m_0\left(1+\frac{6}{5}q^2\right),$$

while in circular motion the term with $q^2$ has a different coefficient. These terms of second order could now perhaps become observable, because the velocity of cathode rays rises up to a third of that of light, hence $q=\tfrac{1}{3}$ and $q^2=\tfrac{1}{9}$. To come to a decision, we could think of experiments such as those done by Lenard to examine the influence of electric forces on the velocity of cathode rays. He has shown that the magnetic deflectability of the cathode rays, which is of course smaller the greater the speed, changes when the rays pass through the space between two charged capacitor plates in the direction of the electric lines of force. We could measure the magnetic deflection in the case of an uncharged capacitor, then in the case of charge in one direction, and then for the other direction.
Thus we would obtain three different values of deflectability, between which a simple relation should exist if the terms of second order can be neglected. If we measure each time the magnetic field strength required for a particular deflection, then the squares of these three field strengths should form an arithmetic progression. A deviation from this relationship would indicate that the terms with $q^2$ may not be neglected, and that therefore in any case the apparent mass is noticeable. More precise data could decide concerning the ratio between the real and the apparent mass, and concerning the question whether a real mass exists at all. It turns out that Lenard's experiments came close to deciding about the existence of the terms of second order. (Self-report of the lecturer.)

Discussion. (Reviewed by the participants.)

W. Wien. I was recently concerned with similar issues, and would like to stress that Lenard has observed cathode rays at low velocities, triggered under the influence of ultraviolet light. There he found a small value for the ratio of mass to charge, and the decrease lies in the direction required by the theory. I have tried to go beyond Lorentz's position by posing myself the question whether it would suffice to consider only the apparent mass, that is, to omit the inertial mass and replace it with the electromagnetically defined apparent mass, in order to present the mechanical and electromagnetic phenomena in a uniform way. For so far the magnetic and mechanical phenomena are connected only by the energy principle. I have tried to pose the question whether we could attempt, by Maxwell's theory, to encompass mechanics as well. The possibility of an electromagnetic explanation of mechanics was given after Lorentz had developed a conception of the law of gravity according to which it would be very similar to electrostatic forces.
We would have to think of matter as composed only of very small positive and negative charges at a certain distance from each other. On this conception the ponderable mass is not constant but depends on the velocity, and indeed we obtain terms depending on even powers of the ratio of the velocity to the velocity of light. The numerical factor by which the second term is multiplied depends on the curvature of the trajectory, but also on the shape of the electric charge. Depending on how we choose the form of the electrified molecules, we arrive at different numerical factors. For the ordinary motions on earth it vanishes, because the velocity is very small. With planetary motions we can probably achieve something, because there we reach velocities at which the terms of second order have to be considered. On the assumption of a specific type of charge, leading to the simplest electromagnetic field, these terms become relevant in such a way that the accelerations of two bodies under gravitation are the same, up to a slightly different numerical factor, as if the bodies attracted each other with constant mass according to Weber's law. The electromagnetically defined mass comes into play as if not Newton's but Weber's law applied.

Lorentz. In essence we agree; but Wien already wants to go further than I do. In any case, it seemed of interest to me to look for means by which we can come to a decision on the issue discussed. One more thing I would like to add: I made the assumption that the sphere which forms an ion is rigid. But perhaps one might think that the sphere is transformed into an ellipsoid when in motion. This has some similarity with the variety that was pointed out by Wien.

Voigt. I would like to pose a question to the lecturer concerning the reflection of cathode rays: should a rotating ion not be reflected differently than a non-rotating one?

Lorentz. Certainly, if one imagines that the reflection happens at a surface.
But if one regards the reflection, as seems more likely to me, as caused by forces that act at some distance from the surface of the ion, then these surely act on the center, and the influence of rotation vanishes.

Warburg. What does the theory say about the velocity of the ions during reflection? Does it remain the same?

Lorentz. As far as I know, yes. I have not elaborated on this.

Warburg. Merritt has found that the velocity upon reflection is unchanged. But the experiments of Cady on the energy of cathode rays are in contradiction to this, so I have thought that the experiments of Merritt may not be completely correct, and that perhaps a velocity change could be found. I wanted to ask whether the theory says something in this respect.

Lorentz. I cannot say this right now.
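Lorentz's proposed capacitor test can be sketched numerically. In the nonrelativistic limit (second-order terms in $q$ dropped), the field $H$ needed to produce a fixed magnetic deflection is proportional to the ray velocity $v$, and charging the capacitor shifts $v^2$ by $\pm 2(e/m)U$; the three values of $H^2$ then form an arithmetic progression, as stated in the lecture. All numbers below are illustrative, not measured values:

```python
# Illustrative values only; units arbitrary.
v0_sq = 100.0    # v^2 with the capacitor uncharged
dv_sq = 9.0      # the shift 2*(e/m)*U produced by charging the capacitor

# For a fixed deflection, H is proportional to v, so H^2 tracks v^2
# (proportionality constant set to 1 for the sketch):
H2_minus, H2_zero, H2_plus = v0_sq - dv_sq, v0_sq, v0_sq + dv_sq

# The arithmetic-progression relation predicted when second-order
# terms are negligible:
assert H2_minus + H2_plus == 2 * H2_zero
```

A measured deviation from this relation would signal that the $q^2$ corrections to the apparent mass are no longer negligible, which is exactly the effect Lorentz hoped to detect.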
http://books.duhnnae.com/2017/aug6/15029867789-Molecular-correlations-and-solvation-in-simple-fluids-Condensed-Matter-Soft-Condensed-Matter.php
# Molecular correlations and solvation in simple fluids - Condensed Matter > Soft Condensed Matter

Abstract: We study the molecular correlations in a lattice model of a solution of a low-solubility solute, with emphasis on how the thermodynamics is reflected in the correlation functions. The model is treated in Bethe-Guggenheim approximation, which is exact on a Bethe lattice (Cayley tree). The solution properties are obtained in the limit of infinite dilution of the solute. With $h_{11}(r)$, $h_{12}(r)$, and $h_{22}(r)$ the three pair correlation functions as functions of the separation $r$ (subscripts 1 and 2 referring to solvent and solute, respectively), we find for $r \geq 2$ lattice steps that $h_{22}(r)-h_{12}(r) \equiv h_{12}(r)-h_{11}(r)$. This illustrates a general theorem that holds in the asymptotic limit of infinite $r$. The three correlation functions share a common exponential decay length (correlation length), but when the solubility of the solute is low the amplitude of the decay of $h_{22}(r)$ is much greater than that of $h_{12}(r)$, which in turn is much greater than that of $h_{11}(r)$. As a consequence the amplitude of the decay of $h_{22}(r)$ is enormously greater than that of $h_{11}(r)$. The effective solute-solute attraction then remains discernible at distances at which the solvent molecules are essentially no longer correlated, as found in similar circumstances in an earlier model. The second osmotic virial coefficient is large and negative, as expected. We find that the solvent-mediated part $W(r)$ of the potential of mean force between solutes, evaluated at contact, $r=1$, is related in this model to the Gibbs free energy of solvation at fixed pressure, $\Delta G_p^*$, by $(Z-2)\,W(1) + \Delta G_p^* \equiv p v_0$, where $Z$ is the coordination number of the lattice, $p$ the pressure, and $v_0$ the volume of the cell associated with each lattice site.
A large, positive $\Delta G_p^*$ associated with the low solubility is thus reflected in a strong attraction (large negative $W$) at contact, which is the major contributor to the second osmotic virial coefficient. In this model, the low solubility (large positive $\Delta G_p^*$) is due partly to an unfavorable enthalpy of solvation and partly to an unfavorable solvation entropy, unlike in the hydrophobic effect, where the enthalpy of solvation itself favors high solubility, but is outweighed by the unfavorable solvation entropy.

Author: Marco A. A. Barbosa, B. Widom

Source: https://arxiv.org/
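The difference identity quoted in the abstract can be illustrated with a toy computation, under the simplifying assumption (consistent with the abstract's shared decay length) that each $h_{ij}(r)$ is a pure exponential $A_{ij}e^{-r/\xi}$ with a common correlation length $\xi$; the identity then reduces to the amplitude relation $A_{22}-A_{12}=A_{12}-A_{11}$. The numbers below are invented purely for illustration:

```python
import math

xi = 2.0                  # common correlation length (assumed)
A11, A12 = 0.01, 0.50     # invented amplitudes
A22 = 2 * A12 - A11       # chosen so that A22 - A12 == A12 - A11

def h(A, r):
    """Toy pair correlation: pure exponential decay with shared xi."""
    return A * math.exp(-r / xi)

# The difference identity then holds at every separation r >= 2:
for r in range(2, 12):
    assert math.isclose(h(A22, r) - h(A12, r), h(A12, r) - h(A11, r))
```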
http://math.stackexchange.com/questions/530334/summing-dn-floor-functions-d-times-summing-n-floor-functions
# Summing $dn$ floor functions = $d$ times Summing $n$ floor functions

Fix integer $d>1$, and assume real number $x\in[0,1]$. I claim the following statement: $\sum_{k=1}^{dn}\lfloor kx\rfloor=d\sum_{k=1}^n\lfloor kx\rfloor$ is true iff $x\in[0,\frac{1}{dn}]$. I can check it in various cases, and the if-direction is obvious. But my combinatorics skills are not sharp, so is there a counterexample?

You must have $x \in \left[0,\frac{1}{dn}\right)$, for if $x = \frac{1}{dn}$, then the left hand side is $1$, and the right hand side is $0$. Generally, if $\frac{1}{dn} \leqslant x < \frac{1}{n}$, then $$d\sum_{k=1}^n \lfloor kx\rfloor = 0 < \lfloor dnx\rfloor \leqslant \sum_{k=1}^{dn} \lfloor kx\rfloor,$$ so in that case you don't have equality. And if $x \geqslant \frac1n$, then you have \begin{align} \sum_{k=1}^{dn} \lfloor kx\rfloor - d\sum_{k=1}^n \lfloor kx\rfloor &= \sum_{j=0}^{d-1} \left(\sum_{k=1}^n \left(\lfloor (k+jn)x\rfloor - \lfloor kx\rfloor\right)\right)\\ &\geqslant\sum_{j=0}^{d-1} \sum_{k=1}^n\left(\lfloor kx + j\rfloor - \lfloor kx\rfloor\right)\\ &\geqslant \sum_{j=0}^{d-1} nj\\ &= n\frac{d(d-1)}{2}\\ &> 0. \end{align}

@DanielFischer, if in addition I have the starting value to be scaled by $d$, i.e. $\sum^{dn}_{k=dk_0}$ and $d\sum^n_{k=k_0}$, do we stumble across a problem? – Chris Gerig Oct 24 '13 at 0:18

One problem is the number of summands. $\sum_{k=dk_0}^{dn}$ gives $d(n-k_0+1)+1$ values, $\sum_{k=k_0}^n$ gives $(n-k_0+1)$ values, so if the terms scaled exactly with $d$, you'd have an extra term. Depending on how that is handled, the range for which both sums have the same value ($0$) may be changed slightly ($x \in \left[0,\frac{1}{dn-1}\right)$ if the $k = dn$ term is omitted), but the strict inequality remains for larger $x$. – Daniel Fischer Oct 24 '13 at 8:12

@DanielFischer, Oh sorry there's a typo. It should be $\sum^{dn}_{k=dk_0+1}$ and $d\sum^{n}_{k=k_0+1}$.
Not sure how much this affects your statement, but I don't see exactly how dropping the $k=dn$ term gives us this new range (which I unfortunately can't drop). Ultimately I'm trying to take the original equality that I wrote in the question, and subtract from it the same equality but with smaller $n$. – Chris Gerig Oct 24 '13 at 8:52 If you only sum to $dn-1$, the largest term is $\lfloor (dn-1)x\rfloor$, for $x < \frac{1}{dn-1}$, that is $0$, hence all terms are $0$. But since you keep the upper limit and change the lower limit, that's not relevant. By the way, I mixed up the counts in the previous comment, not yet fully awake. I'm not sure I understand what you're trying to do. You take $\sum_{k=1}^{dn} \lfloor kx\rfloor = d\sum_{k=1}^n \lfloor kx\rfloor$ for $0 \leqslant x < \frac{1}{dn}$, and then subtract $\sum_{k=1}^{dm}\lfloor kx\rfloor = d\sum_{k=1}^m\lfloor kx\rfloor$ from it, for some $m < n$? And then? – Daniel Fischer Oct 24 '13 at 9:03 Ah no sorry, I'm looking for the true range of x after I form the new 'equality'. – Chris Gerig Oct 24 '13 at 9:12
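Daniel Fischer's characterization (equality iff $x \in [0, \tfrac{1}{dn})$) is easy to confirm by brute force with exact rational arithmetic; a quick sketch for one choice of $d$ and $n$:

```python
from fractions import Fraction
import math

def sums_equal(x, d, n):
    """Check sum_{k=1}^{dn} floor(kx) == d * sum_{k=1}^{n} floor(kx)."""
    lhs = sum(math.floor(k * x) for k in range(1, d * n + 1))
    rhs = d * sum(math.floor(k * x) for k in range(1, n + 1))
    return lhs == rhs

d, n = 3, 4
# Sweep x over [0, 1] in steps of 1/200 using exact fractions,
# so there is no floating-point ambiguity in floor(k*x).
for i in range(201):
    x = Fraction(i, 200)
    assert sums_equal(x, d, n) == (x < Fraction(1, d * n))
```

The exact arithmetic matters here: with floats, values of $kx$ that land exactly on an integer could be rounded to either side of the floor boundary.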
http://projecteuclid.org/euclid.aoms/1177693058;
## The Annals of Mathematical Statistics

### Random Variables with Independent Binary Digits

George Marsaglia

#### Abstract

Let $X = .b_1b_2b_3 \cdots$ be a random variable with independent binary digits $b_n$ taking values 0 or 1 with probability $p_n$ and $q_n = 1 - p_n$. When does $X$ have a density? A continuous density? A singular distribution? This note gives necessary and sufficient conditions for the distribution of $X$ to be: discrete: $\sum \min (p_n, q_n) < \infty$; singular: $\sum^\infty_{n=m}\lbrack\log (p_n/q_n)\rbrack^2 = \infty$ for every $m$; absolutely continuous: $\sum^\infty_{n=m}\lbrack\log (p_n/q_n)\rbrack^2 < \infty$ for some $m$. Furthermore, $X$ has a density that is bounded away from zero on some interval if and only if $\log (p_n/q_n)$ is a geometric sequence with ratio $\frac{1}{2}$ for $n > k$, and in that case the fractional part of $2^k X$ has an exponential density (increasing or decreasing, with the uniform a special case).

#### Article information

Source: Ann. Math. Statist., Volume 42, Number 6 (1971), 1922-1929.

Dates: First available: 27 April 2007

Permanent link: http://projecteuclid.org/euclid.aoms/1177693058

Digital Object Identifier: doi:10.1214/aoms/1177693058

Mathematical Reviews number (MathSciNet): MR298715

Zentralblatt MATH identifier: 0239.60015

#### Citation

Marsaglia, George. Random Variables with Independent Binary Digits. The Annals of Mathematical Statistics 42 (1971), no. 6, 1922--1929. doi:10.1214/aoms/1177693058. http://projecteuclid.org/euclid.aoms/1177693058.
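As a quick sanity check on the uniform special case mentioned in the abstract: with $p_n = \tfrac12$ for all $n$, the digits' independence gives the mean and variance of $X=\sum_n b_n 2^{-n}$ by termwise summation, and truncating at 60 digits reproduces the uniform values $\tfrac12$ and $\tfrac1{12}$ to machine precision. A minimal sketch:

```python
def mean_var(p):
    """Mean and variance of X = sum_n b_n 2^{-n}, digits independent
    with P(b_{n+1} = 1) = p[n]."""
    mean = sum(pn * 2.0 ** -(n + 1) for n, pn in enumerate(p))
    # Var(b_n 2^{-n}) = p_n q_n 4^{-n}; independence lets variances add.
    var = sum(pn * (1 - pn) * 4.0 ** -(n + 1) for n, pn in enumerate(p))
    return mean, var

m, v = mean_var([0.5] * 60)   # all p_n = 1/2: X is uniform on [0, 1]
assert abs(m - 0.5) < 1e-12
assert abs(v - 1 / 12) < 1e-12
```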
http://mathhelpforum.com/algebra/151216-finding-length-pool-variable-given-width.html
# Math Help - Finding the length of a pool with a variable given as the width...

1. ## Finding the length of a pool with a variable given as the width...

I am sure that this problem is very basic, but unfortunately, algebra is not my strong point. The problem states: The width of a pool is x. The length of the pool is 6 more than the width. What is the length of the pool? When I first saw this, I immediately thought of x + 6, meaning the length is 6 more than the width, but that just seems too easy and it is not really specifying the length. Is this a trick question with x + 6 as the answer, or am I missing something? If someone could explain this to me, I would really appreciate it.

2. Originally Posted by dclary
I am sure that this problem is very basic, but unfortunately, algebra is not my strong point. The problem states: The width of a pool is x. The length of the pool is 6 more than the width. What is the length of the pool? When I first saw this, I immediately thought of x + 6, meaning the length is 6 more than the width, but that just seems too easy and it is not really specifying the length. Is this a trick question with x + 6 as the answer, or am I missing something? If someone could explain this to me, I would really appreciate it.

length = x + 6 ... that's all.

3. Hi Skeeter,

Thank you for the quick reply. I tend to over-analyze things and thought I was right, but like I said, I thought it was just too easy. Thanks again, I really appreciate it. Maybe one of these days this algebra stuff will sink in :-)