https://www.biostars.org/p/9487833/
featureCounts for WGS instead of RNA-seq

Question (BioDH): Hello all, I have done whole genome sequencing and aligned the reads to a reference genome, so I have some BAM files. I want to get the number of reads mapped to specific regions defined in a GFF3 file. I have used featureCounts for RNA-seq but not for WGS. Can I use it to get read counts? Which library type do I need to specify? FYI, I did paired-end sequencing. If it is not recommended, any suggestions?

Answer (Martombo): featureCounts is going to count mapped reads in genomic intervals on whatever kind of data you feed it as input, so it can work on WGS. Of course you'll have to adjust the program options for it to make sense, depending on what your aim is. Be careful, for example, of the distinction between feature and meta-feature counting (gene level), which is the default for featureCounts, the attribute / feature type of the annotation file that you're using, and all the options for read filtering and interval overlap. I'm not sure what you are referring to with library type, though.

Comment: Library type would be accounting for stranded libraries in RNA-seq, which does not apply to WGS, so you can just leave that option unset. I would simply make a SAF file (see the featureCounts manual) of the features you want to count, and then just do:

featureCounts -a your.saf -F SAF -o out.counts your.bam

No magic here.
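A SAF file is just a five-column, tab-separated table with a header line (GeneID, Chr, Start, End, Strand), as described in the featureCounts manual, e.g.:

GeneID	Chr	Start	End	Strand
region1	chr1	1000	2000	+
region2	chr2	5000	8000	-

If your regions are in a GFF3 file, one possible conversion (untested; adjust the feature type in column 3 and the attribute key to match your annotation) is:

awk 'BEGIN{FS=OFS="\t"; print "GeneID","Chr","Start","End","Strand"}
     $3=="gene" {match($9, /ID=[^;]+/);
                 print substr($9, RSTART+3, RLENGTH-3), $1, $4, $5, $7}' your.gff3 > your.saf

For paired-end data, recent versions of featureCounts also take -p (and --countReadPairs) to count fragments rather than individual reads.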
https://rdrr.io/github/CharlotteJana/momcalc/f/README.md
# Introduction

The package momcalc includes different functions to calculate moments of some distributions. It is possible to calculate the moments of a normal, lognormal or gamma distribution symbolically. These distributions may be multivariate. The distribution or moments of the BEGG distribution can be calculated numerically. Raw moments can be transformed into central moments and vice versa. The package also provides a test concerning the modality of a distribution: it tests whether a one-dimensional distribution with compact support can be unimodal, based on the moments of that distribution.

You can install momcalc from GitHub with:

```r
# install.packages("devtools")
devtools::install_github("CharlotteJana/momcalc")
```

# Symbolic calculation of moments

The function symbolicMoments calculates the moments of a distribution symbolically, meaning that it handles the input as quoted expressions. It can work not only with numbers as input but also with any given expression. The distribution may be multivariate and of one of the following types: normal, lognormal or gamma. The type is specified with the argument distribution. For distribution = "normal", central moments are calculated. In the other cases, symbolicMoments returns raw moments.

```r
# raw moments of a one dimensional gamma distribution
symbolicMoments(distribution = "gamma", missingOrders = as.matrix(1:3, ncol = 1),
                mean = "μ", var = "σ")
#> [[1]]
#> σ * gamma(1 + μ^2/σ)/(μ * gamma(μ^2/σ))
#>
#> [[2]]
#> gamma(2 + μ^2/σ) * (σ/μ)^2/gamma(μ^2/σ)
#>
#> [[3]]
#> gamma(3 + μ^2/σ) * (σ/μ)^3/gamma(μ^2/σ)

# raw moments of a one dimensional lognormal distribution
symbolicMoments(distribution = "lognormal", missingOrders = as.matrix(1:2, ncol = 1),
                mean = 2, var = 1, simplify = FALSE)
#> [[1]]
#> exp(1 * (2 * log(2) - 0.5 * log(1 + 2^2)) + 0.5 * (1^2 * (log(1 +
#>     2 * 2) - log(2) - log(2))))
#>
#> [[2]]
#> exp(2 * (2 * log(2) - 0.5 * log(1 + 2^2)) + 0.5 * (2^2 * (log(1 +
#>     2 * 2) - log(2) - log(2))))

# evaluate the result
symbolicMoments(distribution = "lognormal", missingOrders = as.matrix(1:2, ncol = 1),
                mean = 2, var = 1, simplify = TRUE)
#> [1] 2 5
```

Note that the calculation in the case of a normal distribution is done with the function callmultmoments of the package symmoments. The following example can be found on Wikipedia:

```r
missingOrders <- matrix(c(4, 0, 0, 0,
                          3, 1, 0, 0,
                          2, 2, 0, 0,
                          2, 1, 1, 0,
                          1, 1, 1, 1), ncol = 4, byrow = TRUE)
cov <- matrix(c("σ11", "σ12", "σ13", "σ14",
                "σ12", "σ22", "σ23", "σ24",
                "σ13", "σ23", "σ33", "σ34",
                "σ14", "σ24", "σ34", "σ44"), ncol = 4, byrow = TRUE)
symbolicMoments("normal", missingOrders, mean = "μ", cov = cov)
#> [[1]]
#> 3 * σ11^2
#>
#> [[2]]
#> 3 * (σ11 * σ12)
#>
#> [[3]]
#> 2 * σ12^2 + σ11 * σ22
#>
#> [[4]]
#> 2 * (σ12 * σ13) + σ11 * σ23
#>
#> [[5]]
#> σ12 * σ34 + σ13 * σ24 + σ14 * σ23
```

# Symbolic transformation of moments

The function transformMoment symbolically transforms raw moments into central moments and vice versa. It allows the moments to come from a multivariate distribution. Let $X$ be a (possibly multivariate) random variable and $Y$ the corresponding centered variable, i.e. $Y = X - \mu$. Let $p = (p_1, \dots, p_n)$ be the order of the desired moment.

#### Transformation from central into raw

In this case, set argument type = 'raw'. The function transformMoment then returns the raw moment of order $p$, expressed through central moments and the mean.

#### Transformation from raw to central

In this case, set argument type = 'central'. The function then returns the central moment of order $p$, expressed through raw moments and the mean.
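In multi-index notation, the standard identities behind these two transformations (with $\mu = E[X]$ and $\binom{p}{j} = \prod_i \binom{p_i}{j_i}$) are

$$E[X^p] = \sum_{0 \le j \le p} \binom{p}{j} \mu^{p-j}\, E[Y^j], \qquad E[Y^p] = \sum_{0 \le j \le p} \binom{p}{j} (-\mu)^{p-j}\, E[X^j],$$

where the sums run over all multi-indices $j$ with $j_i \le p_i$; transformMoment produces the corresponding symbolic expressions.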
#### Class momentList

The function transformMoment needs as input an S3 object of class momentList, which contains all known central and raw moments. It has four elements: the element centralMoments contains all known central moments of the distribution, whereas rawMoments contains all raw moments. Both are stored as a list. The elements centralMomentOrders and rawMomentOrders contain the corresponding orders of the moments. They are stored as a matrix or data.frame, where each row represents one order of the moment. The number of columns of these matrices should equal the dimension of the distribution. The function returns an object of class momentList which is expanded and includes the wanted moment as well as all moments that were computed during the calculation process.

#### Example

Calculate the raw moment $E(X_1X_2X_3^2)$ for a three-dimensional random variable $X$:

```r
mList <- momentList(rawMomentOrders = diag(3),
                    rawMoments = list("m1", "m2", "m3"),
                    centralMomentOrders = expand.grid(list(0:1, 0:1, 0:2)),
                    centralMoments = as.list(c(1, 0, 0, "a", 0, letters[2:8])))
mList <- transformMoment(order = c(1, 1, 2), type = 'raw',
                         momentList = mList, simplify = TRUE)
mList$rawMomentOrders
#>   [,1] [,2] [,3]
#>      0    0    0
#>      1    0    0
#>      0    1    0
#>      0    0    1
#> p    1    1    2
mList$rawMoments
#> [[1]]
#> [1] 1
#>
#> [[2]]
#> [1] "m1"
#>
#> [[3]]
#> [1] "m2"
#>
#> [[4]]
#> [1] "m3"
#>
#> [[5]]
#> g * m1 + h + m2 * (e * m1 + f) + m3 * (2 * (b * m2) + 2 * (c *
#>     m1) + 2 * d + m3 * (a + m1 * m2))
```

# The BEGG distribution

The bimodal extension of the generalized gamma distribution (BEGG) is a scale mixture of the generalized gamma distribution. The BEGG distribution is almost always bimodal. The two modes can have different shapes, depending on the parameters $\alpha$, $\beta$, $\delta_0$, $\delta_1$, $\eta$, $\epsilon$, $\mu$ and $\sigma$. The density function can be calculated with dBEGG, and the $k$-th raw moment can be calculated with mBEGG. The density function can have very different shapes.

# Testing if a distribution is unimodal

The function is.unimodal checks whether an (unknown) distribution with compact support can be unimodal. It uses several inequalities based on the moments of the distribution; depending on the inequality, moments up to order 2 or 4 are required. A distribution that satisfies all inequalities that contain only moments up to order 2 is called 2-b-unimodal. A distribution that satisfies all inequalities that contain only moments up to order 4 is called 4-b-unimodal. It is possible that a multimodal distribution satisfies all inequalities and is therefore 2- and even 4-b-unimodal. But if at least one of the inequalities is not satisfied, the distribution cannot be unimodal. In this case, the test returns "not unimodal" as the result.
Here are some examples using the BEGG distribution:

```r
# example 1 (bimodal)
example1 <- mBEGG(order = 1:4, alpha = 2, beta = 2, delta0 = 1, delta1 = 4,
                  eta = 1, eps = 0)
is.unimodal(-2, 2, example1)
#> [1] "not unimodal"

# example 2 (bimodal)
example2 <- mBEGG(order = 1:4, alpha = 2, beta = 1, delta0 = 0, delta1 = 2,
                  eta = 1, eps = -0.5)
is.unimodal(-2, 3, example2)
#> [1] "4-b-unimodal"

# example 3 (bimodal)
example3 <- mBEGG(order = 1:4, alpha = 3, beta = 2, delta0 = 4, delta1 = 2,
                  eta = 2, eps = 0.3)
is.unimodal(-2.5, 1.5, example3[1:2]) # test with moments of order 1 and 2
#> [1] "2-b-unimodal"
is.unimodal(-2.5, 1.5, example3) # test with moments of order 1 - 4
#> [1] "not unimodal"

# example 4 (unimodal)
example4 <- mBEGG(order = 1:4, alpha = 2, beta = 1, delta0 = 0, delta1 = 0,
                  eta = 1, eps = 0.7)
is.unimodal(-4, 2, example4)
#> [1] "4-b-unimodal"
```

The function dtrunc of the package momcalc is a modified version of the function dtrunc of the package truncdist. Both packages are free open source software licensed under the GNU General Public License (GPL 2.0 or above). The software is provided as is and comes WITHOUT WARRANTY.
https://en.wikipedia.org/wiki/Magnetic_hysteresis
# Magnetic hysteresis Theoretical model of magnetization m against magnetic field h. Starting at the origin, the upward curve is the initial magnetization curve. The downward curve after saturation, along with the lower return curve, form the main loop. The intercepts hc and mrs are the coercivity and saturation remanence. Magnetic hysteresis occurs when an external magnetic field is applied to a ferromagnet such as iron and the atomic dipoles align themselves with it. Even when the field is removed, part of the alignment will be retained: the material has become magnetized. Once magnetized, the magnet will stay magnetized indefinitely. To demagnetize it requires heat or a magnetic field in the opposite direction. This is the effect that provides the element of memory in a hard disk drive. The relationship between field strength H and magnetization M is not linear in such materials. If a magnet is demagnetized (H=M=0) and the relationship between H and M is plotted for increasing levels of field strength, M follows the initial magnetization curve. This curve increases rapidly at first and then approaches an asymptote called magnetic saturation. If the magnetic field is now reduced monotonically, M follows a different curve. At zero field strength, the magnetization is offset from the origin by an amount called the remanence. If the H-M relationship is plotted for all strengths of applied magnetic field the result is a hysteresis loop called the main loop. The width of the middle section along the H axis is twice the coercivity of the material.[1]: Chapter 1 A closer look at a magnetization curve generally reveals a series of small, random jumps in magnetization called Barkhausen jumps. This effect is due to crystallographic defects such as dislocations.[1]: Chapter 15 Magnetic hysteresis loops are not exclusive to materials with ferromagnetic ordering. Other magnetic orderings, such as spin glass ordering, also exhibit this phenomenon.[2] ## Physical origin The phenomenon of hysteresis in ferromagnetic materials is the result of two effects: rotation of magnetization and changes in size or number of magnetic domains. In general, the magnetization varies (in direction but not magnitude) across a magnet, but in sufficiently small magnets, it doesn't. In these single-domain magnets, the magnetization responds to a magnetic field by rotating. Single-domain magnets are used wherever a strong, stable magnetization is needed (for example, magnetic recording). Larger magnets are divided into regions called domains. Within each domain, the magnetization does not vary; but between domains are relatively thin domain walls in which the direction of magnetization rotates from the direction of one domain to another. If the magnetic field changes, the walls move, changing the relative sizes of the domains. Because the domains are not magnetized in the same direction, the magnetic moment per unit volume is smaller than it would be in a single-domain magnet; but domain walls involve rotation of only a small part of the magnetization, so it is much easier to change the magnetic moment. The magnetization can also change by addition or subtraction of domains (called nucleation and denucleation). ## Measurement Magnetic hysteresis can be characterized in various ways. 
In general, the magnetic material is placed in a varying applied H field, as induced by an electromagnet, and the resulting magnetic flux density (B field) is measured, generally via the electromotive force induced in a pickup coil near the sample. This produces the characteristic B-H curve; because the hysteresis indicates a memory effect of the magnetic material, the shape of the B-H curve depends on the history of changes in H. Alternatively, the hysteresis can be plotted as magnetization M in place of B, giving an M-H curve. These two curves are directly related since ${\displaystyle B=\mu _{0}(H+M)}$. The measurement may be closed-circuit or open-circuit, according to how the magnetic material is placed in a magnetic circuit. • In open-circuit measurement techniques (such as a vibrating-sample magnetometer), the sample is suspended in free space between the two poles of an electromagnet. Because of this, a demagnetizing field develops and the H field internal to the magnetic material differs from the applied H: for a uniformly magnetized sample, ${\displaystyle H_{int}=H_{applied}-NM}$, where N is the demagnetizing factor. The normal B-H curve can be obtained after the demagnetizing effect is corrected. • In closed-circuit measurements (such as the hysteresisgraph), the flat faces of the sample are pressed directly against the poles of the electromagnet. Since the pole faces are highly permeable, this removes the demagnetizing field, and so the internal H field is equal to the applied H field. With hard magnetic materials (such as sintered neodymium magnets), the detailed microscopic process of magnetization reversal depends on whether the magnet is in an open-circuit or closed-circuit configuration, since the magnetic medium around the magnet influences the interactions between domains in a way that cannot be fully captured by a simple demagnetization factor.[3] ## Models The best-known empirical models of hysteresis are the Preisach and Jiles–Atherton models. These models allow accurate modeling of the hysteresis loop and are widely used in industry. However, they lose the connection with thermodynamics, and energy consistency is not ensured. A more recent model with a more consistent thermodynamic foundation is the vectorial incremental nonconservative consistent hysteresis (VINCH) model of Lavet et al. (2011), which is inspired by kinematic hardening laws and by the thermodynamics of irreversible processes.[4] In particular, in addition to providing accurate modeling, the stored magnetic energy and the dissipated energy are known at all times. The resulting incremental formulation is variationally consistent, i.e., all internal variables follow from the minimization of a thermodynamic potential. This also makes it easy to obtain a vectorial model, whereas the Preisach and Jiles–Atherton models are fundamentally scalar. The Stoner–Wohlfarth model is a physical model explaining hysteresis in terms of anisotropic response ("easy" / "hard" axes of each crystalline grain). Micromagnetics simulations attempt to capture and explain in detail the space and time aspects of interacting magnetic domains, often based on the Landau-Lifshitz-Gilbert equation. Toy models such as the Ising model can help explain qualitative and thermodynamic aspects of hysteresis (such as the Curie point phase transition to paramagnetic behaviour), though they are not used to describe real magnets. ## Applications There is a great variety of applications of the theory of hysteresis in magnetic materials.
Many of these make use of their ability to retain a memory, for example magnetic tape, hard disks, and credit cards. In these applications, hard magnets (high coercivity) like iron oxide are desirable so the memory is not easily erased. Soft magnets (low coercivity) are used as cores in transformers and electromagnets: the response of the magnetic moment to a magnetic field boosts the response of the coil wrapped around it, and low coercivity reduces the energy loss associated with hysteresis. Magnetic hysteresis material (soft nickel-iron rods) has been used for damping the angular motion of satellites in low Earth orbit since the dawn of the space age.[5]
http://bois.caltech.edu/distribution_explorer/continuous/halfnormal.html
# Half-Normal distribution

## Story

The Half-Normal distribution is a Normal distribution truncated to only have nonzero probability density for values greater than or equal to the location of the peak.

## Parameters

The Half-Normal distribution is parametrized by a positive scale parameter $$\sigma$$ and a location parameter $$\mu$$. In most applications, $$\mu = 0$$.

## Support

The Half-Normal distribution is supported on the set of all real numbers greater than or equal to $$\mu$$, that is, on $$[\mu, \infty)$$.

## Probability density function

$$f(y;\mu, \sigma) = \begin{cases} \sqrt{\dfrac{2}{\pi\sigma^2}}\,\mathrm{e}^{-(y-\mu)^2/2\sigma^2} & y \ge \mu, \\ 0 & \text{otherwise}. \end{cases}$$

## Moments

Mean: $$\displaystyle{\mu + \sqrt{\frac{2\sigma^2}{\pi}}}$$

Variance: $$\displaystyle{\left(1 - \frac{2}{\pi}\right)\sigma^2}$$

## Usage

| Package | Syntax |
| --- | --- |
| NumPy | `mu + np.abs(rg.normal(0, sigma))` |
| SciPy | `scipy.stats.halfnorm(mu, sigma)` |
| Stan sampling | `real<lower=mu> y; y ~ normal(mu, sigma)` |
| Stan rng | `real<lower=mu> y; y = mu + abs(normal_rng(0, sigma))` |

## Notes

• In Stan, a Half-Normal is defined by putting a lower bound of $$\mu$$ on the variable and then using a Normal distribution with location parameter $$\mu$$.
• The Half-Normal distribution with $$\mu = 0$$ is a useful prior for nonnegative parameters that should not be too large and may be very close to zero.
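As a quick numerical check of the moment formulas, one can fold zero-mean Normal draws as in the NumPy row of the usage table:

```python
import numpy as np

rg = np.random.default_rng(0)
mu, sigma = 1.0, 2.0

# Half-Normal samples: location plus the absolute value of a N(0, sigma) draw
y = mu + np.abs(rg.normal(0, sigma, size=1_000_000))

print(y.mean(), mu + np.sqrt(2 * sigma**2 / np.pi))  # empirical vs. theoretical mean
print(y.var(), (1 - 2 / np.pi) * sigma**2)           # empirical vs. theoretical variance
```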
https://www.ocf.berkeley.edu/~wwu/cgi-bin/yabb/YaBB.cgi?board=riddles_hard;action=print;num=1054085324
Seating Couples

Title: Seating Couples
Post by william wu on May 27th, 2003, 6:28pm

N married couples are to be seated at a round table. Two arrangements are considered identical if one is a rotation of the other. In how many different arrangements can N married couples be seated such that there is always one man between two women, and none of the men is ever next to his own wife?

Source: Edouard Lucas (1842-1891). Came up with the closed-form formula for Fibonacci numbers and the Tower of Hanoi puzzle. Died of a freak accident at a banquet, when a plate fell and a shard flew into his cheek.

Title: Re: Seating Couples
Post by hyperdex on May 28th, 2003, 12:55pm

Here's a partial solution...

First, we will count the total number of arrangements that satisfy the alternating condition with no man seated next to his wife. Dividing this result by 2N will produce the desired answer.

We will use the principle of inclusion-exclusion. Given K <= N, how many arrangements are there so that a fixed set of K husbands are seated next to their wives?

Designate one chair to be the "head" and let's first solve the K = 0 case. The head will contain either a man or a woman, and once this decision has been made there are (N!)^2 possible arrangements. This means that in total there are 2(N!)^2 arrangements for K = 0.

Suppose K > 0 and fix a set of K husbands. As before, we decide in advance whether the head seat has a man or a woman. Consider a round table where we will have K love seats, each seating a husband and his wife, and 2N-2K normal seats. In total, we have 2N-K seats and we must choose K of them to be love seats. This can be done in C(2N-K, K) ways. Once we have chosen the love seats, we can arrange the K husband/wife combos in exactly K! ways. Once they are fixed, we can arrange the remaining husbands and wives in ((N-K)!)^2 ways. This means that the total number of arrangements where the K husbands sit next to their wives is 2 * C(2N-K, K) * K! * ((N-K)!)^2. Call this number F(N, K).

The principle of inclusion-exclusion implies that the total number of arrangements where no husband is seated next to his wife is exactly

sum_{K=0}^{N} (-1)^K * C(N, K) * F(N, K)

Using the formula for F(N, K), this can be "simplified" to

2(N!) * sum_{K=0}^{N} (-1)^K * C(2N-K, K) * (N-K)!

and our desired answer would be this divided by 2N, which is

(N-1)! * sum_{K=0}^{N} (-1)^K * C(2N-K, K) * (N-K)!

Unfortunately, I do not know if there is a closed form for this.

Title: Re: Seating Couples
Post by Nigel_Parsons on Apr 30th, 2004, 5:21pm

"A simple equation is a thing of beauty!" Finding this in the 'Unsolved - Hard' puzzles, I started from basics.

For N couples (N > 2; N = 2 admits no solution): position the first man (Mr A) at a set point at the table. This avoids any duplication due to rotation, as we are looking at all seating plans relative to Mr A. Set a woman (Mrs B) in the seat to his immediate left. This must not be Mrs A, so she can be any one of (N-1) women. At the next seat, place Mr C. As he is neither Mr A nor the husband of Mrs B, he can be any one of (N-2) men. By similar logic, the next woman can be chosen from (N-2), and the next man from (N-3). This continues until all the seats are filled. The last two people to be seated are the Nth man and the Nth woman, so for these there is no choice.
It will be seen that the number of choices for the women is (N-1)*(N-2)*(N-3)*...*1, and the number of choices for the men is (N-2)*(N-3)*...*1. This gives (N-1)! * (N-2)! possibilities. However, the seating plan will fail if Mrs A is the last woman to be seated. As we deliberately avoided seating her to the left of her partner, the chance of her being the last to be seated is 1 in (N-1). This reduces the possibilities to ((N-1)! * (N-2)!)/(N-1) = (N-2)! * (N-2)! = ((N-2)!)^2.

Although rotations have been avoided, the mirror images have been included (where Mrs B sits to the right of Mr A), and these would give everyone the same two partners, only on opposite sides. Depending on how the question is phrased, the required answer may be (((N-2)!)^2)/2.

The seating plan will also fail if the last man to be seated is the partner of either of the remaining ladies. This is a chance of N/2. So the final solution would be ((((N-2)!)^2)/2)/(N/2) = (((N-2)!)^2)/N

I'm not totally happy with my final paragraph, but I believe I've 'broken the back' of this question.

Title: Re: Seating Couples
Post by THUDandBLUNDER on May 1st, 2004, 12:11pm

Quote: So the final solution would be ((((N-2)!)^2)/2)/(N/2) = (((N-2)!)^2)/N

Putting N = 3 gives 1/3 ways. ???

Title: Re: Seating Couples
Post by rmsgrey on May 1st, 2004, 2:59pm

on 04/30/04 at 17:21:50, Nigel_Parsons wrote: By similar logic, the next woman can be chosen from (N-2), and the next man from (N-3)

At this stage you have (using capitals to denote men, lower-case women) AbC. The next woman could be any of n-2, including a, in which case, with AbCa, the next man is one of n-2 rather than the n-3 for AbCd. In general, the number of possibilities for the next person increases by one when the previous person's spouse is already seated.

Title: Re: Seating Couples
Post by ivancho on Oct 12th, 2004, 5:59pm

I think hyperdex's formula is slightly off, due to somehow counting mirror images in, or something like that. What I get is:

sum_k (-1)^k * C(2N-k, k) * (N-k)! * 2N / (2N-k)

and you can multiply that by (N-1)!, if you want.

Let's say man number 1 always sits at the head chair, since we can rotate the table. In fact, let's have men 2, 3, 4, ... sit clockwise from him, leaving chairs between them for the wives. This removes the above (N-1)! from the formula; if you want to count all other possible seatings of the men, just permute them, together with the wives, and you'll get that factor.

Now, rook polynomials are effectively inclusion-exclusion, but they generally represent the things to count in a better light. The numbers of the wives to the left of each man describe a permutation π of (1, 2, ..., N) such that π(i) is never equal to i or i-1. In terms of a rook configuration (i.e., N rooks on an N×N board, no two attacking each other), it needs to be a configuration that avoids the main diagonal, the upper second diagonal, and a bottom-left square. The main rook theorem tells us that the number of those configurations is

r_0 * N! - r_1 * (N-1)! + r_2 * (N-2)! - ...,

where r_k is the number of ways to place k rooks on the avoided area so that no two attack each other. But on our forbidden area, the only way two rooks can threaten each other is if they are neighbours (if you count the bottom-left cell as a neighbour of the top-left and bottom-right ones). So r_k is equal to the number of ways we can choose k out of 2N marbles on a ring so that no two are neighbours.
That number is C(2N-k, k) * 2N / (2N-k). I suppose here it's the same thing as in hyperdex's solution, where we can't have the left side of a love seat be a neighbour of another left side. I think the easiest way to derive the number itself is to count something slightly different, i.e., k red and 1 blue marbles such that no two red are adjacent: counting those placing the red ones first gives (2N-k) * r_k, while counting them via placing the blue one first gives 2N * C(2N-k, k). And that's that. I think this is the most closed form we can get. Apparently it's possible to get a recurrence relation, but I haven't bothered with that.

Title: Re: Seating Couples
Post by nakli on Jul 19th, 2012, 6:33am

http://en.wikipedia.org/wiki/M%C3%A9nage_problem
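ivancho's sum is the classical ménage formula, so it can be sanity-checked numerically; a few lines of Python (the function name is arbitrary) reproduce the known values 1, 2, 13, 80, 579 for N = 3, ..., 7:

    from math import comb, factorial

    def wife_seatings(n):
        # sum_k (-1)^k * 2n/(2n-k) * C(2n-k, k) * (n-k)!
        return sum((-1)**k * 2*n * comb(2*n - k, k) * factorial(n - k) // (2*n - k)
                   for k in range(n + 1))

    print([wife_seatings(n) for n in range(3, 8)])  # [1, 2, 13, 80, 579]

Multiplying by (N-1)! gives the count when the men's arrangement is not fixed, as noted in the post.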
http://www.ncatlab.org/nlab/show/Dixmier-Douady+class
# Contents

## Idea

A bundle gerbe or circle 2-bundle has a unique characteristic class in integral cohomology in degree 3, the higher analog of the Chern class of a circle group-principal bundle (or complex line bundle): this is called the Dixmier-Douady class of the bundle gerbe.

## Definition

In the literature one finds a universal Dixmier-Douady class defined for different entities, notably for projective unitary-principal bundles and for $U(1)$-bundle gerbes, as well as for C*-algebra constructions related to these. All these notions are equivalent in one sense, namely in bare homotopy theory, but differ in another sense, namely in geometric homotopy theory.

### In bare homotopy-type theory

The classifying space of the circle 2-group $\mathbf{B}U(1)$ is an Eilenberg-MacLane space $B \mathbf{B} U(1) \simeq B^3 \mathbb{Z} \simeq K(\mathbb{Z}, 3)$. The bare Dixmier-Douady class is the universal characteristic class $DD : B \mathbf{B} U(1) \stackrel{\simeq}{\to} K(\mathbb{Z}, 3)$ exhibited by this equivalence. Hence if we identify $B \mathbf{B} U(1)$ with $K(\mathbb{Z}, 3)$, then the DD-class is the identity on this space. This is directly analogous to how the first Chern class is, as a universal characteristic class, the identity on $K(\mathbb{Z},2) \simeq B U(1)$. This means conversely that the equivalence class of a $U(1)$-bundle gerbe/circle 2-bundle is entirely characterized by its Dixmier-Douady class.

### In smooth homotopy-type theory

The circle 2-group $\mathbf{B}U(1)$ naturally carries a smooth structure, hence is naturally regarded not just as an ∞-group in ∞Grpd, but as a smooth ∞-group in $\mathbf{H} \coloneqq$ Smooth∞Grpd. For each $n$, the central extension of Lie groups $U(1) \to U(n) \to PU(n)$ that exhibits the unitary group as a circle group-extension of the projective unitary group induces the corresponding morphism of smooth moduli stacks $\mathbf{B} U(1) \to \mathbf{B} U(n) \to \mathbf{B} PU(n)$ in $\mathbf{H}$. This is part of a long fiber sequence in $\mathbf{H}$ which continues to the right by a connecting homomorphism $\mathbf{dd}_n$: $\mathbf{B} U(1) \to \mathbf{B} U(n) \to \mathbf{B} PU(n) \stackrel{\mathbf{dd}_n}{\to} \mathbf{B}^2 U(1)$ in $\mathbf{H}$. Here the last morphism is presented in simplicial presheaves by the zig-zag/∞-anafunctor of sheaves of crossed modules $\array{ [U(1) \to U(n)] &\to& [U(1) \to 1] \\ {}^{\mathllap{\simeq}}\downarrow \\ PU(n) } \,.$ To get rid of the dependence on the rank $n$ – to stabilize the rank – we may form the directed colimit of smooth moduli stacks $\mathbf{B}U \coloneqq \underset{\rightarrow_n}{\lim} \mathbf{B} U(n)$ $\mathbf{B} PU \coloneqq \underset{\rightarrow_n}{\lim} \mathbf{B} PU(n) \,.$ On these we have the smooth universal class $\mathbf{dd} : \mathbf{B} PU \to \mathbf{B}^2 U(1) \,.$ Since the (∞,1)-topos Smooth∞Grpd has universal colimits, it follows that there is a fiber sequence $\array{ \mathbf{B}U &\to& \mathbf{B} PU \\ && \downarrow^{\mathbf{dd}} \\ && \mathbf{B}^2 U(1) }$ exhibiting the moduli stack of smooth stable unitary bundles as the homotopy fiber of $\mathbf{dd}$.
https://www.gradesaver.com/textbooks/math/algebra/algebra-and-trigonometry-10th-edition/chapter-1-1-2-linear-equations-in-one-variable-1-2-exercises-page-87/15
## Algebra and Trigonometry 10th Edition $x=4$ $x+11=15\\x+11-11=15-11\\x=4$ Checking the solution: $4+11=15$, which is true.
http://gappa.gforge.inria.fr/doc/apas04.html
## Logical formulas These sections describe some properties of the logical fragment Gappa manipulates. Notice that this fragment is sound, as the generated formal proofs depend on the support libraries, and these libraries are formally proved by relying only on the axioms of basic arithmetic on real numbers. ### Undecidability First, notice that the equality of two expressions is equivalent to checking that their difference is bounded by zero: e - f in [0,0]. Second, the property that a real number is a natural number can be expressed by the equality between its integer part int<dn>(e) and its absolute value |e|. Thanks to classical logic, a first-order formula can be written in prenex normal form. Moreover, by skolemizing the formula, existential quantifiers can be removed (although Gappa does not allow the user to type arbitrary functional operators in order to prevent mistyping existing operators, the engine can handle them). As a consequence, a first-order formula with Peano arithmetic (addition, multiplication, and equality, on natural numbers) can be expressed in Gappa's formalism. It implies that Gappa's logical fragment is not decidable. ### Expressiveness Equality between two expressions can be expressed as a bound on their difference: e - f in [0,0]. For inequalities, the difference can be compared to zero: e - f >= 0. The negation of the previous propositions can also be used. Checking the sign of an expression could also be done with bounds; here are two examples: e - |e| in [0,0] and e in [0,1] \/ 1 / e in [0,1]. Logical negations of these formulas can be used to obtain strict inequalities. They can also be defined by discarding only the zero case: not e in [0,0]. Disclaimer: although these properties can be expressed, it does not mean that Gappa is able to handle them efficiently. Yet, if a proposition is proved to be true by Gappa, then it can be considered to be true even if the previous "features" were used in its expression.
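For concreteness, here is how these encodings can be combined in a small script, assuming the usual { hypotheses -> goal } property syntax of Gappa (whether Gappa discharges a given goal without additional hints is a separate question, as the disclaimer above notes):

    { x in [1,2] /\ y in [0,1]
      -> (x + y) - (y + x) in [0,0] /\ x - y >= 0 }

The first conclusion states the equality of x + y and y + x as a zero-width bound on their difference; the second states an inequality on x - y as a sign condition.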
https://uofcstatdeptconsult.github.io/stat/glms/
# Generalized Linear Models

Based on the "Statistical Consulting Cheatsheet" by Prof. Kris Sankaran

Linear models provide the basis for most inference in multivariate settings. We won't even begin to try to cover this topic comprehensively: there are entire course sequences that only cover linear models. But we'll try to highlight the main regression-related ideas that are useful to know during consulting. This section is focused more on the big picture of linear regression and when we might want to use it in consulting; we defer a discussion of inference in linear models to the next section.

There are three steps in the analysis of data using a (Generalized) Linear Model:

Figure: the three steps in GLM analysis.

• Step 1: Model Selection, which involves determining which GLM to choose.
• Step 2: Model Fitting, which is the actual process of fitting the model to your data.
• Step 3: Diagnostics, which involves making sure that your model is a good fit and produces quality estimates.

Depending on the type of data that you have at hand, you might opt for:

• Linear Regression, if your output $$y$$ is a continuous variable.
• Logistic Regression, if your output $$y$$ is a binary variable (e.g. 0/1).
• Multinomial Regression, if your output $$y$$ is a categorical variable with more than 2 categories (e.g. 0/1/2).
• Ordinal Regression, if your output $$y$$ is a categorical variable with more than 2 categories and there is a natural ordering of the classes (e.g. 0 < 1 < 2); this is appropriate when the output is a survey scale, for instance.
• Poisson Regression, if your output $$y$$ consists of count data.

## 1. Linear regression

### 1.1. The Model

Linear regression learns a linear function between covariates and a response, and is popular because there are well-established methods for performing inference for common hypotheses.

• Generally, model-fitting procedures suppose that there is a single variable $$Y$$ of special interest. The goal of a supervised analysis is to determine the relationship between many other variables (called covariates), $$X_1, \cdots, X_p$$, and $$Y$$. Having a model $$Y = f(X_1, \cdots, X_p)$$ can be useful for many reasons, especially (1) improved scientific understanding (the functional form of $$f$$ is meaningful) and (2) prediction, using $$f$$ learned on one set of data to guess the value of $$Y$$ on a new collection of $$X$$'s.
• Linear models posit that the functional form $$f$$ is linear in $$X_1, \cdots, X_p$$. This is interpreted geometrically by saying that the change in $$Y$$ that occurs when $$X_j$$ is changed by some amount (and all other covariates are held fixed) is proportional to the change in $$X_j$$. E.g., when $$p = 1$$, this means the scatterplot of $$X_1$$ against $$Y$$ can be well-summarized by a line.
• A little more formally, we suppose we have observed samples indexed by $$i$$, where $$x_i \in \mathbb{R}^p$$ collects the covariates for the $$i$$th sample and $$y_i$$ is the associated observed response. A regression model tries to find a parameter $$\beta \in \mathbb{R}^p$$ so that $$y_i = x_i^T \beta + \epsilon_i$$ is plausible, where the $$\epsilon_i$$ are drawn i.i.d. from $$N(0, \sigma^2)$$ for some (usually unknown) $$\sigma^2$$. The fitted value for $$\beta$$ after running a linear regression is denoted $$\hat{\beta}$$.
• Compared to other forms of model fitting, a major advantage of linear models is that inference is usually relatively straightforward: we can do tests of significance and build confidence intervals for the strength of association across different $$X$$'s, as well as compare models with different sets of variables.
• The linear assumption is not well-suited to binary, count, or categorical responses $$Y$$, because the model might think the response $$Y$$ belongs to some range that's not even possible (think of extrapolating a line in a scatterplot when the y-axis values are all between 0 and 1). In these situations, it is necessary to apply generalized linear models (GLMs). Fortunately, many of the ideas of linear models (methods for inference in particular) have direct analogs in the GLM setting.
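To make this concrete, here is a minimal R sketch of fitting and summarizing a linear model (the data frame and variable names are hypothetical):

```r
# Hypothetical data: response y and two covariates x1, x2
set.seed(1)
df <- data.frame(x1 = rnorm(100), x2 = rnorm(100))
df$y <- 1 + 2 * df$x1 - 0.5 * df$x2 + rnorm(100)

fit <- lm(y ~ x1 + x2, data = df)  # fits beta-hat by least squares
summary(fit)                       # estimates and tests of significance
confint(fit)                       # confidence intervals for each beta_j
```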
### When is linear regression useful in consulting?

• In a consulting setting, regression is useful for understanding the association between two variables, controlling for many others. This is basically a rephrasing of point (2) above, but it's the essential interpretation of linear regression coefficients, and it's this interpretation that many research studies are going after.
• Sometimes a client might originally come with a testing problem, but might want help extending it to account for additional structure or covariates. In this setting, it can often be useful to propose a linear model instead: it still allows inference, but it becomes much easier to encode more complex structure.

### What are some common regression tricks useful in consulting?

• Adding interactions: People will often ask about adding interactions in their regression, usually from an intuition about the non-quantitative meaning of the word "interaction." It's important to clarify the quantitative meaning: including an interaction term between $$X_1$$ and $$X_2$$ in a regression of these variables onto $$Y$$ means that the slope of the relationship between $$X_1$$ and $$Y$$ will be different depending on the value of $$X_2$$. For example, if $$X_2$$ can only take on two values (say, A and B), then the relationship between $$X_1$$ and $$Y$$ will be linear with slope $$\beta_{1A}$$ in the case that $$X_2$$ is A and $$\beta_{1B}$$ otherwise. When $$X_2$$ is continuous, there is a continuum of slopes depending on the value of $$X_2$$: $$\beta_1 + \beta_{1\times 2}X_2$$. See Figure 5 for a visual interpretation of interactions.

Figure 5: In the simplest setting, an interaction between a continuous and a binary variable leads to two different slopes for the continuous variable. Here, we are showing the scatterplot of $$(x_{i1}, y_i)$$ pairs observed in the data. We suppose a binary variable, denoted $$x_{i2}$$, has also been measured, and we shade in each point according to its value for this binary variable. Apparently, the relationship between $$x_1$$ and $$y$$ depends on the value of $$x_2$$ (in the pink group, the slope is smaller). This can exactly be captured by introducing an interaction term between $$x_1$$ and $$x_2$$. In cases where $$x_2$$ is not binary, we would have a continuum of slopes between $$x_1$$ and $$y$$, one for each value of $$x_2$$.

• Introducing basis functions: The linearity assumption is not as restrictive as it seems, if you can cleverly apply basis functions. Basis functions are functions like polynomials (or splines, or wavelets, or trees…) which you can mix together to approximate more complicated functions, and linear mixing can be done with linear models. To see why this is potentially useful, suppose we want to use time as a predictor in a model (e.g., where $$Y$$ is the number of species $$j$$ present in a sample), but species population doesn't just increase or decrease linearly over time (instead, it follows some smooth curve). Here, you can introduce a spline basis associated with time and then use a linear regression of the response onto these basis functions. The fitted coefficients will define a mean function relating time and the response.
• Derived features: Related to the construction of basis functions, it's often possible to enrich a linear model by deriving new features that you imagine might be related to $$Y$$. The fact that you can do regression onto variables that aren't just the ones that were collected originally might not be obvious to your client. For example, if you were trying to predict whether someone will have a disease based on a time series of some lab tests, you can construct new variables corresponding to the "slope at the beginning," or "slope at the end," or max, or min, … across the time series. Of course, deciding which variables might actually be relevant for the regression will depend on domain knowledge. (Interactions and basis functions are both illustrated in the sketch below.)
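In R's formula interface, these tricks are one-liners (again with hypothetical names; `ns` comes from the `splines` package that ships with R):

```r
library(splines)

# Interaction: x1 * x2 expands to x1 + x2 + x1:x2, so the slope of x1
# is allowed to depend on the value of x2.
fit_int <- lm(y ~ x1 * x2, data = df)

# Basis functions: regress y onto a natural cubic spline basis in time
# (assuming df also records a time covariate), giving a smooth trend
# that is still linear in the coefficients.
fit_spline <- lm(y ~ ns(time, df = 4), data = df)
```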
One trick --- introducing random effects --- is so common that it gets its own section. Basically, it's useful whenever you have a lot of levels for a particular categorical variable.

### 1.2. Diagnostics

How can you assess whether a linear regression model is appropriate? Many types of diagnostics have been proposed, but a few of the most common are listed here (a code sketch follows the list):

• Look for structure in residuals: According to the model above, the amount that the prediction $$\hat{y}_i = x_i^T\hat{\beta}$$ is off from $$y_i$$ (this difference is called the residual, $$r_i = \hat{y}_i - y_i$$) should be approximately i.i.d. $$N(0, \sigma^2)$$. Whenever there is a systematic pattern in these residuals, the model is misspecified in some way. For example, if you plot the residuals over time and you find clumps that are all positive or negative, it means there is some unmeasured phenomenon associated with these time intervals that influences the average value of the responses. In this case, you could define new variables for whether you are in one of these intervals, but the solution differs on a case-by-case basis. Other types of patterns to keep an eye out for: nonconstant spread (heteroskedasticity), large outliers, any kind of discreteness (see Figure 7).

Figure 7: Some of the most common types of "structure" to watch out for in residual plots. The top-left panel shows how residuals should appear: essentially i.i.d. $$N(0, 1)$$. The panel below it shows nonconstant variance in the residuals, and the one on the bottom has an extreme outlier. The panel on the top right has means that are far from zero in a structured way, while the one below it shows anomalous discreteness.

• Make a qq-plot of residuals: More generally than simply finding large outliers in the residuals, we might ask whether the residuals are plausibly drawn from a normal distribution. qq-plots give one way of doing this; more often than not the tails are heavier than normal. Most people ignore this, but it can be beneficial to consider e.g. robust regression or logistically (instead of normally) distributed errors.
• Calculate the leverage of different points: The leverage of a sample is a measure of how much the overall fit would change if you took that point out. Points with very high leverage can be cause for concern (it's bad if your fit depends entirely on one or two observations), and these high-leverage points often turn out to be outliers. See Figure 8 for an example of this phenomenon. If you find a few points have very high leverage, you might consider throwing them out. Alternatively, you could consider a robust regression method.

Figure 8: The leverage of a sample can be thought of as the amount of influence it has on the fit. Here, we show a scatterplot onto which we fit a regression line. The cloud near the origin and the one point in the bottom right represent the observed samples. The blue line is the regression fit when ignoring the point on the bottom right, while the pink line is the regression including that point. Evidently, this point in the bottom right has very high leverage; in fact, it reverses the sign of the association between X and Y. This is also an example of how outliers (especially outliers in the X direction) can have very high leverage.

• Simulate data from the model: This is a common practice in the Bayesian community ("posterior predictive checks"), though I'm not aware of people doing this much for ordinary regression models. The idea is simple though: draw samples from $$N(x_i^T \hat{\beta}, \hat{\sigma}^2)$$ and see whether the new $$y_i$$'s look comparable to the original $$y_i$$'s. Characterizing the ways they don't match can be useful for modifying the model to better fit the data.
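Continuing the hypothetical `fit` from above, these checks might look like:

```r
plot(fitted(fit), resid(fit))            # structure? nonconstant spread?
qqnorm(resid(fit)); qqline(resid(fit))   # heavier-than-normal tails?
plot(hatvalues(fit), type = "h")         # high-leverage observations

# Posterior-predictive-style check: simulate responses from the fitted
# model and compare them to the observed ones.
y_sim <- rnorm(nobs(fit), mean = fitted(fit), sd = sigma(fit))
plot(df$y, y_sim)
```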
## 2 - Logistic regression

### 2.1. The Model

Logistic regression is the analog of linear regression that can be used whenever the response $$Y$$ is binary (e.g., patient got better, respondent answered "yes," email was spam).

• In linear regression, the responses $$y_i$$ are directly used in a model of the form $$y_i = x^T_i\beta + \epsilon_i$$. In logistic regression, we instead want a model between the $$x_i$$ and the unknown probabilities of the two classes when we're at $$x_i$$: $$p(x_i)$$ and $$1 - p(x_i)$$. The observed value of $$y_i$$ corresponding to $$x_i$$ is modeled as being drawn from a coin flip with probability $$p(x_i)$$.
• If we had fit an ordinary linear regression model to the $$y_i$$, we might get fitted responses $$\hat{y}_i$$ outside of the valid range $$[0, 1]$$, which in addition to being confusing is bad modeling. Logistic regression instead models the log-odds transformation of the $$p(x_i)$$. Concretely, it assumes the model $$p(y_1, y_2, \cdots, y_n) = \prod_{i=1}^n p_\beta(x_i)^{1_{y_i=1}} (1- p_\beta(x_i))^{1_{y_i=0}}$$, where we approximate $$p(x_i) \approx p_\beta(x_i) = \frac{1}{1+ e^{-x_i^T\beta}}$$.
• This is equivalent to saying that each observation $$y_i$$ is sampled from a Bernoulli whose probability is a function of $$x_i$$: $$y_i \sim \text{Bernoulli}(p_\beta(x_i))$$. Logistic regression fits the parameter $$\beta$$ to maximize the likelihood defined in the model above.
• An equivalent reformulation of the assumption is that the log-odds are approximately linear: $$\log\left(\frac{p_\beta(x_i)}{1- p_\beta(x_i)}\right) = x_i^T \beta$$.
• Out of the box, the coefficients fitted by logistic regression can be difficult to interpret. Perhaps the easiest way to interpret them is in terms of the relative risk, which gives an analog to the usual linear regression interpretation "when the $$j$$th feature goes up by one unit, the expected response goes up by $$\beta_j$$." First, recall that the relative risk is defined as $$r = \frac{\mathbb{P}[Y=1|X]}{\mathbb{P}[Y=0|X]}$$, which in logistic regression is approximated by $$e^{x_i^T\beta}$$. If we increase the $$j$$th coordinate of $$x_i$$ (i.e., we replace $$x_i \leftarrow x_i + \delta_j$$), then this relative risk becomes $$r = e^{x_i^T\beta} e^{\beta_j}$$. The interpretation is that the relative risk gets multiplied by $$e^{\beta_j}$$ when we increase the $$j$$th covariate by one unit.

Figure 9: An example of the type of approximation that logistic regression makes. The x-axis represents the value of the feature, and the y-axis encodes the binary 0/1 response. The purple marks are observed $$(x_i, y_i)$$ pairs. Note that class 1 becomes more common when $$x$$ is large. The pink line represents the "true" underlying class probabilities as a function of $$x$$, which we denote $$p(x)$$. This doesn't lie in the logistic family $$1/(1+e^{-x\beta})$$, but it can be approximated by a member of that family, which is drawn in blue (this is the logistic regression fit).

### 2.2. Diagnostics

While you could study the differences $$y_i - \hat{p}(x_i)$$, which would be analogous to linear regression residuals, it is usually more informative to study the Pearson or deviance residuals, which upweight small differences near the boundaries 0 and 1. These types of residuals take into account the structure of the assumed Bernoulli (coin-flipping) sampling model. For ways to evaluate the classification accuracy of logistic regression models, see the section on Prediction Evaluation, and for an overview of formal inference, see the section on Inference in GLMs.
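A minimal R sketch (the data frame `patients` and its variables are hypothetical):

```r
# Hypothetical data: binary outcome `disease`, covariates age and smoker
fit_logit <- glm(disease ~ age + smoker, data = patients, family = binomial)
summary(fit_logit)

exp(coef(fit_logit))                      # odds multipliers per unit increase
residuals(fit_logit, type = "pearson")    # Pearson residuals
residuals(fit_logit, type = "deviance")   # deviance residuals
```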
## 3 - Multinomial regression

Multinomial regression is a generalization of logistic regression for when the response can take on one of $$K$$ categories (not just $$K = 2$$).

• Here, we want to study the way the probabilities for each of the $$K$$ classes vary as $$x$$ varies: $$p(y_i = k|x_i)$$ for each $$k = 1, \cdots, K$$. Think of the responses $$y_i$$ as observations from a $$K$$-sided die, where different faces are more probable depending on the associated features $$x_i$$.
• The approximation of logistic regression is replaced with $$p(y_i = k|x_i) \approx p_W(y_i = k|x_i) = \frac{\exp(w_k^Tx_i)}{\sum_{k'} \exp(w_{k'}^Tx_i)}$$, where the parameters $$w_1, \cdots, w_K$$ govern the relationship between $$x_i$$ and the probabilities for the different classes.
• As is, this model is not identifiable (you can increase one of the $$w_k$$s and decrease another and end up with exactly the same probabilities $$p_W(y_i = k|x_i)$$). To resolve this, one of the classes (say the $$K$$th; this is usually chosen to be the most common class) is designated as a baseline, and we set $$w_K = 0$$.
• The $$w_k$$s can then be interpreted in terms of how a change in $$x_i$$ changes the probability of observing class $$k$$ relative to the baseline $$K$$. Since $$\frac{p_W(y_i = k|x_i)}{p_W(y_i = K|x_i)} = \exp(w_k^T x_i)$$, if we increase the $$j$$th variable by one unit, so that $$x_i \leftarrow x_i + \delta_j$$, then the probability of class $$k$$ relative to class $$K$$ gets multiplied by $$\exp(w_{kj})$$.
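One common implementation is `multinom` from the `nnet` package (variable names hypothetical; the first factor level is taken as the baseline):

```r
library(nnet)

fit_multi <- multinom(category ~ x1 + x2, data = df)
exp(coef(fit_multi))   # relative-probability multipliers vs. the baseline class
```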
## 4 - Ordinal regression

Sometimes we have $$K$$ classes for the responses, but there is a natural ordering between them. For example, survey respondents might have chosen one of 6 values along a Likert scale. Multinomial regression is unaware of this additional ordering structure – a reasonable alternative in this setting is ordinal regression.

• The basic idea for ordinal regression is to introduce a continuous latent variable $$z_i$$ along with $$K-1$$ "cutpoints" $$\gamma_1, \cdots, \gamma_{K-1}$$, which divide the real line into $$K$$ intervals. When $$z_i$$ lands in the $$k$$th of these intervals, we observe $$y_i = k$$.

• Of course, neither the $$z_i$$'s nor the cutpoints $$\gamma_k$$ are known, so they must be inferred. This can be done using the class frequencies of the observed $$y_i$$'s though (many $$y_i = k$$ means the $$k$$th bin is large).

• To model the influence of covariates on $$p(y_i = k|x_i)$$, we suppose that $$z_i = \beta^T x_i + \epsilon_i$$. When $$\epsilon_i$$ is Gaussian, we recover ordinal probit regression, and when $$\epsilon_i$$ follows the logistic distribution we recover ordinal logistic regression.

• An equivalent formulation of ordinal logistic regression models the "cumulative logits" as linear: $\log\left(\frac{p(y_i \leq k)}{1 - p(y_i \leq k)}\right) = \alpha_k + \beta^T x_i.$ Here, the $$\alpha_k$$'s control the overall frequencies of the $$K$$ classes, while $$\beta$$ controls the influence of covariates on the response.

• Outside of the latent variable interpretation, it's also possible to understand $$\beta$$ in terms of relative risks. In particular, when we increase the $$j$$th coordinate by 1 unit, $$x_i \leftarrow x_i + \delta_j$$, the odds of class $$k$$ relative to class $$k-1$$ get multiplied by $$\exp(\beta_j)$$, for every pair of neighboring classes $$k-1$$ and $$k$$.

## 5 - Poisson regression

### 5.1 The Model

Poisson regression is a type of generalized linear model that is often applied when the responses $$y_i$$ are counts (i.e., $$y_i \in \{0, 1, 2, \cdots\}$$).

• As in logistic regression, one motivation for using this model is that applying ordinary linear regression to these responses might yield predictions that are impossible (e.g., numbers below zero, or which are not integers).

• To see where the main idea for this model comes from, recall that the Poisson distribution with rate $$\lambda$$ draws integers with probabilities $\mathbb{P}[y = k|\lambda] = \frac{\lambda^k e^{-\lambda}}{k!}.$ The idea of Poisson regression is to say that the different $$y_i$$ are drawn from Poissons with rates that depend on the associated $$x_i$$. More specifically, we assume that the data have the joint likelihood $p(y_1, y_2, \cdots, y_n) = \prod_{i=1}^n \frac{\lambda(x_i)^{y_i} e^{-\lambda(x_i)}}{y_i!}$ and that the logs of the rates are linear in the covariates: $\log(\lambda(x_i)) = x_i^T \beta$ (modeling the logs as linear makes sure that the actual rates are always nonnegative, which is a requirement for the Poisson distribution).

• We think of different regions of the covariate space as having more or fewer counts on average, depending on this function $$\lambda(x_i)$$. Moving from $$x_i$$ to $$x_i + \delta_j$$ multiplies the rate from $$\lambda(x_i)$$ to $$\exp(\beta_j)\,\lambda(x_i)$$.
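As a concrete illustration (my addition, in Python with numpy and statsmodels; the coefficients and data are simulated), here is a Poisson regression fit that recovers the log-linear rate and shows the multiplicative $$\exp(\beta_j)$$ effect:

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(2)

# Simulate counts whose log-rate is linear in one covariate.
n = 2000
x = rng.normal(size=n)
lam = np.exp(0.3 + 0.7 * x)                # log(lambda(x)) = 0.3 + 0.7 x
y = rng.poisson(lam)

X = sm.add_constant(x)                     # design matrix with intercept
fit = sm.GLM(y, X, family=sm.families.Poisson()).fit()
b0, b1 = fit.params
print(f"fitted intercept {b0:.2f}, slope {b1:.2f}")  # near 0.3 and 0.7

# Moving x -> x + 1 multiplies the fitted rate by exp(b1).
rate = lambda v: np.exp(b0 + b1 * v)
print(f"rate ratio {rate(1.0) / rate(0.0):.3f} vs exp(b1) {np.exp(b1):.3f}")
```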
### 5.2 Diagnostics

As in logistic regression, it makes more sense to consider the deviance residuals when performing diagnostics. In particular, a deficiency of Poisson regression models (which often motivates clients to show up to consulting) is that real data often exhibit overdispersion with respect to the assumed Poisson model. The issue is that the mean and variance of counts in a Poisson model are tied together: if you sample from Poisson($$\lambda$$), then the mean and variance of the associated counts are both $$\lambda$$. In real data, the variance is often larger than the mean, so while the Poisson regression model might do a good job approximating the mean $$\lambda(x_i)$$ at $$x_i$$, the observed variance of the $$y_i$$ near $$x_i$$ might be much larger than $$\lambda(x_i)$$. This motivates the methods in the following subsection.

### 5.3 Accounting for overdispersion: Pseudo-Poisson and Negative Binomial regression

Pseudo-Poisson and negative binomial regression are two common strategies for addressing overdispersion in count data. In the pseudo-Poisson setup, a new parameter $$\varphi$$ is introduced that sets the relative scale of the variance in comparison to the mean: $$\mathbb{V}\text{ar}(y) = \varphi\, \mathbb{E}[y]$$. This is not associated with any real probability distribution, and the associated likelihood is called a pseudolikelihood. However, $$\varphi$$ can be optimized along with $$\beta$$ in the usual Poisson regression setup to provide a maximum pseudolikelihood estimate.

In negative binomial regression, the Poisson likelihood is abandoned altogether in favor of the negative binomial likelihood. Recall that the negative binomial (like the Poisson) is a distribution on nonnegative counts $$\{0, 1, 2, \cdots\}$$. It has two parameters, $$p$$ and $$r$$: $\mathbb{P}[y=k] = {k+r-1 \choose k} p^k (1-p)^r,$ which can be interpreted as the number of heads that appeared before seeing $$r$$ tails, when flipping a coin with probability $$p$$ of heads. More important than the specific form of the distribution is the fact that it has two parameters, which allow different variances even for the same mean: $\mathbb{E}_{p,r}[y] = \frac{pr}{1-p}$ $\mathbb{V}\text{ar}_{p,r}[y] = \frac{pr}{(1-p)^2} = \mathbb{E}_{p,r}[y] + \frac{1}{r} \mathbb{E}_{p,r}[y]^2.$ In particular, for small $$r$$, the variance is much larger than the mean, while for large $$r$$, the variance is about equal to the mean (the distribution reduces to the Poisson). For negative binomial regression, this likelihood is substituted for the Poisson when doing regression, and the mean is allowed to depend on covariates. On the other hand, while the variance is no longer fixed to equal the mean, the overdispersion parameter $$r$$ must be the same across all data points. This likelihood model is not exactly a GLM (the negative binomial is not in the exponential family), but various methods for fitting it are available.

There is a connection between the negative binomial and the Poisson that both illuminates potential sources of overdispersion and suggests new algorithms for fitting overdispersed data: the negative binomial can be interpreted as a "gamma mixture of Poissons." More specifically, in the hierarchical model $y | \lambda \sim \text{Poisson}(\lambda)$ $\lambda \sim \text{Gamma}\left(r, \frac{p}{1-p}\right),$ which draws a Poisson with a randomly chosen mean parameter, the marginal distribution of $$y$$ is exactly NegBin$$(r, p)$$. This suggests that one reason overdispersed data might arise is that they are actually a mixture of true Poisson subpopulations, each with different mean parameters. This is also the starting point for Bayesian inference of overdispersed counts, which fits Poisson and Gamma distributions according to this hierarchical model.
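To see the gamma-mixture identity and the mean-variance relation numerically, here is a short simulation (my addition, in Python; note that numpy's `negative_binomial` counts failures before a fixed number of successes, so its success probability corresponds to $$1-p$$ in the notation above):

```python
import numpy as np

rng = np.random.default_rng(3)
r, p = 3.0, 0.4
n = 200_000

# Hierarchical draw: lambda ~ Gamma(shape=r, scale=p/(1-p)), then y | lambda ~ Poisson(lambda).
lam = rng.gamma(shape=r, scale=p / (1 - p), size=n)
y = rng.poisson(lam)

mean_th = p * r / (1 - p)
var_th = mean_th + mean_th**2 / r          # = p*r / (1-p)^2
print(f"mixture mean {y.mean():.3f} (theory {mean_th:.3f}), "
      f"var {y.var():.3f} (theory {var_th:.3f})")

# Direct negative binomial draw for comparison; should match the mixture.
y2 = rng.negative_binomial(n=r, p=1 - p, size=n)
print(f"NegBin mean {y2.mean():.3f}, var {y2.var():.3f}")
```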
https://electronics.stackexchange.com/questions/401051/doing-logical-operations-on-port-pins-in-assembly
# Doing logical operations on Port PINS in Assembly

I'm trying to do the following operations on PINs, but I'm not sure how to do the NOT operation on a port, how to do XOR, etc. Would someone show me an excerpt of pseudocode?

• Imagine that you have functions called or, xor, and, etc. that take two 8-bit registers and save the result in one of the 8-bit registers according to the boolean function used. - Do you think you'd be able to solve your assembly-code problem with this information? Or can you at least try solving it on your own? – Harry Svensson Oct 14 '18 at 16:26
• Yes, but what would be the two registers, PORTA, PORTB? What are the instructions for NOT in assembly? – andre ahmed Oct 14 '18 at 16:33
• That would be one's complement, a.k.a. com. Start looking into some datasheets, particularly page 281 that I just linked. – Harry Svensson Oct 14 '18 at 16:38

## 1 Answer

After looking through this I've understood enough to show how $\text{LED}_0$ and $\text{LED}_1$ can be made, which should show enough for you to implement the other two. As you can see, $\text{LED}_0$ depends on three bits; the easiest solution would be to use temporary registers while you are manipulating the data from $\text{PA}$ and then set $\text{LED}_0$ once you're done.

Here's one solution for the first two bits. It's not efficient/optimized, but it will make you understand what I'm doing. I will write PA = Port A since that's what you've written in the comments to your question.

    MOV TRA,PA  // Copy contents from Port A into a Temporary Register A
    MOV TRB,PA  // Copy contents from Port A into a Temporary Register B
    MOV TRC,PA  // Copy contents from Port A into a Temporary Register C
    LSR TRA     // Move all bits once to the right in TRA, so bit1 in TRA is now in bit0
    LSR TRB
    LSR TRB     // Move all bits twice to the right in TRB, so bit2 in TRB is now in bit0
    LSR TRC
    LSR TRC     // Move all bits three times to the right in TRC,
    LSR TRC     // so bit3 in TRC is now in bit0
    AND TRA,TRB // AND TRA and TRB together, bit by bit, and save the result in TRA
    OR TRA,TRC  // OR the new TRA with TRC and store the information in TRA
    ANDI TRA,1  // AND TRA with 0b00000001, in other words, remove the mess in other bits
    MOV LED,TRA // copy TRA into your LED register

    // Now you're ready for the second bit for your LED register.
    // As you can see, it's PA1, PA2 and PA3 again; PA2 is still in TRB and
    // PA3 is still in TRC, but PA1 is demolished, so we need to update TRA again.
    MOV TRA,PA  // Copy contents from Port A into a Temporary Register A
    LSR TRA     // Move all bits once to the right in TRA, so bit1 in TRA is now in bit0
    COM TRB     // Make TRB into NOT TRB
    OR TRA,TRB  // OR TRA with the inverted TRB
    AND TRA,TRC // AND the new TRA with TRC
    ANDI TRA,1  // AND TRA with 0b00000001, in other words, remove the mess in other bits
    LSL TRA     // Move all bits in TRA one bit to the left
    OR LED,TRA  // Move whatever information is in bit1 in TRA into LED at bit1
    // Now you're ready for the other two bits

This should definitely get you started. Remember that you probably want PA to be PIN_A; PIN_A usually corresponds in assembler language to the input of PORT_A. If you want to optimize the code, then never move things into TRB and TRC but make TRA move around instead; that should lead to more efficient code. Also, use multiplication by $2^n$ to copy data into the multiplication register and rotate by $n$ to the left.
So just rotate cleverly with the multiplication and look at the high byte of the multiplication register and you will have your rotated information there.
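If you want to sanity-check the boolean logic before writing the assembly, here is a reference model of the two output bits built above (my addition, in Python; `pa` is the byte read from the input port, and the bit equations are the ones the assembly implements):

```python
def led_bits(pa: int) -> int:
    """Reference model of the answer's logic:
    LED bit 0 = (PA1 AND PA2) OR PA3
    LED bit 1 = (PA1 OR (NOT PA2)) AND PA3
    """
    pa1 = (pa >> 1) & 1
    pa2 = (pa >> 2) & 1
    pa3 = (pa >> 3) & 1
    bit0 = (pa1 & pa2) | pa3
    bit1 = (pa1 | (pa2 ^ 1)) & pa3    # pa2 ^ 1 plays the role of COM on one bit
    return bit0 | (bit1 << 1)

print(f"PA=0b00001110 -> LED=0b{led_bits(0b00001110):02b}")  # both bits set
```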
https://www.physicsforums.com/threads/united-states-calculus-2-calculus-in-polar-coordinates.557658/
# United States Calculus 2 - Calculus in Polar Coordinates

1. Dec 6, 2011

### GreenPrint

1. The problem statement, all variables and given/known data

Find the slope of the line tangent to the polar curve at the given point. At the point where the curve intersects the origin, find the equation of the tangent line in polar coordinates.

r = 6 sin θ; (-3, 7π/6)

2. Relevant equations

3. The attempt at a solution

Select the correct choice below and, if necessary, fill in the answer box within your choice.

A. The slope of the curve at the point (-3, 7π/6) is _
B. The slope of the curve at the point (-3, 7π/6) is undefined.

I selected A. and put sqrt(3), and it told me I was correct.

The equation of the tangent line when the curve intersects the origin is _ (Type an equation. Type any angles in radians between 0 and 2π.)

I'm not sure how to do this part. I believe I put in the correct equation in the Cartesian coordinate system in terms of y and x, and it told me I was wrong. I wasn't exactly sure why, and then I realized that the instructions wanted me to input a polar coordinate equation, which I'm not exactly sure how to do. When I put in the equation it told me I was wrong and this came up:

If the graph of the polar curve passes through the origin for some angle θ_0, then f(θ_0) = 0 and dy/dx = tan θ_0, which means that the equation of a tangent line through the origin is θ = θ_0. Thus, to determine the equation of the tangent line(s) when the curve intersects the origin, find the value(s) of θ when r = 0. Note that this holds for f'(θ_0) ≠ 0.

I'm not exactly sure what it wants me to do. I got

r = 6 sin θ
x = 6 sin θ cos θ
y = 6 (sin θ)^2
dy/dx = ( 2 cos θ sin θ )/( (cos θ)^2 - (sin θ)^2 )

I'm not exactly sure why they tell me dy/dx = tan θ_0, or why the equation of the line that it wants me to enter is just an angle θ = θ_0, if that is what it wants me to do. I do not see what setting r = 0 and solving for θ does:

r = 6 sin θ = 0
θ = 0, π, 2π

Is this the answer it wants? Thanks for any help.
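For reference, here is where the hint's claim comes from (my addition, standard calculus, not part of the original thread). With x = f(θ) cos θ and y = f(θ) sin θ,

$$\frac{dy}{dx} = \frac{dy/d\theta}{dx/d\theta} = \frac{f'(\theta)\sin\theta + f(\theta)\cos\theta}{f'(\theta)\cos\theta - f(\theta)\sin\theta}.$$

At an angle θ_0 where the curve passes through the origin we have f(θ_0) = 0, so (provided f'(θ_0) ≠ 0) this reduces to sin θ_0 / cos θ_0 = tan θ_0. A line through the origin with slope tan θ_0 is exactly the set of points with polar angle θ_0, which is why the tangent line can be written in polar form as θ = θ_0.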
https://astronomy.stackexchange.com/questions/18133/galaxy-behind-the-ring-nebula
# Galaxy behind the Ring Nebula

Would anyone be able to identify the galaxy in the background of this image of the Ring Nebula? It's obviously not a Messier object; I'm more confident in it being in the New General Catalogue.

• Redshift: 0.017075 $\pm$ 0.000070, according to Marzke et al. (1996).
• Distance: 66.6 Mpc to 68.9 Mpc after corrections, depending on your distance definition of choice (angular diameter, luminosity, etc.). Large uncertainties arise when calculating the distance from redshift. For comparison, Andromeda is approximately 0.78 Mpc away, and the Ring Nebula is $10^{-3}$ times that distance.
• Apparent magnitude: 14.80 $\pm$ 0.30, at visible wavelengths. For comparison, the Ring Nebula's apparent magnitude is 8.8.
http://weblogs.mozillazine.org/bz/archives/2003/05/
# Three Monkeys, Three Typewriters, Two Days ## May 20, 2003 ### Phrase of the day The phrase of the day (of yesterday, actually) is "weakly strongly plurisubharmonic". Posted by bzbarsky at 8:52 PM ## May 19, 2003 ### Opera 7.11 for Linux—first impressions Today I installed Opera 7.11 for Linux, so as to try it. I downloaded the tarball that has no OS version attached to it, with static QT, since my OS (RedHat 6.2) is older than any of the options they listed on the download page. Untarred the browser, ran the install script, installed inside my homedir. Ran the browser in the foreground; got back a prompt. Not useful. Undeterred, I downloaded the RPM; again the one not listed as being built against a particular OS, with static QT. Installed it with no problems (which was nice). Ran it as myself: "Segmentation fault". Ran it as root, on a lark: browser starts. Ran it as myself again: "Segmentation fault". No Opera 7.11 for me, I guess, since I have no plans to spend time attempting to get this to work. Posted by bzbarsky at 11:30 PM ## May 18, 2003 ### With friends like this, who needs enemies? Reading the Slashdot comments on Mozilla Firebird 0.6 and the Mozillazine comments on the same, I am struck by the number of conversations which look like this: Person trying Mozilla Firebird 0.6: "This, this, and this seem to be problem areas that could use some work." Mozilla Firebird 0.6 user: "If you don't like it, don't use it! We don't need your stinkin' kind around here!" (I wish I were making this up, but this is nearly word-for-word what I've seen people post in reply to a number of comments.) I asked one of these Mozilla Firebird users who this mythical "we" was that did not need to hear criticism, since he did not seem to be involved in the project in any visible way (as measured by CVS logs, Bugzilla activity, etc). His response, removing the profanity, personal attacks, and attempts to insult me, was that he was involved in "helping newbies install it" and "spreading the word". I love his methods of spreading the word, and I'm sure that all the Mozilla Firebird front-end developers and all Mozilla back end developers are simply overjoyed to have such an effective group of advocates speaking for them in the first person plural. Posted by bzbarsky at 12:45 AM ## May 10, 2003 ### Relevance of performance tests It's too bad that as soon as people come up with a measurable substitute for whatever it is they care about they start treating it as more important than the real thing. I'm reminded of IQ tests and SATs. Posted by bzbarsky at 1:00 PM ## May 7, 2003 ### Speaking of TeX When I first typed TeX in my previous comment, I entered it as "TeX". Then I thought to myself, "This is the Web, not a plaintext document. Surely I can use some CSS to mark this up so that it will look right!" And indeed, simply doing T<span style="vertical-align: -0.5ex; line-height: 0; text-transform: uppercase">e</span>X did sort of what I wanted. Here the -0.5ex was a guess based on what I knew the result should look like, the line-height was needed to not cause an unsightly line-spacing increase (CSS3 has some proposed properties for handling this globally), and the text-transform was used to allow non-CSS clients to degrade gracefully. There was just one problem. The result looked ugly as sin. Still does, really. So I took a peek at what \TeX is actually defined as. And that is: T\kern-.2em\lower.5ex\hbox{E}X So I was right about the vertical-align; what I was missing was the kerning. 
It's still missing, as you can tell, because I cannot think of a good way to do it in CSS (relative positioning is no good, since that will make the space after the "X" too big). Oh, well. Chalk up a point for 20-year-old technology. On a related note, this is actually a case when use of the "style" attribute seems to be in order. I suppose I could set a class="e-in-TeX" on those spans and move the style into the site stylesheet, but that seems pretty silly too (though now that I have that markup in two or three places on this page that may indeed make sense). What I would like is a way to say, "Put TeX logo here," without having to repeat the icky markup for it every time. Chalk up a second point for 20-year-old technology, I guess. Posted by bzbarsky at 11:14 PM ### Non-English keyboards under X Today I decided to finally bite the bullet and set up X to let me enter Russian (mostly for e-mail purposes). Surprisingly enough, this was not as hard as I had expected—the HOWTOs and other documentation have gotten a lot better since the last time I considered this endeavor (3 years ago or so). The time breakdown for this was as follows: • Find the right documentation and read it: 30 minutes. • Configure X (experimenting, cursing, finding when reality does not match the HOWTO, discovering by trial and error how XFree86 4.0.1 and 4.3.0 differ, rearranging fonts in my font path, messing with locale environment variables, that sort of thing): 90 minutes. • Discover that there is no easy way I will be able to learn a standard Russian keyboard (given that the labels on my keys are those for a standard U.S. keyboard): 15 minutes of typing. • Look for a "transliterated" keyboard layout on the web: 30 minutes. • Give up on finding one and just write one based on the "ru" keyboard layout: 90 minutes. Somehow, the whole procedure recalled freshman year in college in my mind. Might have had something to do with the amount of time I spent back then configuring things like mail readers, window managers, X, emacs... Next project: find time for my annual search for a tool that converts TeX to MathML. The gotchas here are that it needs to deal with the various AMS packages, with things defined via \newtheorem, and with things defined via \def and \newcommand (though I'll settle for just one or the other if both are not supported). Oh, and dealing with \newenvironment would be a nice bonus. (Unlike some, I do not indulge in \catcode or \let much, so it's ok if those are not handled very well...) If you know of such a beast, please let me know. Posted by bzbarsky at 10:49 PM ### No more bugspam Skimming bugmail was not working, so I have moved along to simply not reading it. I'm still saving it, and I may come back and read it sometime. I may not. In any case, I'll not be receiving any review requests, in addition to bugspam. If there is something that absolutely needs my review, please mail me in person (no guarantees that I'll be able to do it, though). Posted by bzbarsky at 1:29 PM ## May 5, 2003 ### College students will protest the oddest things For a while now, there's been this tent in the middle of the quad here with a bunch of bricks spelling out "No Occupation." Seems that the "moral" thing for the U.S. to do would be to immediately pull all troops out of Iraq and let the infrastructure, civil service, police force, etc. just somehow rebuild themselves. I bet the looters would be ecstatic. As would the dictator who would manage to gain control of enough support to put himself in power. 
Some people's inability to think through the implications of their pat slogans is astounding. Especially when you consider that this is one of the premier institutions of higher education in the United States, with a strong humanities core curriculum that includes numerous history classes. You'd think people would have learned something in those classes.... Posted by bzbarsky at 4:53 PM ## May 3, 2003 ### Mail filtering I now filter mail to my MIT address into three folders: 1. Spam 2. Bugspam 3. Everything else Folder #3 is getting perhaps a dozen mails a day. I don't bother to read the other two much; every couple of days I skim subjects (and senders for the spam folder) and delete all those mails. Life is much improved. Posted by bzbarsky at 10:00 PM ### gzip defeated Looks like there is this nice url-uncompress function over in url.el. Just changing xml-rpc-request-process-buffer to run that before doing anything else makes life happy. Posted by bzbarsky at 2:53 PM
https://www.csauthors.net/zhiqiang-wang/
# Zhiqiang Wang

According to our database, Zhiqiang Wang authored at least 86 papers between 1995 and 2018.

## Bibliography

2018

Screen-Printed Electrode Modified by Bismuth/Fe3O4 Nanoparticle/Ionic Liquid Composite Using Internal Standard Normalization for Accurate Determination of Cd(II) in Soil. Sensors, 2018

Maglev Train Signal Processing Architecture Based on Nonlinear Discrete Tracking Differentiator. Sensors, 2018

Cross-camera multi-person tracking by leveraging fast graph mining algorithm. J. Visual Communication and Image Representation, 2018

Levitation control of permanent magnet electromagnetic hybrid suspension maglev train. J. Systems & Control Engineering, 2018

The role of indigenous technological capability and interpersonal trust in supply chain learning. Industrial Management and Data Systems, 2018

An Integrated Algorithm of CCPP Task for Autonomous Mobile Robot under Special Missions. Int. J. Comput. Intell. Syst., 2018

Finite-Time $$H_\infty$$ Model Reference Control for Linear Systems Based on Average Dwell-Time Approach. CSSP, 2018

Fused mean-variance filter for feature screening. Computational Statistics & Data Analysis, 2018

Digital Compensation Wideband Analog Beamforming for Millimeter-Wave Communication. Proceedings of the 87th IEEE Vehicular Technology Conference, 2018

2017

Splattering Suppression for a Three-Phase AC Electric Arc Furnace in Fused Magnesia Production Based on Acoustic Signal. IEEE Trans. Industrial Electronics, 2017

A Variant of Clark's Theorem and Its Applications for Nonsmooth Functionals without the Palais-Smale Condition. SIAM J. Math. Analysis, 2017

Global Feedback Stabilization for a Class of Nonlocal Transport Equations: The Continuous and Discrete Case. SIAM J. Control and Optimization, 2017

Satellite-Derived Spatiotemporal Variations in Evapotranspiration over Northeast China during 1982-2010. Remote Sensing, 2017

A chaotic coverage path planner for the mobile robot based on the Chebyshev map for special missions. Frontiers of IT & EE, 2017

Effects of institutional support on innovation and performance: roles of dysfunctional competition. Industrial Management and Data Systems, 2017

Demonstration of 60 GHz millimeter-wave short-range wireless communication system at 3.5 Gbps over 5 m range. SCIENCE CHINA Information Sciences, 2017

A static technique for detecting input validation vulnerabilities in Android apps. SCIENCE CHINA Information Sciences, 2017

A computational investigation of learning behaviors in MOOCs. Comp. Applic. in Engineering Education, 2017

Intent Identification for Knowledge Base Question Answering. Proceedings of the Conference on Technologies and Applications of Artificial Intelligence, 2017

An LTE-U coexistence scheme based on cognitive channel switching and adaptive muting strategy. Proceedings of the 28th IEEE Annual International Symposium on Personal, 2017

Using flow feature to extract pulsatile blood flow from 4D flow MRI images. Proceedings of the Medical Imaging 2017: Image Processing, 2017

Evaluation of human brain aging via diffusion structural characteristics. Proceedings of the 4th International Conference on Systems and Informatics, 2017

2016

Feedback Stabilization for the Mass Balance Equations of an Extrusion Process. IEEE Trans. Automat. Contr., 2016

Optimization of Stripping Voltammetric Sensor by a Back Propagation Artificial Neural Network for the Accurate Determination of Pb(II) in the Presence of Cd(II).
Sensors, 2016 Millimeter wave receive beamforming with one-bit quantization. J. Comm. Inform. Networks, 2016 A consumption system model integrating quality, satisfaction and behavioral intentions in online shopping. Information Technology and Management, 2016 Towards efficient sharing of encrypted data in cloud-based mobile social network. TIIS, 2016 Interactive effects of external knowledge sources and internal resources on the innovation capability of Chinese manufacturers. Industrial Management and Data Systems, 2016 A resource-based view on enablers of supplier integration: evidence from China. Industrial Management and Data Systems, 2016 Multi-source alert data understanding for security semantic discovery based on rough set theory. Neurocomputing, 2016 Efficient authentication and access control of message dissemination over vehicular ad hoc network. Neurocomputing, 2016 On-site detection of heavy metals in agriculture land by a disposable sensor based virtual instrument. Computers and Electronics in Agriculture, 2016 URL Based Gateway Side Phishing Detection Method. Proceedings of the 2016 IEEE Trustcom/BigDataSE/ISPA, 2016 Urban pluvial flood risk assessment based on scenario simulation. Proceedings of the 13th Proceedings of the International Conference on Information Systems for Crisis Response and Management, 2016 A network risk assessment methodology for power communication business. Proceedings of the IEEE International Conference on Network Infrastructure and Digital Content, 2016 2015 Modeling, Control Design, and Analysis of a Startup Scheme for Modular Multilevel Converters. IEEE Trans. Industrial Electronics, 2015 Analysis of a system of nonlocal conservation laws for multi-commodity flow on networks. NHM, 2015 Modes of service innovation: a typology. Industrial Management and Data Systems, 2015 IVDroid: Static Detection for Input Validation Vulnerability in Android Inter-component Communication. Proceedings of the Information Security Practice and Experience, 2015 Motion Simulation and Statics Analysis of the Stator and Rotor of Low Speed High Torque Water Hydraulic Motor. Proceedings of the Intelligent Robotics and Applications - 8th International Conference, 2015 2014 Design and Performance Evaluation of Overcurrent Protection Schemes for Silicon Carbide (SiC) Power MOSFETs. IEEE Trans. Industrial Electronics, 2014 Regularity Theory and Adjoint-Based Optimality Conditions for a Nonlinear Transport Equation with Nonlocal Velocity. SIAM J. Control and Optimization, 2014 A quantize-and-forward scheme for systems with two cognitive relays. Proceedings of the 2014 International Symposium on Wireless Personal Multimedia Communications, 2014 Compressive cooperative schemes with multiple relays. Proceedings of the 2014 International Symposium on Wireless Personal Multimedia Communications, 2014 Antecedents and Outcome of Information Sharing in Supply Chain. Proceedings of the Thirteenth Wuhan International Conference on E-Business, 2014 Study on module control of low-speed PEMS train. Proceedings of the International Conference on Multisensor Fusion and Information Integration for Intelligent Systems, 2014 Distributed detection on holes of component based on multi-view projection. Proceedings of the 2014 IEEE/ACIS 13th International Conference on Computer and Information Science, 2014 2013 Output Feedback Stabilization for a Scalar Conservation Law with a Nonlocal Velocity. SIAM J. Math. 
Analysis, 2013 Multi-aspect sentiment analysis for Chinese online social reviews based on topic modeling and HowNet lexicon. Knowl.-Based Syst., 2013 Threshold Random Walkers for Community Structure Detection in Complex Networks. JSW, 2013 RPFuzzer: A Framework for Discovering Router Protocols Vulnerabilities Based on Fuzzing. TIIS, 2013 A Novel Algorithm of Fundamental Positive Sequence Voltage Detector under Unbalanced and Distorted Voltages. iJOE, 2013 Effects of information technology alignment and information sharing on supply chain operational performance. Computers & Industrial Engineering, 2013 Fabrication of anisotropic nanomaterial by precise and large-area nanowire operation with focused-ion-beam. Proceedings of the 8th IEEE International Conference on Nano/Micro Engineered and Molecular Systems, 2013 Adaptation Phase-Locked Loop Speed and Neuron PI Torque Control of Permanent Magnet Synchronous Motor. Proceedings of the Advances in Neural Networks - ISNN 2013, 2013 Advanced starting point strategy for solving parametric DAE optimization problems. Proceedings of the 10th IEEE International Conference on Control and Automation, 2013 Design of a high voltage DC power supply with PFC function. Proceedings of the 2013 IEEE Industry Applications Society Annual Meeting, 2013 2012 An endurance solution for solid state drives with cache. Journal of Systems and Software, 2012 Aspect and Sentiment Extraction Based on Information-Theoretic Co-clustering. Proceedings of the Advances in Neural Networks - ISNN 2012, 2012 Parallel active contour with Lattice Boltzmann scheme on modern GPU. Proceedings of the 19th IEEE International Conference on Image Processing, 2012 Overlapping community discovery based on core nodes and LDA topic modeling. Proceedings of the 9th International Conference on Fuzzy Systems and Knowledge Discovery, 2012 A research on vulnerability discovering for router protocols based on fuzzing. Proceedings of the 7th International Conference on Communications and Networking in China, 2012 2011 Reduced precision solution criteria for nonlinear model predictive control with the feasibility-perturbed sequential quadratic programming algorithm. Journal of Zhejiang University - Science C, 2011 The fitness evaluation strategy in particle swarm optimization. Applied Mathematics and Computation, 2011 Lattice Boltzmann Method of Active Contour for Image Segmentation. Proceedings of the Sixth International Conference on Image and Graphics, 2011 From Biology to Inspiration and Back: Is the Pallidal Complex a Reservoir? Proceedings of the Biologically Inspired Cognitive Architectures 2011, 2011 2010 A study on bacterial colony chemotaxis algorithm and simulation based on differential strategy. IJMIC, 2010 A modified mnemonic enhancement optimization method for solving parametric nonlinear programming problems. Proceedings of the 49th IEEE Conference on Decision and Control, 2010 2009 Adaptive Simulated Annealing and its Application to Protein Folding. Proceedings of the Encyclopedia of Optimization, Second Edition, 2009 Exact Boundary Controllability for 1-D Quasilinear Hyperbolic Systems with a Vanishing Characteristic Speed. SIAM J. Control and Optimization, 2009 Cache simulator based on GPU acceleration. Proceedings of the 2nd International Conference on Simulation Tools and Techniques for Communications, 2009 Using GPU to Accelerate Cache Simulation. 
Proceedings of the IEEE International Symposium on Parallel and Distributed Processing with Applications, 2009 Overcoming Geoinformatic Knowledge Fence: An Exploratory of Intelligent Geospatial Data Preparation within Spatial Analysis. Proceedings of the Computational Science, 2009 Electrostatic Deposition of Fine Particles for Fabrication of Porous Ceramic Filter. Proceedings of the Annual Meeting of the IEEE Industry Applications Society, 2009 Research on the Semantics Based Cross-Media Information Retrieval. Proceedings of the CSIE 2009, 2009 WRI World Congress on Computer Science and Information Engineering, March 31, 2009 GCSim: A GPU-Based Trace-Driven Simulator for Multi-level Cache. Proceedings of the Advanced Parallel Processing Technologies, 8th International Symposium, 2009 Name Transliteration with Bidirectional Perceptron Edit Models. Proceedings of the 2009 Named Entities Workshop: Shared Task on Transliteration, 2009 Accelerate Cache Simulation with Generic GPU. Proceedings of the Ninth IEEE International Conference on Computer and Information Technology, 2009 2008 The Classification Diagram of Character Identification in Several Different and Similar Structures of Time Series. Proceedings of the 2008 International Symposium on Computer Science and Computational Technology, 2008 N-Module Based Self-Adaptive Contention Resolution Scheme for WiMAX P2MP Network. Proceedings of the 10th IEEE International Conference on High Performance Computing and Communications, 2008 2006 A parallel algorithm for 3D dislocation dynamics. J. Comput. Physics, 2006 A Constructive Algorithm for Training Heterogeneous Neural Network Ensemble. Proceedings of the Rough Sets and Knowledge Technology, First International Conference, 2006 2005 New Experiments in Distributional Representations of Synonymy. Proceedings of the Ninth Conference on Computational Natural Language Learning, 2005 1999 Large Scale Molecular Dynamics Simulations with Fast Multipole Implementations. Proceedings of the ACM/IEEE Conference on Supercomputing, 1999 1997 Prediction of peptide conformation: The adaptive simulated annealing approach. Journal of Computational Chemistry, 1997 1995 The design of chromophore containing biomolecules. Proceedings of the Global Minimization of Nonconvex Energy Functions: Molecular Conformation and Protein Folding, 1995
https://physics.stackexchange.com/questions/557305/what-is-a-coordinate-free-formulation-of-deformation-theory
# What is a coordinate-free formulation of deformation theory?

For example, how are stress, strain and shear tensors described invariantly, without any coordinates, purely in a geometric manner? A formulation that avoids indices, coordinates and matrices, even in practical calculations.

• Have you looked at the brief description of the tensor form of the linear elasticity equations on Wikipedia? Jun 5, 2020 at 12:40
• Yea but it's too brief, and it doesn't show any concrete examples without Cartesian or other coordinates. Jun 5, 2020 at 13:03
• You cannot avoid coordinates and indices in practical (physics) calculations. – user21299 Jun 5, 2020 at 13:37
• I'm not looking for opinions here Jun 5, 2020 at 13:41
• @Ezio It's "too brief" because that's all there is to it, and there aren't any examples because for almost all practical purposes you need a definite coordinate system. How would you propose to define the stress-strain relationship for a general anisotropic material, unless you have some way to talk about the orientation of the material at different points in space, for example - and that is going to involve defining some basis vectors, which is the same thing as a coordinate system. Jun 5, 2020 at 14:27

I would recommend looking at the works of W. Noll, C. Truesdell and collaborators. They have been working on the mathematical foundations of continuum mechanics since the 1950s, producing several textbooks and monographs, the most notable being The non-linear field theories of mechanics by C. Truesdell & W. Noll. For a more modern exposition see the paper:

From the introduction:

This paper is intended to serve as a model for the first few chapters of future textbooks on continuum mechanics and continuum thermomechanics. It may be considered an update of the paper Lectures on the Foundations of Continuum Mechanics and Thermodynamics [N2] by one of us (W.N.), published in 1973, and an elaboration of topics treated in Part 3, entitled Updating the Non-Linear Field Theories of Mechanics, of the booklet [FC] by W.N.$${}^1$$. The present paper differs from most existing textbooks on the subject in several important respects:

1. It uses the mathematical infrastructure based on sets, mappings, and families, rather than the infrastructure based on variables, constants, and parameters. (For a detailed explanation, see The Conceptual Infrastructure of Mathematics by W.N. [N1].)
2. It is completely coordinate-free and $$\mathbb{R}^n$$-free when dealing with basic concepts.
3. It does not use a fixed physical space. Rather, it employs an infinite variety of frames of reference, each of which is a Euclidean space. The motivation for avoiding physical space can be found in Part 1, entitled On the Illusion of Physical Space, of the booklet [FC]. Here, the basic laws are formulated without the use of a physical space or any external frame of reference.
4. It considers inertia as only one of many external forces and does not confine itself to using only inertial frames of reference. Hence kinetic energy, which is a potential for inertial forces, does not appear separately in the energy balance equation. In particle mechanics, inertia plays a fundamental role and the subject would collapse if it is neglected. Not so in continuum mechanics, where it is often appropriate to neglect inertia, for example when analyzing the motion of toothpaste when it is extruded slowly from a tube.
See also the PhD thesis Frame-Free Thermomechanics (2010) by Seguin and other papers (including those referenced in the above quote) at Noll's webpage.

• Noll's stuff is really interesting; I like his approach. The link you put in seems only quasi coordinate-free. It says it's coordinate-free only for basic objects. Jun 6, 2020 at 14:44
• The first link, that is. Jun 6, 2020 at 14:46
• The 2nd list item of the quote should be read as It is (completely coordinate-free) and ($\mathbb{R}^n$-free when dealing with basic concepts): the basic concepts bit refers to $\mathbb{R}^n$ independence. At some point they do assume Euclidean spaces, but the theory remains coordinate-free even after that. Jun 6, 2020 at 15:13
• Euclidean space is not the same thing as $\mathbb{R}^n$. I do appreciate the abstraction, but I would like to see how this formalism applies to some concrete examples. Otherwise I can't see the point of all this structure. One of the main reasons I asked for coordinate-free is for more simplicity. For example, by a tensor I simply mean some linear function of some sort of vectors. In theory these papers seem nice, but I don't see how this apparent loss of simplicity is justified practically. Jun 6, 2020 at 15:57
• My impression is that a higher level of abstraction could be used to simplify the description of complicated materials: nematic liquid crystals, materials with memory, etc. Jun 6, 2020 at 16:29

The infinitesimal strain tensor is defined by $$\textstyle{\frac 12} {\mathcal L}_{\boldsymbol \eta} {\bf g}$$ where $${\bf g}$$ is the usual metric of our 3-d Euclidean world. Here $${\mathcal L}_{\boldsymbol \eta}$$ is the Lie derivative with respect to the displacement vector field $${\boldsymbol \eta}$$. For large displacements that take a point $${\bf r}$$ to $$\phi({\bf r})$$ we define the finite strain as $$\textstyle{\frac 12}( \phi^*({\bf g})-{\bf g})$$. Here $$\phi^*{\bf g}({\bf x},{\bf y})= {\bf g}(\phi_*({\bf x}),\phi_*({\bf y}))$$. In other words, take two small displacements $${\bf x}$$, $${\bf y}$$ in the undeformed material and take their inner product. Now deform the material so that the displacement vectors get moved (possibly a long way) and stretched and rotated to (still small) displacements $$\phi_*({\bf x})$$ and $$\phi_*({\bf y})$$. Take their new inner product (in our ambient 3-space). The difference between the original inner product and the one of the deformed vectors defines the finite strain tensor $${\bf e}$$ evaluated on $${\bf x},{\bf y}$$. None of these concepts needs a coordinate system.

• OK, this makes sense. Something along the lines of what I had in mind. So let's say, for example, the Cauchy stress tensor. It would be some linear function that would take in vectors (functions, displacements). What are some practical or real-world examples of this, without having to resort to coordinates? Jun 6, 2020 at 14:43
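The definitions above need no coordinates, but once Cartesian coordinates are chosen, the pullback definition of finite strain reduces to the familiar Green-Lagrange formula $$\textstyle{\frac12}(F^T F - I)$$, with $$F$$ the deformation gradient. Here is a minimal numerical check of that reduction (my addition; the simple-shear map and the Python code are illustrative, not from the answer):

```python
import numpy as np

# A hypothetical deformation: simple shear x -> (x + 0.3*y, y, z).
gamma = 0.3
def phi(p):
    x, y, z = p
    return np.array([x + gamma * y, y, z])

def deformation_gradient(p, h=1e-6):
    """F[i, j] = d(phi_i)/d(p_j), estimated by central differences."""
    F = np.zeros((3, 3))
    for j in range(3):
        dp = np.zeros(3)
        dp[j] = h
        F[:, j] = (phi(p + dp) - phi(p - dp)) / (2 * h)
    return F

p = np.array([1.0, 2.0, 0.5])
F = deformation_gradient(p)

# Finite strain (phi* g - g)/2, which in these coordinates is (F^T F - I)/2.
E = 0.5 * (F.T @ F - np.eye(3))
print(np.round(E, 6))   # off-diagonal 0.15 = gamma/2, plus a gamma^2/2 normal term
```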
http://crypto.stackexchange.com/questions/12145/need-32-bit-mixing-function-that-has-perfect-avalanche-between-octets
# Need 32-bit mixing function that has perfect avalanche between octets for my hobby tinkering project, I need a mixing function that takes 32-bit input and has 32-bit output (and will, most likely, run in a 32-bit C environment) and the following property (independent of endianness, i.e. it’s enough to only look at either big endian or little endian (or pdp endian) systems): union u { uint32_t u32; uint8_t u8[4]; } ival, oval, ival2, oval2; int x; ival.u32 = /* some unsigned 32-bit value */; oval.u32 = ƒ(ival.u32); x = /* some value ∈ { 0, 1, 2, 3 } */; ival2.u32 = ival.u32; ival2.u8[x] ^= /* some nonzero octet */; /* ival2 = ival where exactly one input octet differs */ oval2.u32 = ƒ(ival2.u32); assert(oval2.u8[0] ≠ oval.u8[0]); /* first output octet differs */ assert(oval2.u8[1] ≠ oval.u8[1]); /* second output octet differs */ assert(oval2.u8[2] ≠ oval.u8[2]); /* third output octet differs */ assert(oval2.u8[3] ≠ oval.u8[3]); /* fourth output octet differs */ That is, I need a function where, when you change one of the octets in the input, all four octets of the output differ. Since this is a 32-bit to 32-bit mapping, it can be a perfect mixing function (bijective, not one-way); this would be extremely beneficial because I could then replace the Final function of a 32-bit (nōn-cryptographic) hash (based on Jenkins’ one-at-a-time but tweaked) I’m using in the same context with it, too, and it would not lose any fractional bits of entropy from the input. Of course I could do this “the simple way” with lookup tables, but the idea is to have this in only a few lines of code (that is, few machine instructions), either completely algorithmic, or with only, say, up to 256 bytes of read-only data. I bet there’s something like this already around. (My target CPUs are, for now, i486, sparc v8, 32-bit MIPS, but I’d want it in portable code, not assembly; C is just fine as long as it uses only unsigned integers, since I’ll most likely need to implement it in C.) Please only include code snippets if they are true Public Domain (e.g. government work) or I can reuse them under the MIT Licence, the MirOS Licence, or the BSD Licence, or in a language not C with a permission for me to “rewrite the same algorithm” in C (while not copying a line of code); otherwise, please only include algorithmic descriptions that are enough for me to write C code from it. And nothing that’s legally dangerous or questionable of course ☺ My strength is coding, not mathematics (I already had to prove the bijectivity of the Finish function of my modified Jenkins-OAAT empirically, i.e. by trying all possible 2³² combinations), that’s why I thought to ask here. (No homework or commercial stuff, just trying to improve the world, in the context of the BSD Unix operating systems, and Open Source.) - The avalanche property does not have to hold if more than one octet gets modified, right? –  nightcracker Dec 5 '13 at 23:31 Oh, sorry, that is underspecified then. If two get modified, the other two should both change; similar for three, but this is harder to express. I think that, since we get a 2³²→2³² mapping, this should be doable. –  mirabilos Dec 5 '13 at 23:42
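One widely used candidate shape for such a function is an xorshift-multiply finalizer; the constants below are the MurmurHash3 finalizer constants, and MurmurHash3 is public domain, which fits the licensing constraints above. The function is bijective on 32 bits (xor-with-right-shift and multiplication by an odd constant are both invertible modulo 2³²) and avalanches well, but note that the strict all-four-output-octets-differ property asked for here is not guaranteed by construction; it would still need the same exhaustive 2³² verification described above for the tweaked one-at-a-time hash. A sketch in Python with explicit 32-bit masking (each statement translates one-for-one to C on uint32_t):

```python
M1, M2 = 0x85EBCA6B, 0xC2B2AE35  # MurmurHash3 finalizer constants (public domain)
MASK = 0xFFFFFFFF                # keep everything in uint32_t range

def mix32(h: int) -> int:
    """Bijective 32->32 mixer; in C: h ^= h >> 16; h *= M1; h ^= h >> 13; ..."""
    h ^= h >> 16
    h = (h * M1) & MASK
    h ^= h >> 13
    h = (h * M2) & MASK
    h ^= h >> 16
    return h

# Spot-check inter-octet avalanche: flip one input octet, compare output octets.
ival = 0x12345678
oval = mix32(ival)
oval2 = mix32(ival ^ 0x00FF0000)   # exactly one input octet changed
for i in range(4):
    b, b2 = (oval >> (8 * i)) & 0xFF, (oval2 >> (8 * i)) & 0xFF
    print(f"octet {i}: {b:02x} vs {b2:02x} -> {'differs' if b != b2 else 'same'}")
```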
https://socratic.org/questions/a-sample-of-rock-is-found-to-contain-100-grams-ion-a-parent-isotope-how-many-gra
# A sample of rock is found to contain 100 grams of a parent isotope. How many grams of the parent isotope will remain after two half-lives?

After one half-life, 50 grams remain: $\frac{100}{2} = 50$

After two half-lives, 25 grams remain: $\frac{50}{2} = 25$
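For reference, the general relation behind this arithmetic (added here; not part of the original answer), where $m_0$ is the starting mass and $n$ the number of elapsed half-lives:

$$m(n) = \frac{m_0}{2^n}, \qquad m(2) = \frac{100\text{ g}}{2^2} = 25\text{ g}$$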
2019-07-22 12:48:52
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 2, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.1935151368379593, "perplexity": 1309.2451943124195}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-30/segments/1563195528013.81/warc/CC-MAIN-20190722113215-20190722135215-00351.warc.gz"}
https://math.stackexchange.com/questions/787710/showing-that-pim-n-pim-ast-pin-for-n-dimensional-manifolds-m
# Showing that $\pi(M \# N) = \pi(M) \ast \pi(N)$ for $n$-dimensional manifolds $M$,$N$

Problem: Let $M$ and $N$ be $n$-dimensional manifolds, where $n > 2$. Let $M \# N$ be their connected sum. Show that $\pi(M \# N) = \pi(M) \ast \pi(N)$.

RE-EDITED Attempt:

1. Let $U_2$ and $V_2$ be two small open balls from $M$ and $N$ to be removed by the connected sum operation. Let $p_m \in U_2$ and $p_n \in V_2$, and then let $U_1 = M - \{p_m\}$ and $V_1 = N - \{p_n\}$.

2. We can now view the connected sum $M\#N$ as the quotient of the disjoint union of $M$ and $N$ by an equivalence relation identifying $U_2 -\{p_m\}$ with $V_2-\{p_n\}$ defined by some homeomorphism between the two. Denote $W_1$ and $W_2$ as the respective images of $U_1$ and $V_1$ under the quotient map.

3. We have that $U_2$ and $V_2$ are open by construction. Furthermore, since $M$ and $N$ are open, we have that $U_1 = M - \{p_m\}$ and $V_1 = N - \{p_n\}$ are also open (since open sets with a point removed are still open).

4. Then we have the following open covers of $M$ and $N$ respectively: $$M = U_1 \cup U_2$$ $$N = V_1 \cup V_2$$

5. We can then express $$M \# N = \underbrace{W_1}_{\text{open}} \cup \underbrace{W_2}_{\text{open}}$$ as well since $M \cup N = V_1 \cup U_1$ and $W_1$ and $W_2$ are just the images of $U_1$ and $V_1$ under the natural quotient map used to define $M \# N$.

6. Consider that $W_1 \cap W_2$ is path connected since $W_1 \cap W_2$ is homeomorphic to $U_2 - \{p_m\} \cong V_2 - \{p_n\}$, both of which are punctured disks which deformation retract onto $S^{n-1}$. Since spheres of dimension at least $2$ are simply connected (hence path connected), we have that $W_1 \cap W_2$ is path connected as well. This will allow us to later apply Van Kampen to $M \# N = W_1 \cup W_2$.

7. Now we have that $$U_1 \cap U_2 = U_2 - \{p_m\} \cong W_1 \cap W_2 \cong V_2 - \{p_n\} = V_1 \cap V_2$$ so that from above we can say $$\pi_1(U_1 \cap U_2) \cong \pi(V_1 \cap V_2) \cong \pi(W_1 \cap W_2) \cong \{e\}$$

8. Now since $n > 2$, we have that $$\underbrace{\pi_1(U_1) = \pi_1(M - \{p_m\}) \cong \pi_1(M) \cong \pi_1(W_1)}_{\text{removing a point doesn't change fundamental group for n > 2}}$$ and similarly $$\pi_1(V_1) = \pi_1(N - \{p_n\})\cong\pi_1(N) \cong \pi_1(W_2)$$

9. Then applying Van Kampen on $M \# N = W_1 \cup W_2$ yields that $$\pi_1(W_1) \ast_{\pi_1(W_1 \cap W_2)} \pi_1(W_2) \cong \pi_1(M\#N)$$

10. But since (8) yields that $$\pi_1(W_1) \ast_{\pi_1(W_1 \cap W_2)} \pi_1(W_2) \cong \pi_1(W_1) \ast_{\{e\}} \pi_1(W_2) \cong \pi_1(W_1) \ast \pi_1(W_2) \cong \pi_1(M) \ast \pi_1(N)$$ we then have from (9) that $$\pi_1(M) \ast \pi_1(N) \cong \pi_1(M\#N)$$ as desired.

• Hint: use Seifert–van Kampen – Grigory M May 9 '14 at 10:55
• $\pi_1(M)$ is a group not a number, so «$\pi_1(M)=\infty$» is not even wrong – Grigory M May 9 '14 at 10:56
• @user1770201: What is $\pi$? – user99914 May 9 '14 at 11:07
• @John: The fundamental group, or the set of all paths (up to homotopy) of $M$ or $N$. Since $M$ and $N$ are manifolds, I presume our choice of base points don't matter. – user1770201 May 9 '14 at 11:11
• In light of the duplicate post flagged by Grigory M and other users, I edited my post above with (what I think is) a complete solution to the problem but with questions flagged on the steps where I'm specifically confused. I appreciate the comments. – user1770201 May 9 '14 at 15:28

Let's be a little more precise.
The connected sum $M\#N$ is defined as the quotient of the disjoint union of $M$ and $N$ by an equivalence relation identifying $U_2 -\{p_m\}$ with $V_2-\{p_n\}$ defined by some homeomorphism between the two. Then denote by $W_1$ the image of $U_1$ under the quotient map and by $W_2$ the image of $V_1$ under the quotient map.

5) $W_1\cap W_2$ is connected because it is homeomorphic to $U_2-\{p_m\}\cong V_2-\{p_n\}$. These are punctured disks, which deformation retract onto spheres. Spheres of dimension greater than $0$ are connected. (Note that this still works for $n=2$.)

6) See above: everything here intersects in the punctured disk where we glued to create the connect sum. Punctured disks deformation retract to spheres.

7) The space $S^{n-1}$ is definitely not contractible for $n\geq 2$. It is simply connected.

8) Here is where you need the hypothesis $n>2$. For dimension greater than $2$, removing a point does not change the fundamental group. To see why, show that the homomorphism $\pi_1(U_1)\to \pi_1(M)$ induced by the inclusion is actually an isomorphism. The intuition is that if any of the homotopies between generators of $\pi_1(M)$ hits the ball you've removed, it can be "moved" so it "goes around" the ball. (Removing a point does change higher homotopy groups --- $M$ is not homotopy equivalent to $U_1$ --- but does not change $\pi_1$.)

• In your first paragraph, why is the equivalence relation identifying $U_2 - \{p_m\}$ with $V_2 - \{p_n\}$ instead of just $U_2$ with $V_2$? Also in your first paragraph: how do we know what the image of $U_1$ and $V_1$ is under the quotient map for purposes of defining $W_1$ and $W_2$? – user1770201 May 11 '14 at 16:58
• @user1770201 Because $U_2$ and $V_2$ are open disks; for a connect sum we don't want to glue the two manifolds along a disk, we want to glue them along a collar of the removed disks. – Neal May 11 '14 at 17:24
• @user1770201 I'm not sure I understand the question. The image of $U_1$ is the set of equivalence classes in $M\# N$ which have at least one representative in $U_1$. – Neal May 11 '14 at 17:26
• Do you have a link which explains in more detail the definition of $M \# N$ you are using? Definitions seem to vary from site to site (wikipedia, proof wiki, etc) and I'm trying to find yours. – user1770201 May 12 '14 at 2:21
• @user1770201 They're all the same: Delete a disk, glue along boundaries. :) Mine is equivalent to Wikipedia's in the case of a smooth manifold; the disk $U_i$ is a tubular neighborhood of $p_i$ diffeomorphic to the normal bundle of $p_i$. Wikipedia also specifies a choice of gluing map, which I did not do. – Neal May 12 '14 at 2:36
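For reference, the chain of isomorphisms assembled by steps (9)–(10) above can be written in one line (a summary added here; it uses that $\pi_1(S^{n-1})$ is trivial for $n > 2$):

$$\pi_1(M \# N) \;\cong\; \pi_1(W_1) \ast_{\pi_1(W_1 \cap W_2)} \pi_1(W_2) \;\cong\; \pi_1(M) \ast_{\pi_1(S^{n-1})} \pi_1(N) \;\cong\; \pi_1(M) \ast \pi_1(N).$$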
2019-08-19 10:11:59
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9103403687477112, "perplexity": 175.54396619513986}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-35/segments/1566027314721.74/warc/CC-MAIN-20190819093231-20190819115231-00088.warc.gz"}
http://tex.stackexchange.com/questions/41198/how-can-you-change-what-frame-sectionpage-does-in-beamer
# How can you change what \frame{\sectionpage} does in beamer?

I am using beamer to make a presentation. I would like to have a page after I start a new section with only the section title on it. When I use the command \frame{\sectionpage} I end up getting a page saying "Section 1" and then "the section title". I would however like to only have "the section title" and remove "Section 1". How can I do this?

-

You can use the \insertsection command to get the current section name and fashion your own section page.

\documentclass{beamer}
\begin{document}
\section{Section name A}
\frame{\insertsection}
\section{Section name B in the frametitle}
\begin{frame}{\insertsection}
\end{frame}
\end{document}

To modify the current template, you can look up \AtBeginSection in the manual, p. 97 (v3.12).

-

I tailored my own template for the section page by using \setbeamertemplate and stripping the unneeded part from the original \defbeamertemplate*{section page} in beamer/base/themes/inner/beamerinnerthemedefault.sty:

\setbeamertemplate{section page}
{
  \begin{centering}
    \begin{beamercolorbox}[sep=12pt,center]{part title}
      \usebeamerfont{section title}\insertsection\par
    \end{beamercolorbox}
  \end{centering}
}

-

Welcome to TeX.SX! – henrique Sep 9 '13 at 21:46
2015-07-07 09:02:50
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5583754181861877, "perplexity": 3590.0932912559906}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-27/segments/1435375099105.15/warc/CC-MAIN-20150627031819-00188-ip-10-179-60-89.ec2.internal.warc.gz"}
https://docs.relational.ai/getting-started/help/error-messages/ic-body-error
# Error code: NON_FORMULA_IC_BODY

This guide presents the integrity constraint (IC) body error and how to solve it.

An integrity constraint ensures the integrity of the database, requiring that its relations and data obey the specified constraint. Its body should always be defined as a formula, which evaluates to true or false. If this is not the case, a NON_FORMULA_IC_BODY error is raised. See the following example:

def R = {1, 2}
ic { R[1] }

The body R[1] does not evaluate to true or false, so it is not a formula. A solution to the above could be something like this, where the comparison turns the body into a formula:

def R = {1, 2}
ic { R[1] > 0 }
2022-09-28 23:19:14
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5613763928413391, "perplexity": 2888.1861550877516}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335286.15/warc/CC-MAIN-20220928212030-20220929002030-00774.warc.gz"}
https://w3.onera.fr/smac/?q=tracker
Routine tracker

Rational approximation of tabulated data using genetic programming.

Description

This routine uses Genetic Programming (see below) to compute a multivariate rational approximation $f:\mathbb{R}^n\rightarrow\mathbb{R}$ of a set of scalar samples $\left\{y_k\in\mathbb{R}, k\in [1, N]\right\}$ obtained for different values $\left\{x_k\in\mathbb{R}^n,k \in [1, N]\right\}$ of some explanatory variables $x$. The maximum degree of the rational function is set by the user, and a sparse rational expression is usually obtained. Unless specified otherwise, its nonsingularity is guaranteed on the whole considered domain (defined as the convex hull of $\left\{x_k\in\mathbb{R}^n,k \in [1, N]\right\}$).

When the model structure has to be determined as a whole (numerator/denominator degrees, number and type of monomials), the problem cannot generally be solved by means of classical techniques. Beyond a few explanatory variables, a sequential and systematic exploration of the terms cannot reasonably be expected owing to the curse of dimensionality. E.g., for 2 variables and a maximum degree of 10, there are no fewer than $10^{15}$ rational candidates available! Moreover, this dual-purpose optimization (model structure and parameters) is complicated by the fact that a rational model is no longer a Linear-in-its-Parameters (LP) model. Fortunately, some promising techniques have recently appeared for global optimization, with the purpose of solving symbolic regression problems close to this one. It is especially the case of Genetic Programming, and that is why this was adapted to the approximation of a rational function.

The interest in combining least-squares methods (LS) with symbolic optimization of the kernel functions has already been studied some years ago and has resulted for instance in the Matlab code GP-OLS [4]. More recently, another Toolbox has been developed (GPTIPS), permitting to encode and to adapt a LP model in a multigene symbolic regression form [6]. However, none of those tools is suitable for directly synthesizing a rational model, especially because the parameter optimization then becomes a nonlinear process. Consequently, the approach proposed here is somewhat different and is dedicated to the rational case. It stems from several considerations: 1/ the rational case extends the polynomial case (structured modeling expressed as the quotient of two polynomials) 2/ GP is fully justified then since there is no other classical option available for jointly optimizing the structure of numerator and denominator (a brute-force search would not attempt to minimize the number of monomials) 3/ those two components remain LP when considered separately, and it would be a pity not to take advantage of that. In other words, GP is clearly a promising alternative for rational modeling, but a prior adaptation of the method is required to use it with maximum efficiency. That is why the algorithm called TRACKER (Toolbox for Rational Approximation Computed by Knowing Evolutionary Research) has been developed by ONERA. See [2] for more details.

TRACKER algorithm in a few words

The outline of TRACKER is the following [2]. Each component of the rational function (numerator and denominator) is represented by a single separate chromosome which comes in a syntax tree form as usual, and a priori includes several genes.
The sets F$=\{+,*\}$ and T$=\{x^0=1,x^1,...,x^n\}$ are chosen as for the polynomial case (see section below about GP), and a peculiar syntax rule is defined to ensure that all the non terminal nodes located below a "$*$"-type node are also "$*$"-type nodes. This trick avoids creating useless branches which could result in splitting and duplicating some monomials. Thanks to this architecture, a gene appears as a subtree linked to the root node of its chromosome through one or several "$+$"-type nodes. A parse analysis of the different genes composing a chromosome also makes it possible to avoid the creation of spurious genes by identifying and grouping them if any. As an example, the figure below depicts the tree's architecture corresponding to the function $(a_0+a_1*x_2+a_2*x_1*x_3)/(b_0+b_1*x_1+b_2*x_1^2*x_3+b_3*x_2*x_3^2)$. The 5 genes related to the different monomials are highlighted by colours (except for the constants $a_0$ and $b_0$ which are an integral part of the structure).

To solve the parametric optimization, i.e. to estimate the regression coefficients $(a_j,b_j)$ of any created tree, TRACKER implements a well-known technique, in use for identifying transfer functions in the frequency domain [1,5]. In the general case where $f$ is a rational function expressed as:

$(2)\hspace{3cm}y_k=f(x_k)=P(x_k)\bigg/Q(x_k)=\displaystyle\sum_{j=0}^{n_P}a_j\ r_j^P(x_k)\bigg/\displaystyle\sum_{j=0}^{n_Q}b_j\ r_j^Q(x_k)$

where $\left\{r_j^P,j\in [1,n_P]\right\}$ and $\left\{r_j^Q,j\in [1,n_Q]\right\}$ are two sets of monomial regressors, this consists in iteratively linearizing the expression of the error term involved in the quadratic cost function as:

$(3)\hspace{3cm}\hat{\epsilon}_k\simeq\bigg[\displaystyle\sum_{j=0}^{n_P}a_j\ r_j^P(x_k)-\displaystyle\sum_{j=1}^{n_Q}b_j\ y_k\ r_j^Q(x_k)-y_k\bigg]\bigg/\bigg[1+\displaystyle\sum_{j=1}^{n_Q}\hat{b}_j\ r_j^Q(x_k)\bigg]$

where the approximation of $\epsilon_k$ comes from the replacement of the parameters $b_j$ by their most recent estimates $\hat{b}_j$ at the running iteration, and the denominator is arbitrarily normalized by choosing $b_0=1$. This algorithm relies on the fact that the denominator approximation becomes fully justified when the iterative process has converged. Hence, the parameters $(a_j,b_j)$ to be determined just become the solution of a linearized LS problem. In practice, 2 or 3 iterations are usually sufficient to ensure the convergence of this process, conveniently initialized by setting the denominator to 1 at the first iteration. In case of ill-conditioning, a few iterations of Levenberg-Marquardt optimization are used to recover a satisfactory result. Introduced into the GP selection process (see section below about GP), that technique makes it possible to evaluate the performance of every individual very easily, by reducing it to a short series of ordinary LS problems. The overhead remains limited because most of the computations required for the regression matrix can be stored and reused throughout the loop. Otherwise, to get well-defined LFRs, an important issue is to check that the denominator has no roots in the considered domain, i.e. $Q(x)\neq0$ for all $x$. Hence, a $\mu$-analysis based technique is implemented to check the nonsingularity of the resulting functions, and additional constraints are introduced until strictly positive denominators are obtained [2].
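To make the iterative linearization of (3) concrete, here is a minimal structural sketch in C (ours, not the TRACKER source). `ls_solve` is a hypothetical dense least-squares helper assumed to be available, the regressor matrices are assumed precomputed, and error handling is omitted:

#include <stdlib.h>
#include <stddef.h>

/* hypothetical dense least-squares solver, assumed available */
extern void ls_solve(size_t rows, size_t cols,
                     const double *A, const double *rhs, double *theta);

/* Sketch of the iteration behind Eq. (3).  rp is N x np (numerator
 * regressors, constant included); rq is N x nqf (denominator regressors
 * WITHOUT the constant, since b0 is frozen to 1). */
void sk_fit(size_t N, size_t np, size_t nqf,
            const double *rp, const double *rq, const double *y,
            double *a, double *b)
{
    size_t cols = np + nqf, it, k, j;
    double *A = malloc(N * cols * sizeof *A);
    double *rhs = malloc(N * sizeof *rhs);
    double *theta = malloc(cols * sizeof *theta);

    for (j = 0; j < nqf; j++)            /* first pass: denominator = 1 */
        b[j] = 0.0;

    for (it = 0; it < 3; it++) {         /* 2-3 iterations usually suffice */
        for (k = 0; k < N; k++) {
            double D = 1.0;              /* D = 1 + sum_j b_j r^Q_j(x_k)  */
            for (j = 0; j < nqf; j++)
                D += b[j] * rq[k * nqf + j];
            for (j = 0; j < np; j++)     /* columns for the a_j           */
                A[k * cols + j] = rp[k * np + j] / D;
            for (j = 0; j < nqf; j++)    /* columns for the b_j           */
                A[k * cols + np + j] = -y[k] * rq[k * nqf + j] / D;
            rhs[k] = y[k] / D;
        }
        ls_solve(N, cols, A, rhs, theta);  /* an ordinary linear LS       */
        for (j = 0; j < np; j++)  a[j] = theta[j];
        for (j = 0; j < nqf; j++) b[j] = theta[np + j];
    }
    free(A); free(rhs); free(theta);
}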
Syntax

[pop,best,fdata,fdesc,fsym,flfr]=tracker(X,Y,names,maxdeg{,options})

Input arguments

The first four input arguments are mandatory:

X: Values $\left\{x_k\in\mathbb{R}^n,k \in [1, N]\right\}$ of the explanatory variables $x$ ($n\times N$ array, where X(:,k) corresponds to $x_k$).
Y: Samples $\left\{y_k\in\mathbb{R}, k\in [1, N]\right\}$ to be approximated ($1\times N$ array where Y(k) corresponds to $y_k$).
names: Names of the explanatory variables $x$ ($1\times n$ cell array of strings).
maxdeg: Maximum degree of the approximating rational function $f$.

The fifth input argument is an optional structured variable. Default values are given between brackets. Most of the parameters can be left at their default setting, some of them (tree features, evolutionary process) being intended mainly for experienced users. The most important ones are maxexp, nbind and nbgen.

options — User constraints and settings
- maxexp: [maxdeg*ones(1,n)] maximum exponent of each explanatory variable in the approximating rational function $f$.
- weights: [ones(1,N)] vector containing the weighting coefficients applied to the data samples.
- trace: [1] trace of execution (0=no, 1=text, 2=text+figures).
- viewpoint: [0 0] deviation with respect to the default viewpoint (see plotapprox); applicable only if 3-D graphs are to be displayed (options.trace=2 and $n=2$).

Population features
- nbind: [100] number of individuals in the population.
- nbgen: [100] maximum number of generations.
- display: [5] display rate in number of generations.

Convergence parameters
- goal: [false] to stop if a minimum fitness value is reached.
- target: [0] minimum fitness value for stopping.
Remark: stopping the run at any time is also possible by changing a flag from 0 to 1 in the file stop.dat (the flag is checked by the code at the end of every generation).

Set of terminal nodes
- nbfvar: [-1] number of fictitious variables added to get simpler trees (nbfvar<0 to let the fictitious variables be automatically determined w.r.t. maxexp, nbfvar=0 to avoid any fictitious variable, nbfvar>0 to choose the number and type of fictitious variables).
- fvar: [ [] ] vector containing the numbers of the explanatory variables associated to the fictitious variables (to be provided only if nbfvar>0).
- dfvar: [ [] ] vector containing the relative degrees of the explanatory variables associated to the fictitious variables (to be provided only if nbfvar>0).

Tree features
- treedepth: [13] maximum allowed depth of trees (can be automatically incremented during the run until the treedepthigh value is reached).
- treenodes: [200] maximum allowed number of nodes per tree.
- treealgo: [3] method used to generate the trees (1=full, 2=grow, 3=ramp half and half).
- subtreedepth: [min(8,fix(2*treedepth/3))] maximum allowed depth of sub-trees when using the mutation operator.
- treedepthigh: [18] absolute maximum value of the treedepth parameter.

Evolutionary process
- pmutation: [0.10] probability of mutation.
- pcrossover: [0.85] probability of crossover.
- pcopy: [0.05] probability of direct reproduction.
Remark: pmutation, pcrossover and pcopy must sum to 1.
- pmutation_subtree: [0.90] probability of sub-tree mutation.
- pmutation_termswitch: [0.10] probability of exchange between terminal nodes.
Remark: pmutation_subtree and pmutation_termswitch must sum to 1.
- toursize: [4] tournament size during the selection process.
- highlevcross: [0.5] proportion of high level crossover allowed for '+'-type nodes.
- parsipress: [false] lexicographic parsimony pressure.
- elitism: [0.05] elite proportion of the population kept for the following generation.

Output arguments

pop: Structure containing the main information about the individuals of the final population:
- nbindiv: number of individuals in the population.
- nbinputs: number of explanatory variables (including fictitious variables if any).
- nbfinputs: number of fictitious variables added to get simpler trees.
- f2nfinputs: row vector (1 x nbfinputs) containing the numbers of the explanatory variables associated to the fictitious variables (only if nbfvar~=0).
- degfinputs: row vector (1 x nbfinputs) containing the relative degrees of the explanatory variables associated to the fictitious variables (only if nbfvar~=0).
- indiv: cell vector containing a representation of the parse trees associated to the individuals of the population (each component includes 2 cells of char containing the chromosomes associated to the numerator pop.indiv{i}(1) and denominator pop.indiv{i}(2)).
- fitness: vector containing the fitnesses of the individuals of the population.
- nbnodes: vector containing the numbers of nodes of the individuals of the population.
- param: cell vector containing the monomial coefficients (vector) for the individuals of the population, ordered as follows: param{i}(1) = constant term of the numerator for the $i^{th}$ individual of the population; param{i}(2:l+1) = coefficients associated to the $l$ numerator monomials for the $i^{th}$ individual of the population (apart from the constant term); param{i}(l+2:l+m+1) = coefficients associated to the $m$ denominator monomials for the $i^{th}$ individual of the population (apart from the constant term frozen to 1).
- nbreg: vector containing the number of regressors associated to any individual in the population ($l+m$).
- degmax: vector containing the max degree of the regressors associated to any individual in the population.
- sumdeg: vector containing the total degree associated to any individual in the population.

best: Index of the best individual in the final population (with respect to fitness).

fdata: Values $\left\{f(x_k)\in\mathbb{R}, k \in [1, N]\right\}$ of the approximating function $f$ corresponding to the best individual (same size as Y).

fdesc: Description of the approximating function $f$ corresponding to the best individual (fdesc.cnum(j)/cden(j) and fdesc.enum(j,:)/eden(j,:) contain the coefficient and the exponents of the $j$th monomial of the numerator/denominator used to approximate Y).

fsym: Symbolic representation of the rational function $f$ corresponding to the best individual (symbolic object).

flfr: Linear fractional representation of the rational function $f$ corresponding to the best individual (GSS object if the GSS library is installed, LFR object otherwise if the LFR toolbox is installed).
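As an aside, evaluating an fdesc-style description is straightforward. Below is a minimal C sketch (ours; only the cnum/enum and cden/eden field layout follows the description above, the routine itself is not part of the library):

#include <stddef.h>

/* Evaluate sum_j c[j] * prod_i x[i]^e[j][i] over nterms monomials in nvar
 * variables; e is stored row-major, one row of integer exponents per term. */
static double eval_monomials(size_t nterms, size_t nvar, const double *c,
                             const int *e, const double *x)
{
    double s = 0.0;
    for (size_t j = 0; j < nterms; j++) {
        double m = c[j];
        for (size_t i = 0; i < nvar; i++)
            for (int p = 0; p < e[j * nvar + i]; p++)
                m *= x[i];                       /* x_i ^ e(j,i) */
        s += m;
    }
    return s;
}

/* f(x) = P(x)/Q(x) from fdesc-style tables; TRACKER guarantees Q(x) != 0
 * on the considered domain, so no singularity check is made here. */
double eval_rational(size_t nnum, size_t nden, size_t nvar,
                     const double *cnum, const int *epow_num,
                     const double *cden, const int *epow_den, const double *x)
{
    return eval_monomials(nnum, nvar, cnum, epow_num, x)
         / eval_monomials(nden, nvar, cden, epow_den, x);
}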
See also: lsapprox, olsapprox, qpapprox, koala

Example

Drag coefficient of a generic fighter aircraft model:

load data_cx
% alfa angle converted from radians to degrees (estimation and validation data)
X1(2,:)=X1(2,:)*57.3;
Xv(2,:)=Xv(2,:)*57.3;
% approximation on a rough grid
maxdeg=8;
[pop,best,fdata,fdesc,fsym,flfr]=tracker(X1,Y1,names,maxdeg,tracker_options);

----------------------------------------------------------------------
Systems Modeling Analysis and Control (SMAC) Toolbox
APRICOT Library Version 1.0 - April 2014
Rational approximation using Genetic Programming
----------------------------------------------------------------------
Number of samples: 1196
Number of explanatory variables: 2
Creation of 4 fictitious variables
- with degrees w.r.t. explanatory variable #1: 3 5
- with degrees w.r.t. explanatory variable #2: 3 5
Maximum degree required for the approximation function: 8
Maximum degree required for each variable: 8 8
The population is comprised of 100 individuals and will be evolved over 100 generations at most
There are 6 explanatory variables, including 4 fictitious variables (added to get simpler trees)
Generation 0 --> best fitness: 0.00283915 (40 nodes)
Generation 5 --> best fitness: 0.00118514 (76 nodes)
Generation 10 --> best fitness: 0.00118290 (72 nodes)
Generation 15 --> best fitness: 0.00115280 (84 nodes)
Generation 20 --> best fitness: 0.00111282 (80 nodes)
Generation 25 --> best fitness: 0.00104232 (88 nodes)
Generation 30 --> best fitness: 0.00104232 (88 nodes)
Generation 35 --> best fitness: 0.00102940 (84 nodes)
Generation 40 --> best fitness: 0.00099021 (86 nodes)
Generation 45 --> best fitness: 0.00099021 (86 nodes)
Generation 50 --> best fitness: 0.00095285 (84 nodes)
Generation 55 --> best fitness: 0.00094642 (98 nodes)
Generation 60 --> best fitness: 0.00094455 (96 nodes)
Generation 65 --> best fitness: 0.00092366 (102 nodes)
Generation 70 --> best fitness: 0.00092366 (102 nodes)
Generation 75 --> best fitness: 0.00092279 (100 nodes)
Generation 80 --> best fitness: 0.00092279 (100 nodes)
Generation 85 --> best fitness: 0.00092257 (94 nodes)
Generation 90 --> best fitness: 0.00078144 (98 nodes)
Generation 95 --> best fitness: 0.00078144 (98 nodes)
Final size of the archive memory: 8615
Positivity check is successful, everything is OK
Fitness value of the best individual #96: 0.00078144 (98 nodes)
Symbolic expression of the rational approximation
Numerator: 0.0166278+0.00254291*Al^2+6.86357e-006*Al^4-0.000493197*Ma^4*Al^3+7.49538e-005*Ma^3*Al^3+3.73923e-005*Ma*Al^3-5.59117e-006*Ma*Al^4-6.15747*Ma^4+3.6731e-005*Ma^3*Al^4-0.000507616*Al^3-0.0793195*Ma+5.21629*Ma^5+2.02358*Ma^3-5.4279e-007*Al^5
Denominator: 1.0-16.6575*Ma^2+0.0972086*Ma^5*Al^3+0.261167*Al^2-235.275*Ma^6+0.00194204*Al^4-0.073906*Ma^4*Al^3+0.00353226*Ma*Al^3-7.47814e-005*Ma*Al^4-697.819*Ma^4-0.0344331*Al^3+231.866*Ma^3+0.141067*Al-2.48184e-005*Al^5-3.96694*Ma+765.611*Ma^5
Size of the LFR: (trial#1 --> 15 / trial#2 --> 20 / trial#3 --> 15) ==> Minimum size = 15
CPU time for the tracker call = 854 seconds

% Plot 3D surfaces (estimation results on a rough grid and validation results on a fine grid)
plosurfs(pop,best,X1,Y1,names,Xv,Yv);

Individual #96 (98 nodes/28 terms/max degree 8/total degree 116)
Validation errors: Global=3.384660e-003 - RMS=8.539880e-004 - Max Local=4.596582e-003

% Plot Pareto fronts
plofronts(pop,best);

% User loop to analyse and plot any individual (Pareto optimal solution)
loop=1;
while loop
  indiv = input('\nIndividual to be displayed (press RETURN to terminate) ?');
  if isempty(indiv)
    loop=0;
  else
    plosurfs(pop,indiv,X1,Y1,names,Xv,Yv);
    [fdata,fdesc,fsym,flfr]=pop2lfr(pop,indiv,names,X1);
  end
end

Individual #32 (66 nodes/21 terms/max degree 8/total degree 90)
Validation errors: Global=5.193755e-003 - RMS=1.057876e-003 - Max Local=6.213335e-003
Symbolic expression of the rational approximation
Numerator: 0.0136579-2.66819e-010*Al^6+1.0892e-006*Al^4-0.0020092*Ma^4*Al^3+0.000711613*Ma^3*Al^3+0.00136296*Ma^5*Al^3-0.000451564*Ma+0.202329*Ma^5-0.0670935*Ma^3-0.000739199*Al-5.0639e-005*Al^3-3.52304e-008*Al^5
Denominator: 1.0+0.0217883*Al^2+0.000157657*Al^4+0.00771755*Ma^5*Al^3-0.00226182*Ma^3*Al^3-4.24916e-006*Ma*Al^4+11.9683*Ma^4-0.00272452*Al^3-0.368659*Ma-6.21157*Ma^3-2.14231e-006*Al^5
Size of the LFR: (trial#1 --> 16 / trial#2 --> 18 / trial#3 --> 16) ==> Minimum size = 16

$\Rightarrow$ the science of LFR sizes is not an exact one, ...and is often disappointing !!!

What is Genetic Programming ?

Genetic Programming (GP) is part of the evolutionary algorithms family, in the same way as GAs, evolutionary programming, etc. It uses the same principles, inspired by those of natural evolution, to evolve a population of randomly created individuals, improving its behavior progressively until a satisfactory solution is found. However, unlike GAs, it is not based on a binary coding of information but uses a structured representation instead, as syntax trees. These parse trees appear better suited to solving structural or symbolic optimization problems since they can have different sizes and shapes. The alphabet used to create these models is also flexible enough to cope with different types of problems, and so they can be used to encode mathematical equations as well as behavior models, ... and even computer programs. First works date back to the early sixties, but GP was really implemented and brought up to date only in the early 90s by John Koza, thanks also to an increase in computing power. He was thus able to prove the interest of GP in many application fields, and laid the foundations of a standard paradigm which has not evolved much since then [3].

The iterative process, which breeds a population of programs, transforms them generation after generation by applying genetic mechanisms like those of Darwinian evolution: reproduction, mutation, crossover, but also gene duplication or deletion. Unlike GAs, they are applied to the hierarchically structured trees of the individuals, which comprise a set of nodes and links. These elements fall into two categories: the set F of internal nodes called functions or operators, and the set T of tree's leaves called terminals. An example is given below, corresponding to the mathematical function $f(x_1,x_2)=x_1(x_2-1)/(x_1+x_2)/\sqrt{x_2}$. All types of functions are acceptable: from elementary ones like arithmetical operators $(+,-,*,/)$ or mathematical operators $(\sqrt,exp,...)$, to logical, conditional (tests), or even more complex (e.g. user-defined) ones. The terminals correspond to the function arguments but can also include some of their internal parameters or predefined constants. Incidentally, the content of T is a central issue for the problem of a joint structural/parameter optimization. A priori, it requires discovering the best functional structure permitting to fit the data by choosing and arranging relevant operators from F, but also ruling the coefficients involved in this functional structure by adapting the numerical values of some parameters included in T.
That is called symbolic regression, extending the usual notion of numerical regression. Of course, to be able to discover the right parameter values, an extra mechanism must be added to the GP algorithm [3]. It relies on the generalization of (predefined) constants, by introducing ephemeral random constants (constant creation). Accordingly, the set T includes a new kind of terminal denoted by E which results, when created by GP, in the insertion of a random number in the corresponding tree's branch. The discovery of the right parameter value could then rely on applying evolution mechanisms to the terminals E. Though this constant creation looks consistent with GP formalism, it would not be a very efficient process in practice. The tuning of a single parameter would require mobilizing many subtrees, each of them including many functions and constants! That's why, when dealing with LP models, it is wiser to simplify that general GP formulation, which proves to be really interesting for non-LP models. The regression parameters are therefore taken away from T, which includes only the $n$ explanatory variables $x^i$, and possibly some predefined constants. In the simplified case of LP models for example, the population individuals are then mobilized only to represent the $m$ regressors $r_j(x^i)$ in (1):

$(1)\hspace{3cm}y_k=f(x_k)=\displaystyle\sum_{j=1}^mw_j\ r_j(x_k)$

At every GP iteration, the functions $r_j$ (as well as their number $m$) are derived from the trees corresponding to any individual by analyzing the tree structure from its root. The numerical value of the parameters $w_j$ can then be adapted independently of GP, by applying any minimization technique to the squared error. From the advanced LS methods described for instance in [7], we can imagine that coupling GP with an OLS algorithm makes it possible to solve the parametric optimization of the $w_j$ very efficiently [4].

For LP models, GP makes it possible to extend polynomial modeling by using simple mathematical operators as regressors, not necessarily restricted to monomials only. However, by choosing F as $\{+,*\}$ and T as $\{x^0=1,x^1,...,x^n\}$, GP will produce pure polynomial models. The model complexity can also be controlled by penalizing some internal GP parameters, like the tree's depth or number of branches/leaves, or by favoring the selection of the simplest operators to the detriment of more complex ones. Practically, this can be easily achieved thanks to the fitness function which is used to drive the GP mechanisms of evolution. Similarly to what happens in ridge regression, this fitness function can be split into two parts by adding a penalty component to favor the simplest models and prevent overfitting. E.g., to avoid the creation of too many non terminal nodes, a weighting can be introduced in the fitness against the number of nodes.

A standard GP algorithm can be summarized by the following loop of executional steps [3]:

• (S1) Creation of an initial population of M programs by random combination of T and F elements. These individuals are built by using a special routine for subtree creation, in order to get a pool of trees with various depths, sizes and shapes. A maximum depth is generally specified for the trees, as well as a maximum number of nodes, to avoid the creation of unnecessarily complex programs.
• (S2) Evaluation of every program in the population, to get a relative or absolute measure of their relevance.
This evaluation makes use of the user-defined fitness function, which can gather different types of assessment (numerical or logical) depending on the optimization task (multiobjective or constrained). In our case, it involves the computation of the sum of squared errors over the data base, including regularization terms.
• (S3) Creation of a new population (the next generation) thanks to mechanisms implementing the principles of natural evolution. They are applied to a series of individuals, randomly selected with a probability usually based on their fitness. During this process, the best individuals are favored but the best-so-far is not necessarily selected nor the worst-so-far removed from the population. An elitist strategy is generally used to handle the replacement of old individuals by new ones, in terms of a parameter setting the generation gap (e.g. a value of 90% means that only 10% of the population passes down its genetic inheritance to the next generation). The selection operators comprise:
  • Mutation (asexual operation) with a low probability of the order of a few percent (see figure below). It consists in randomly choosing a mutation point in the tree, and then replacing the subtree issued from this point by a new structure randomly created by the same process as in (S1).
  • Crossover (sexual recombination) with a high occurrence probability, greater than 80%. During this operation, two parent individuals are selected and two crossover points are randomly chosen (one for each tree). The two subtrees rooted at these points are then exchanged to produce two offspring individuals, thus inheriting partly from each of their parents (see figure below).
  • Reproduction (cloning) which simply copies the selected individual to the new population.
  • Architecture-altering operations (gene duplication and deletion), each of them being applied sparingly with a low probability, less than 1%. They are motivated by the fact that the size and shape of the solution are sometimes a major part of the problem. This is especially true in our case, since the number and type of regressors' kernels will condition the tree's depth and shaping. Consequently, GP will use these operations to automatically modify the architecture of population trees, increasing their diversity with the hope that architectures well-suited to the problem will multiply and prosper under the selective pressure of the competition. These operations include duplication and creation/deletion of branches or leaves, i.e. terminal nodes and function arguments.
• (S4) Go back to (S2) following an iterative process, until a termination criterion is satisfied or a maximum number of generations N is reached. The best-so-far program produced during the run is retained as the result, and corresponds to a solution or approximate solution to the problem if the run is successful (convergence). Suboptimal results can also be favored in terms of a trade-off between performance and complexity.

As regards the selection strategy of individuals from which the next generation will inherit, the applied techniques are similar to those used by other evolutionary methods. Several types of selection are therefore available:
• Uniform sampling $\rightarrow$ the selection probability obeys a uniform distribution and all individuals have the same chance to be selected regardless of their fitness.
• Roulette wheel $\rightarrow$ the selection probability is proportional to the fitness of the individuals.
The best way to picture the process is a casino roulette where the better the individuals, the larger the size of their sectors. This strategy tends to favor the good elements, which will be more easily selected, but can rapidly weaken the population in case of big gaps in performance, i.e. between the best individual and the following ones.
• Fitness ranking $\rightarrow$ this is a variant of the roulette wheel, aiming at increasing the diversity of the selected individuals in case of a heterogeneous distribution of the fitnesses (the drawback being a slower convergence). To do that, the population is first sorted by fitness, the individuals being ranked from 1 (the best) to $M$ (the worst). They are then selected following a roulette process, but with a probability proportional to their rank and no longer to their fitness.
• Tournament $\rightarrow$ $k$ individuals are picked at random and the best one is selected. This operation is repeated as many times as necessary to get the required number of individuals. Thus, the choice of $k$ makes it possible to give more or less chance to the worst individuals: a high value will penalize them heavily whereas, if $k$ is low, they may still be selected. This parameter can also be varied during the iterations to control the switch between exploration (low $k$) and exploitation stages (high $k$).

References

[1] A. Bucharles et al., "An overview of relevant issues for aircraft model identification", AerospaceLab, Issue 4, http://www.aerospacelab-journal.org/al4, 2012.
[2] G. Hardier, C. Roos and C. Seren, "Creating sparse rational approximations for linear fractional representations using genetic programming", in Proceedings of the 3rd IFAC International Conference on Intelligent Control and Automation Science, Chengdu, China, pp. 232-237, September 2013.
[3] J.R. Koza and R. Poli, "A Genetic Programming Tutorial", in Burke ed., Introductory Tutorials in Optimization, Search and Decision Support, 2003.
[4] J. Madar, J. Abonyi and F. Szeifert, "Genetic programming for the identification of nonlinear input-output models", Industrial and Engineering Chemistry Research, 44 (9), pp. 3178-3186, 2005.
[5] C. Sanathanan and J. Koerner, "Transfer function synthesis as a ratio of two complex polynomials", IEEE Transactions on Automatic Control, 8 (1), pp. 56-58, 1963.
[6] D.P. Searson, D.E. Leahy and M.J. Willis, "GPTIPS: an open source GP toolbox for multigene symbolic regression", International Multiconference of Engineers and Computer Scientists, Hong Kong, China, 2010.
[7] C. Seren, G. Hardier and P. Ezerzere, "On-line Estimation of Longitudinal Flight Parameters", SAE AeroTech Congress and Exhibition, Toulouse, France, 2011.
2023-02-04 19:31:54
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6525370478630066, "perplexity": 1604.9131058928963}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-06/segments/1674764500151.93/warc/CC-MAIN-20230204173912-20230204203912-00088.warc.gz"}
https://www.rdocumentation.org/packages/GmAMisc/versions/1.0.0/topics/modelvalid
# modelvalid

##### R function for binary Logistic Regression internal validation

The function allows internal validation of a binary Logistic Regression model to be performed, implementing most of the procedure described in: Arboretti Giancristofaro R, Salmaso L. "Model performance analysis and model validation in logistic regression". Statistica 2003(63): 375–396.

Keywords: modelvalid

##### Usage

modelvalid(data, fit, B = 200, g = 10, oneplot = TRUE, excludeInterc = FALSE)

##### Arguments

data: Dataframe containing the dataset (the Dependent Variable must be stored in the first column to the left).
fit: Object returned from the glm() function.
B: Desired number of iterations (200 by default).
g: Number of groups to be used for the Hosmer-Lemeshow test (10 by default).
oneplot: TRUE (default) if the user wants the charts returned in a single visualization.
excludeInterc: If set to TRUE, the chart showing the boxplots of the parameters' distribution across the selected iterations will have y-axis limits corresponding to the min and max of the parameters' values; this allows a better display of the boxplots of the model parameters when they would otherwise appear too squeezed due to comparatively higher/lower values of the intercept. FALSE is default.

##### Details

The procedure consists of the following steps:
(1) the whole dataset is split into two random parts, a fitting (75 percent) and a validation (25 percent) portion;
(2) the model is fitted on the fitting portion (i.e., its coefficients are computed considering only the observations in that portion) and its performance is evaluated on both the fitting and the validation portion, using AUC as the performance measure;
(3) the model's estimated coefficients, p-values, and the p-value of the Hosmer and Lemeshow test are stored;
(4) steps 1-3 are repeated B times, eventually producing a fitting and a validation distribution of the AUC values and of the HL test p-values, as well as a fitting distribution of the coefficients and of the associated p-values.

The AUC fitting distribution provides an estimate of the performance of the model in the population of all the theoretical fitting samples; the AUC validation distribution represents an estimate of the model's performance on new and independent data.

##### Value

The function returns:
- a chart with boxplots representing the fitting distribution of the estimated model's coefficients; coefficients' labels are flagged with an asterisk when the proportion of p-values smaller than 0.05 across the selected iterations is at least 95 percent;
- a chart with boxplots representing the fitting and the validation distribution of the AUC value across the selected iterations; for an example of the interpretation of the chart, see the aforementioned article, especially pages 390-91;
- a chart of the levels of the dependent variable plotted against the predicted probabilities (if the model has a high discriminatory power, the two stripes of points will tend to be well separated, i.e. the positive outcome of the dependent variable will tend to cluster around high values of the predicted probability, while the opposite will hold true for the negative outcome of the dependent variable);
- a list containing:
  - $overall.model.significance: statistics related to the overall model p-value and to its distribution across the selected iterations
  - $parameters.stability: statistics related to the stability of the estimated coefficients across the selected iterations
  - $p.values.stability: statistics related to the stability of the estimated p-values across the selected iterations
  - $AUCstatistics: statistics about the fitting and validation AUC distributions
  - $Hosmer-Lemeshow statistics: statistics about the fitting and validation distributions of the HL test p-values

As for the abovementioned statistics:
- full: statistic estimated on the full dataset;
- median: median of the statistic across the selected iterations;
- QRNG: interquartile range across the selected iterations;
- QRNGoverMedian: ratio between the QRNG and the median, expressed as a percentage;
- min: minimum of the statistic across the selected iterations;
- max: maximum of the statistic across the selected iterations;
- percent_smaller_0.05 (only for $overall.model.significance, $p.values.stability, and $Hosmer-Lemeshow statistics): proportion of times in which the p-values are smaller than 0.05; please notice that for the overall model significance and for the p-values stability it is desirable that the percentage is at least 95 percent, whereas for the HL test p-values it is indeed desirable that the proportion is not larger than 5 percent (in line with the interpretation of the test p-value, which has to be NOT significant in order to hint at a good fit);
- significant (only for $p.values.stability): asterisk indicating that the p-values of the corresponding coefficient resulted smaller than 0.05 in at least 95 percent of the iterations.

See also: logregr, aucadj

##### Examples

# NOT RUN {
data(log_regr_data)
# fit a logistic regression model, storing the results into an object called 'model'
model <- glm(admit ~ gre + gpa + rank, data = log_regr_data, family = "binomial")
# run the function, using 100 iterations, and store the result in the 'res' object
res <- modelvalid(data=log_regr_data, fit=model, B=100)
# }

Documentation reproduced from package GmAMisc, version 1.0.0, License: GPL (>= 2)
2021-01-17 10:24:40
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.49216750264167786, "perplexity": 2029.1257179809775}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-04/segments/1610703511903.11/warc/CC-MAIN-20210117081748-20210117111748-00080.warc.gz"}
https://ccse.jaea.go.jp/software/PARCEL/1.1/manual/PARCEL_manual_mathjax_eng.html
# 1 Krylov subspace methods This chapter describes the iterative methods contained in this library. The following notation is used, $$Ax = b$$ represents the linear system which will be solved, with $$A$$ being an n$$\times$$n sparse matrix and $$x_0$$ representing the initial vector. $$K$$ represents an appropriate preconditioner. ## 1.1 Preconditioned Conjugate Gradient Method [1] The Conjugate Gradient (CG) method is an iterative Krylov subspace algorithm for solving symmetric matrices. • Algorithm description 1. $$\mathbf{{Compute\ }}\mathbf{r}^{\mathbf{0}}\mathbf{= b - A}\mathbf{x}^{\mathbf{0}}\mathbf{{\ for\ some\ initial\ guess\ }}\mathbf{x}^{\mathbf{- 1}}\mathbf{=}\mathbf{x}^{\mathbf{0}}\mathbf{\ }$$ 2. $$\mathbf{p}^{\mathbf{- 1}}\mathbf{= 0}$$ 3. $$\mathbf{\alpha}_{\mathbf{- 1}}\mathbf{= 0\ }\mathbf{\ }$$ 4. $$\mathbf{\beta}_{\mathbf{- 1}}\mathbf{= 0}$$ 5. $$\mathbf{{solve\ s\ from\ }}\mathbf{K}\mathbf{s = \ }\mathbf{r}^{\mathbf{0}}$$ 6. $$\mathbf{\rho}_{\mathbf{0}}\mathbf{=}\left\langle \mathbf{s,}\mathbf{r}^{\mathbf{0}} \right\rangle$$ 7. $$\mathbf{for\ i = 0,1,2,3}\mathbf{\ldots}$$ 8.   $$\mathbf{p}^{\mathbf{i}}\mathbf{= s +}\mathbf{\beta}_{\mathbf{i - 1}}\mathbf{p}^{\mathbf{i - 1}}$$ 9.   $$\mathbf{q}^{\mathbf{i}}\mathbf{= A}\mathbf{p}^{\mathbf{i}}$$ 10.   $$\mathbf{\gamma =}\left\langle \mathbf{p}^{\mathbf{i}}\mathbf{,}\mathbf{q}^{\mathbf{i}} \right\rangle$$ 11.   $$\mathbf{x}^{\mathbf{i}}\mathbf{=}\mathbf{x}^{\mathbf{i - 1}}\mathbf{+}\mathbf{\alpha}_{\mathbf{i - 1}}\mathbf{p}^{\mathbf{i - 1}}$$ 12.   $$\mathbf{\alpha}_{\mathbf{i}}\mathbf{=}\mathbf{\rho}_{\mathbf{i}}\mathbf{\ /\ \gamma}$$ 13.   $$\mathbf{r}^{\mathbf{i + 1}}\mathbf{=}\mathbf{r}^{\mathbf{i}}\mathbf{-}\mathbf{\alpha}_{\mathbf{i}}\mathbf{q}^{\mathbf{i}}$$ 14.   $$\mathbf{{solve\ s\ from\ }}\mathbf{K}\mathbf{s = \ }\mathbf{r}^{\mathbf{i + 1}}$$ 15.   $$\mathbf{\rho}_{\mathbf{i + 1}}\mathbf{=}\left\langle \mathbf{s,}\mathbf{r}^{\mathbf{i + 1}} \right\rangle$$ 16.   $$\mathbf{{if\ }}\left\| \mathbf{r}^{\mathbf{i + 1}} \right\|\mathbf{\ /\ }\left\| \mathbf{b} \right\|\mathbf{{\ small\ enough\ the}}\mathbf{n}$$ 17.     $$\mathbf{x}^{\mathbf{i}}\mathbf{=}\mathbf{x}^{\mathbf{i - 1}}\mathbf{+}\mathbf{\alpha}_{\mathbf{i - 1}}\mathbf{p}^{\mathbf{i - 1}}$$ 18.     $$\mathbf{{qui}}\mathbf{t}$$ 19.   $$\mathbf{{endi}}\mathbf{f}$$ 20.   $$\mathbf{\beta}_{\mathbf{i}}\mathbf{=}\mathbf{\rho}_{\mathbf{i + 1}}\mathbf{\ /\ }\mathbf{\rho}_{\mathbf{i}}$$ 21. $$\mathbf{{\ en}}\mathbf{d}$$ ## 1.2 Preconditioned Biconjugate Gradient Stabilized Method[1] The Biconjugate Gradient Stabilized (Bi-CGSTAB) method is an iterative Krylov subspace algorithm for solving nonsymmetric matrices. • Algorithm description 1. $$\mathbf{{Compute\ }}\mathbf{r}_{\mathbf{0}}\mathbf{= b - A}\mathbf{x}_{\mathbf{0}}\mathbf{{\ for\ some\ initial\ guess\ }}\mathbf{x}_{\mathbf{- 1}}\mathbf{=}\mathbf{x}_{\mathbf{0}}$$ 2. $$\mathbf{p}_{\mathbf{0}}\mathbf{=}\mathbf{r}_{\mathbf{0}}$$ 3. $$\mathbf{c}_{\mathbf{1}}\mathbf{=}\left\langle \mathbf{r}_{\mathbf{0}}\mathbf{,}\mathbf{r}_{\mathbf{0}} \right\rangle$$ 4. $$\mathbf{for\ i = 0,1,2,3}\mathbf{\ldots}$$ 5.   $$\mathbf{{solve\ }}\widehat{\mathbf{p}}\mathbf{{\ from\ K}}\widehat{\mathbf{p}}\mathbf{= \ }\mathbf{p}_{\mathbf{i}}$$ 6.   $$\mathbf{q = A}\widehat{\mathbf{p}}$$ 7.   $$\mathbf{c}_{\mathbf{2}}\mathbf{=}\left\langle \mathbf{r}_{\mathbf{0}}\mathbf{,q} \right\rangle$$ 8.   $$\mathbf{\alpha =}\mathbf{c}_{\mathbf{1}}\mathbf{\ /\ }\mathbf{c}_{\mathbf{2}}$$ 9.   
$$\mathbf{e =}\mathbf{r}_{\mathbf{i}}\mathbf{- \alpha q}$$ 10.   $$\mathbf{{solve\ }}\widehat{\mathbf{e}}\mathbf{{\ from\ K}}\widehat{\mathbf{e}}\mathbf{= e}$$ 11.   $$\mathbf{v = A}\widehat{\mathbf{e}}$$ 12.   $$\mathbf{c}_{\mathbf{3}}\mathbf{=}\left\langle \mathbf{e,v} \right\rangle\mathbf{\ /\ }\left\langle \mathbf{v,v} \right\rangle$$ 13.   $$\mathbf{x}_{\mathbf{i + 1}}\mathbf{=}\mathbf{x}_{\mathbf{i}}\mathbf{+ \alpha}\widehat{\mathbf{p}}\mathbf{+}\mathbf{c}_{\mathbf{3}}\widehat{\mathbf{e}}$$ 14.   $$\mathbf{r}_{\mathbf{i + 1}}\mathbf{= e -}\mathbf{c}_{\mathbf{3}}\mathbf{v}$$ 15.   $$\mathbf{c}_{\mathbf{1}}\mathbf{=}\left\langle \mathbf{r}_{\mathbf{0}}\mathbf{,}\mathbf{r}_{\mathbf{i + 1}} \right\rangle$$ 16.   $$\mathbf{{if\ }}\left\| \mathbf{r}_{\mathbf{i + 1}} \right\|\mathbf{\ /\ }\left\| \mathbf{b} \right\|\mathbf{{\ small\ enough\ }}\mathbf{{the}}\mathbf{n}$$ 17.     $$\mathbf{{quit}}$$ 18.   $$\mathbf{{endif}}$$ 19.   $$\mathbf{\beta =}\mathbf{c}_{\mathbf{1}}\mathbf{\ /\ }\left( \mathbf{c}_{\mathbf{2}}\mathbf{c}_{\mathbf{3}} \right)$$ 20.   $$\mathbf{p}_{\mathbf{i + 1}}\mathbf{=}\mathbf{r}_{\mathbf{i + 1}}\mathbf{+ \beta}\left( \mathbf{p}_{\mathbf{i}}\mathbf{-}\mathbf{c}_{\mathbf{3}}\mathbf{q} \right)$$ 21. $$\mathbf{{end}}$$ ## 1.3 Preconditioned Generalized Minimum Residual Method [1] The Generalized Minimum Residual (GMRES(m)) method is an iterative Krylov subspace algorithm for solving nonsymmetric matrices. GMRES(m) is restarted periodically after $$m$$ iterations if it did not converge within $$m$$ iterations, the solution obtained until $$m$$ iterations will be used as an input for the new restart cycle. PARCEL provides variants of GMRES with the Classical Gram-Schmidt (sometimes referred to as standard Gram-Schmidt) and Modified Gram-Schmidt (MGS) orthogonalization methods. • Algorithm description $$\mathbf{H}_{\mathbf{n}}$$ is an upper triangular matrix with $$\mathbf{h}_{\mathbf{j,k}}$$ representing its elements, $$\mathbf{\ }\mathbf{e}_{\mathbf{i}}$$ represents a vector consisting of the first i elements of the vector $$\mathbf{e}$$. 1. $$\mathbf{for\ j = 0,1,2,3}\mathbf{\ldots}$$ 2.   $$\mathbf{r = b - A}\mathbf{x}_{\mathbf{0}}$$ 3.   $$\mathbf{v}_{\mathbf{1}}\mathbf{= - r\ /\ }\left\| \mathbf{r} \right\|$$ 4.   $$\mathbf{e =}\left( \mathbf{-}\left\| \mathbf{r} \right\|\mathbf{,0,\ldots,0} \right)^{\mathbf{T}}$$ 5.   $$\mathbf{n = m}$$ 6.   $$\mathbf{for\ i = 1,2,3}\mathbf{\ldots}\mathbf{m}$$ 7.      $$\mathbf{{solve\ }}{\widehat{\mathbf{v}}}_{\mathbf{i}}\mathbf{{\ from\ K}}{\widehat{\mathbf{v}}}_{\mathbf{i}}\mathbf{=}\mathbf{v}_{\mathbf{i}}$$ 8.      $$\mathbf{\omega = A}{\widehat{\mathbf{v}}}_{\mathbf{i}}$$ 9.      $$\mathbf{for\ k = 1,2,3}\mathbf{\ldots}\mathbf{i}$$ 10.       $$\mathbf{h}_{\mathbf{k,i}}\mathbf{=}\left\langle \mathbf{\omega,}\mathbf{v}_{\mathbf{k}} \right\rangle$$ 11.       $$\mathbf{\omega = \omega -}\mathbf{h}_{\mathbf{k,i}}\mathbf{v}_{\mathbf{k}}$$ 12.     $$\mathbf{{end}}$$ 13.     $$\mathbf{h}_{\mathbf{i + 1,i}}\mathbf{=}\left\| \mathbf{\omega} \right\|$$ 14.     $$\mathbf{v}_{\mathbf{i + 1}}\mathbf{= \omega/}\left\| \mathbf{\omega} \right\|$$ 15.     $$\mathbf{for\ k = 1,2,3}\mathbf{\ldots}\mathbf{i - 1}$$ 16.       
16. $$\begin{pmatrix} h_{k,i} \\ h_{k+1,i} \end{pmatrix} = \begin{pmatrix} c_{k} & -s_{k} \\ s_{k} & c_{k} \end{pmatrix}\begin{pmatrix} h_{k,i} \\ h_{k+1,i} \end{pmatrix}$$
17. end
18. $$c_{i} = \frac{1}{\sqrt{1 + \left( \frac{h_{i+1,i}}{h_{i,i}} \right)^{2}}}$$
19. $$s_{i} = -\frac{h_{i+1,i}}{h_{i,i}}\frac{1}{\sqrt{1 + \left( \frac{h_{i+1,i}}{h_{i,i}} \right)^{2}}}$$
20. $$\begin{pmatrix} e_{i} \\ e_{i+1} \end{pmatrix} = \begin{pmatrix} c_{i} & -s_{i} \\ s_{i} & c_{i} \end{pmatrix}\begin{pmatrix} e_{i} \\ e_{i+1} \end{pmatrix}$$
21. $$h_{i,i} = c_{i} h_{i,i} - s_{i} h_{i+1,i}$$
22. $$h_{i+1,i} = 0$$
23. if $$\left\| e_{i+1} \right\| \, / \, \left\| b \right\|$$ small enough then
24. $$n = i$$
25. exit
26. endif
27. end
28. $$y_{n} = H_{n}^{-1} e_{n}$$
29. solve $$\widehat{x}$$ from $$K\widehat{x} = \sum_{k=1}^{n} y_{k} v_{k}$$
30. $$x_{n} = x_{0} + \widehat{x}$$
31. $$x_{0} = x_{n}$$
32. end

## 1.4 Preconditioned Communication Avoiding Generalized Minimum Residual Method [2,3]

The Communication Avoiding Generalized Minimum Residual (CA-GMRES(s,t)) method is an iterative Krylov subspace algorithm for solving linear systems with nonsymmetric matrices. One iteration of the CA-GMRES method is equivalent to the calculation done in $$s$$ iterations of the GMRES(m) method. CA-GMRES(s,t) is restarted periodically after $$t$$ outer iterations if it has not converged; the solution obtained so far is used as the initial guess of the next restart cycle. CA-GMRES(s,t) is equivalent to GMRES(m) if $$s$$ and $$t$$ are chosen so that $$m = t \times s$$, and its convergence property is the same as that of GMRES(m) in exact arithmetic. In addition, the number of global collective communication calls is reduced by a communication avoiding QR factorization. As CA-GMRES(s,t) produces $$s$$ basis vectors at once, the linear independence of the basis vectors may degrade because of round-off errors when $$s$$ is too large, leading to a worse convergence property than GMRES(m). In order to improve the orthogonality of the basis vectors, the PARCEL implementation of CA-GMRES(s,t) provides an option to use the Newton basis in addition to the monomial basis.
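For orientation (this characterization is standard and not specific to PARCEL): with Ritz values $$\theta_{1}, \ldots, \theta_{s}$$ estimated from the Hessenberg matrix, the two basis choices generate the same Krylov subspace as

$$\underbrace{\left\lbrack v,\ Av,\ A^{2}v,\ \ldots,\ A^{s}v \right\rbrack}_{\text{monomial basis}}, \qquad \underbrace{\left\lbrack v,\ (A - \theta_{1}I)v,\ (A - \theta_{2}I)(A - \theta_{1}I)v,\ \ldots \right\rbrack}_{\text{Newton basis}}.$$

The monomial basis vectors align more and more with the dominant eigendirection as $$s$$ grows, which is why their linear independence degrades; the Newton shifts counteract this.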
• Algorithm description

$$e$$ represents a unit vector, $${\widetilde{\rho}}_{k} = \left( R_{k} \right)_{s,s}$$ represents the element in the $$s$$-th row and $$s$$-th column of the matrix $$R_{k}$$, and $$E$$ represents the eigenvalues obtained from the Hessenberg matrix generated by the iteration process.

1. $$B_{k}^{\sim} = \lbrack e_{2}, e_{3}, \ldots, e_{s+1} \rbrack$$
2. for $$j = 0,1,2,3\ldots$$
3. $$r = b - Ax_{0}$$
4. $$v_{1} = r \, / \, \left\| r \right\|$$
5. $$\zeta = \left( \left\| r \right\|, 0, \ldots, 0 \right)^{T}$$
6. for $$k = 0,1\ldots,t-1$$
7. Fix basis conversion matrix $$\lbrack B_{k}^{\sim}, E \rbrack$$
8. Compute basis vectors $$\lbrack s, {\acute{V}}_{k}^{\sim}, B_{k}^{\sim} \rbrack$$
9. if (k.eq.0)
10. QR decomposition $$V_{0}^{\sim} = Q_{0}^{\sim} R_{0}^{\sim}$$
11. $$\mathfrak{Q}_{0}^{\sim} = Q_{0}^{\sim}$$
12. $$\mathfrak{H}_{0}^{\sim} = R_{0}^{\sim} B_{0}^{\sim} R_{0}^{-1}$$
13. $$\mathcal{H}_{0} = \mathfrak{H}_{0}^{\sim}$$
14. for $$o = 1,2\ldots,s$$
15. Givens rotation $$\lbrack o, \mathcal{H}_{0}, \zeta \rbrack$$
16. if $$\left\| \zeta_{o+1} \right\| \, / \, \left\| b \right\|$$ small enough then
17. update solution vector $$\lbrack s, k, o, {\acute{V}}_{0}^{\sim}, {\acute{Q}}_{0}^{\sim}, R_{0}, \mathcal{H}_{0}, \zeta, x_{0} \rbrack$$
18. quit
19. endif
20. else
21. $${\acute{\mathfrak{R}}}_{k-1,k}^{\sim} = \left( \mathfrak{Q}_{k-1}^{\sim} \right)^{H}{\acute{V}}_{k}^{\sim}$$
22. $${\acute{V}}_{k}^{\sim} = {\acute{V}}_{k}^{\sim} - \mathfrak{Q}_{k-1}^{\sim}{\acute{\mathfrak{R}}}_{k-1,k}^{\sim}$$
23. QR decomposition $${\acute{V}}_{k}^{\sim} = {\acute{Q}}_{k}^{\sim}{\acute{R}}_{k}^{\sim}$$
24. $$R_{k}^{\sim} = \begin{pmatrix} R_{k} & z_{k} \\ 0_{1,k} & \rho \end{pmatrix}$$
25. $$\mathfrak{H}_{k-1,k}^{\sim} = -\mathfrak{H}_{k-1}\mathfrak{R}_{k-1,k} R_{k}^{-1} + \mathfrak{R}_{k-1,k}^{\sim} B_{k}^{\sim} R_{k}^{-1}$$
26. $$H_{k} = R_{k} B_{k} R_{k}^{-1} + {\widetilde{\rho}}_{k}^{-1} b_{k} z_{k} e_{s}^{T} - h_{k-1} e_{1} e_{s(k-1)}^{T}\mathfrak{R}_{k-1,k} R_{k}^{-1}$$
27. $$h_{k} = {\widetilde{\rho}}_{k}^{-1}\rho_{k} b_{k}$$
28. $$\mathfrak{H}_{k}^{\sim} = \begin{pmatrix} \mathfrak{H}_{k-1} & \mathfrak{H}_{k-1,k}^{\sim} \\ \begin{matrix} h_{k-1} e_{1} e_{s(k-1)}^{T} \\ 0_{1,s(k-1)} \end{matrix} & \begin{matrix} H_{k} \\ h_{k} e_{s}^{T} \end{matrix} \end{pmatrix}$$
29. $$\mathcal{H}_{k} = \begin{pmatrix} \mathfrak{H}_{k-1,k}^{\sim} \\ H_{k} \\ h_{k} e_{s}^{T} \end{pmatrix}$$
30. for $$o = 1+sk, \ldots, s(k+1)$$
31. Givens rotation $$\lbrack o, \mathcal{H}_{k}, \zeta \rbrack$$
32. if $$\left\| \zeta_{o+1} \right\| \, / \, \left\| b \right\|$$ small enough then
33. update solution vector $$\lbrack s, k, o, {\acute{V}}_{k}^{\sim}, \mathfrak{Q}_{k}^{\sim}, {\acute{R}}_{k}^{\sim}, \mathcal{H}_{k}, \zeta, x_{0} \rbrack$$
34. quit
35. endif
36. end
37. endif
38. end
39. update solution vector $$\lbrack s, t-1, st, {\acute{V}}_{t-1}^{\sim}, \mathfrak{Q}_{t-1}^{\sim}, {\acute{R}}_{t-1}^{\sim}, \mathcal{H}_{t-1}, \zeta, x_{0} \rbrack$$
40. end

• Fix basis conversion matrix $$\lbrack s, B, E \rbrack$$

1. if compute Newton basis
2. $$i = 0$$
3. while ($$i \leq s-1$$)
4. if ($$i$$.eq.$$s-1$$) then
5. $$B_{i,i} = \mathrm{Re}\lbrack E_{i} \rbrack$$
6. else
7. if ($$\mathrm{Im}\lbrack E_{i} \rbrack$$.eq.0) then
8. $$B_{i,i} = \mathrm{Re}\lbrack E_{i} \rbrack$$
9. else
10. $$B_{i,i} = \mathrm{Re}\lbrack E_{i} \rbrack$$
11. $$B_{i+1,i+1} = \mathrm{Re}\lbrack E_{i} \rbrack$$
12. $$B_{i,i+1} = -\left( \mathrm{Im}\lbrack E_{i} \rbrack \right)^{2}$$
13. $$i = i + 1$$
14. endif
15. endif
16. $$i = i + 1$$
17. end
18. end

• Compute basis vectors $$\lbrack s, v, B \rbrack$$

The elements of the matrix $$B$$ are represented as $$b_{k,i}$$.

1. for $$k = 1,2,3\ldots s$$
2. solve $$\widehat{v}_{k-1}$$ from $$K\widehat{v}_{k-1} = v_{k-1}$$
3. if (k.ne.1) then
4. $$\alpha = b_{k-1,k-1}$$
5. $$\beta = b_{k-2,k-1}$$
6. $$v_{k} = A\widehat{v}_{k-1} - \alpha v_{k-1} + \beta v_{k-2}$$
7. else
8. $$\alpha = b_{k-1,k-1}$$
9. $$v_{k} = A\widehat{v}_{k-1} - \alpha v_{k-1}$$
10. endif
11. end

• Givens rotation $$\lbrack i, \mathcal{H}, \zeta \rbrack$$

The elements of the matrix $$\mathcal{H}$$ are represented as $$h_{k,i}$$.

1. for $$k = 1,2,3\ldots i-1$$
2. $$\begin{pmatrix} h_{k,i} \\ h_{k+1,i} \end{pmatrix} = \begin{pmatrix} c_{k} & -s_{k} \\ s_{k} & c_{k} \end{pmatrix}\begin{pmatrix} h_{k,i} \\ h_{k+1,i} \end{pmatrix}$$
3. end
4. $$c_{i} = \frac{1}{\sqrt{1 + \left( \frac{h_{i+1,i}}{h_{i,i}} \right)^{2}}}$$
5. $$s_{i} = -\frac{h_{i+1,i}}{h_{i,i}}\frac{1}{\sqrt{1 + \left( \frac{h_{i+1,i}}{h_{i,i}} \right)^{2}}}$$
6. $$\begin{pmatrix} \zeta_{i} \\ \zeta_{i+1} \end{pmatrix} = \begin{pmatrix} c_{i} & -s_{i} \\ s_{i} & c_{i} \end{pmatrix}\begin{pmatrix} \zeta_{i} \\ \zeta_{i+1} \end{pmatrix}$$
7. $$h_{i,i} = c_{i} h_{i,i} - s_{i} h_{i+1,i}$$
8. $$h_{i+1,i} = 0$$

• Update solution vector $$\lbrack s, k, n, V, Q, R, \mathcal{H}, \zeta, x_{0} \rbrack$$

1. $$y = \mathcal{H}^{-1}\zeta$$
2. solve $$\widehat{x}$$ from $$K\widehat{x} = \sum_{j=0}^{sk-1} Q_{j} y_{j} + \sum_{l=sk}^{n-1} V_{l-sk}\sum_{j=sk}^{n-1} R_{l-sk,\,j-sk}^{-1} y_{j}$$
3. $$x_{0} = x_{0} + \widehat{x}$$

## 1.5 Preconditioned Chebyshev basis Conjugate Gradient Method [4,5]

The Chebyshev basis Conjugate Gradient (CBCG) method is an iterative Krylov subspace algorithm for solving linear systems with symmetric matrices. The CBCG method computes $$k$$ iterations of the CG method in one iteration and thereby reduces the number of global collective communication calls. In order to construct the Chebyshev basis, the largest and smallest eigenvalues are needed. PARCEL provides two methods to obtain these eigenvalues: the power method and the communication avoiding Arnoldi method.

• Algorithm

1. Compute $$r_{0} = b - Ax_{0}$$ for some initial guess $$x_{-1} = x_{0}$$
2. Compute Chebyshev basis $$\lbrack S_{0}, AS_{0}, r_{0}, \lambda_{\max}, \lambda_{\min} \rbrack$$
3. $$Q_{0} = S_{0}$$
4. for $$i = 0,1,2,3\ldots$$
5. Compute $$Q_{i}^{T} A Q_{i}$$, $$Q_{i}^{T} r_{ik}$$
6. $$\alpha_{i} = \left( Q_{i}^{T} A Q_{i} \right)^{-1}\left( Q_{i}^{T} r_{ik} \right)$$
7. $$x_{(i+1)k} = x_{ik} + Q_{i}\alpha_{i}$$
8. $$r_{(i+1)k} = r_{ik} - A Q_{i}\alpha_{i}$$
9. if $$\left\| r_{(i+1)k} \right\| \, / \, \left\| b \right\|$$ small enough then quit
10. Compute Chebyshev basis $$\lbrack S_{i+1}, AS_{i+1}, r_{(i+1)k}, \lambda_{\max}, \lambda_{\min} \rbrack$$
11. Compute $$Q_{i}^{T} A S_{i+1}$$
12. $$B_{i} = \left( Q_{i}^{T} A Q_{i} \right)^{-1}\left( Q_{i}^{T} A S_{i+1} \right)$$
13. $$Q_{i+1} = S_{i+1} - Q_{i} B_{i}$$
14. $$AQ_{i+1} = AS_{i+1} - AQ_{i} B_{i}$$
15. end

• Compute Chebyshev basis $$\lbrack S, AS, r, \lambda_{\max}, \lambda_{\min} \rbrack$$

$$\lambda_{\max}$$ and $$\lambda_{\min}$$ represent the largest and smallest eigenvalues of $$AK^{-1}$$.

1. $$\eta = 2 \, / \, \left( \lambda_{\max} - \lambda_{\min} \right)$$
2. $$\zeta = \left( \lambda_{\max} + \lambda_{\min} \right) \, / \, \left( \lambda_{\max} - \lambda_{\min} \right)$$
3. $$s_{0} = r_{0}$$
4. solve $$\widetilde{s}_{0}$$ from $$K\widetilde{s}_{0} = s_{0}$$
5. $$s_{1} = \eta A\widetilde{s}_{0} - \zeta s_{0}$$
6. solve $$\widetilde{s}_{1}$$ from $$K\widetilde{s}_{1} = s_{1}$$
7. for $$j = 2,3,\ldots,k$$
8. $$s_{j} = 2\eta A\widetilde{s}_{j-1} - 2\zeta s_{j-1} - s_{j-2}$$
9. solve $$\widetilde{s}_{j}$$ from $$K\widetilde{s}_{j} = s_{j}$$
10. end
11. $$S = \left( \widetilde{s}_{0}, \widetilde{s}_{1}, \ldots, \widetilde{s}_{k-1} \right)$$
12. $$AS = \left( A\widetilde{s}_{0}, A\widetilde{s}_{1}, \ldots, A\widetilde{s}_{k-1} \right)$$

## 1.6 Preconditioned Communication Avoiding Arnoldi Method [2]

The Preconditioned Communication Avoiding Arnoldi method (CA-Arnoldi(s,t)) is an eigenvalue solver for nonsymmetric matrices. CA-Arnoldi(s,t) computes $$s$$ iterations of the Arnoldi method in a single iteration, and by repeating this $$t$$ times, the eigenvalues and eigenvectors of the t$$\times$$s Hessenberg matrix are computed. By applying communication-reducing QR factorization algorithms, the number of collective communications is reduced compared to the Arnoldi method.

• Algorithm description

$$e$$ represents a unit vector, $${\widetilde{\rho}}_{k} = \left( R_{k} \right)_{s,s}$$ represents the element in the $$s$$-th row and $$s$$-th column of the matrix $$R_{k}$$, and $$E$$ represents the eigenvalues obtained from the Hessenberg matrix generated by the iteration process.

1. $$B_{k}^{\sim} = \lbrack e_{2}, e_{3}, \ldots, e_{s+1} \rbrack$$
2. $$r = x_{0}$$
3. $$\zeta = \left( \left\| r \right\|, 0, \ldots, 0 \right)^{T}$$
4. for $$k = 0,1\ldots,t-1$$
5. Fix basis conversion matrix $$\lbrack B_{k}^{\sim}, E \rbrack$$
6. Compute basis vectors $$\lbrack s, {\acute{V}}_{k}^{\sim}, B_{k}^{\sim} \rbrack$$
7. if (k.eq.0)
8. QR decomposition $$V_{0}^{\sim} = Q_{0}^{\sim} R_{0}^{\sim}$$
9. $$\mathfrak{Q}_{0}^{\sim} = Q_{0}^{\sim}$$
10. $$\mathfrak{H}_{0}^{\sim} = R_{0}^{\sim} B_{0}^{\sim} R_{0}^{-1}$$
11. $$\mathcal{H}_{0} = \mathfrak{H}_{0}^{\sim}$$
12. for $$o = 1,2\ldots,s$$
13. Givens rotation $$\lbrack o, \mathcal{H}_{0}, \zeta \rbrack$$
14. if $$\left\| \zeta_{o+1} \right\| \, / \, \left\| b \right\|$$ small enough then
15. quit
16. endif
17. else
18. $${\acute{\mathfrak{R}}}_{k-1,k}^{\sim} = \left( \mathfrak{Q}_{k-1}^{\sim} \right)^{H}{\acute{V}}_{k}^{\sim}$$
19. $${\acute{V}}_{k}^{\sim} = {\acute{V}}_{k}^{\sim} - \mathfrak{Q}_{k-1}^{\sim}{\acute{\mathfrak{R}}}_{k-1,k}^{\sim}$$
20. QR decomposition $${\acute{V}}_{k}^{\sim} = {\acute{Q}}_{k}^{\sim}{\acute{R}}_{k}^{\sim}$$
21. $$R_{k}^{\sim} = \begin{pmatrix} R_{k} & z_{k} \\ 0_{1,k} & \rho \end{pmatrix}$$
22. $$\mathfrak{H}_{k-1,k}^{\sim} = -\mathfrak{H}_{k-1}\mathfrak{R}_{k-1,k} R_{k}^{-1} + \mathfrak{R}_{k-1,k}^{\sim} B_{k}^{\sim} R_{k}^{-1}$$
23. $$H_{k} = R_{k} B_{k} R_{k}^{-1} + {\widetilde{\rho}}_{k}^{-1} b_{k} z_{k} e_{s}^{T} - h_{k-1} e_{1} e_{s(k-1)}^{T}\mathfrak{R}_{k-1,k} R_{k}^{-1}$$
24. $$h_{k} = {\widetilde{\rho}}_{k}^{-1}\rho_{k} b_{k}$$
25. $$\mathfrak{H}_{k}^{\sim} = \begin{pmatrix} \mathfrak{H}_{k-1} & \mathfrak{H}_{k-1,k}^{\sim} \\ \begin{matrix} h_{k-1} e_{1} e_{s(k-1)}^{T} \\ 0_{1,s(k-1)} \end{matrix} & \begin{matrix} H_{k} \\ h_{k} e_{s}^{T} \end{matrix} \end{pmatrix}$$
26. $$\mathcal{H}_{k} = \begin{pmatrix} \mathfrak{H}_{k-1,k}^{\sim} \\ H_{k} \\ h_{k} e_{s}^{T} \end{pmatrix}$$
27. for $$o = 1+sk, \ldots, s(k+1)$$
28. Givens rotation $$\lbrack o, \mathcal{H}_{k}, \zeta \rbrack$$
29. if $$\left\| \zeta_{o+1} \right\| \, / \, \left\| b \right\|$$ small enough then
30. quit
31. endif
32. end
33. endif
34. end
35. Solve eigenvalue problem $$\lbrack \mathfrak{H}_{t-1}^{\sim}, E, z \rbrack$$
36. Eigenvectors $$X = \mathfrak{Q}_{t-1}^{\sim} z$$

# 2 Preconditioning

In iterative algorithms, the application of a preconditioner of the form $$K \approx A$$ can help to reduce the number of iterations until convergence. The notation [solve $$\widehat{p}$$ from $$K\widehat{p} = p$$] means [approximately solve $$A\widehat{p} = p$$ with respect to the vector $$\widehat{p}$$]. Although a more accurate approximation leads to fewer iterations until convergence, it normally increases the cost of applying the preconditioner. In non-parallel computing, one of the most common preconditioners is the incomplete LU factorization (ILU). In parallel computing, however, parallel preconditioners are needed. PARCEL provides the following parallel preconditioners: point Jacobi, block Jacobi and additive Schwarz.

## 2.1 Point Jacobi Preconditioner [1]

The point Jacobi preconditioner is one of the simplest preconditioners: $$K$$ consists only of the diagonal elements of the matrix $$A$$. Compared to other preconditioners its efficiency is very low. However, it can easily be applied in parallel and it does not require any additional communication between processors.

## 2.2 Zero Fill-in Incomplete LU Factorization Preconditioners (ILU(0)) [1]

LU factorization decomposes a square matrix $$A$$ into a lower triangular matrix $$L$$ and an upper triangular matrix $$U$$. For a typical sparse matrix, the LU factors can be much less sparse than the original matrix; this effect is called fill-in, and it increases the memory and computational requirements. In order to avoid this issue, PARCEL provides the zero fill-in incomplete LU factorization (ILU(0)).

## 2.3 Diagonal Incomplete LU Factorization Preconditioners (D-ILU) [1]

The diagonal incomplete LU factorization (D-ILU) computes a diagonal matrix $$D$$ for the input matrix $$A$$, which is split into its lower triangular, diagonal and upper triangular parts:

$A = L_{A} + D_{A} + U_{A}$

The preconditioner $$K$$ is constructed from $$L_{A}$$, $$U_{A}$$ and $$D$$ ($$D \neq D_{A}$$) as:

$K = \left( D + L_{A} \right)D^{- 1}\left( D + U_{A} \right)$

Two different conditions exist for constructing the diagonal matrix $$D$$:

• Condition 1: The diagonal elements of $$K$$ equal those of $$A$$:

$A_{{ii}} = K_{{ii}}\ \ (i = 1,\ldots,n)$

• Condition 2: The row sums of $$K$$ equal the row sums of $$A$$:

$\sum_{j}^{}A_{{ij}} = \sum_{j}^{}K_{{ij}}\ \ (i = 1,\ldots,n)$

Given the conditions above, $$D$$ can be computed as follows:

• If condition 1 should be fulfilled, the following computation is used:

$D_{{ii}} = A_{{ii}} - \sum_{j < i}^{}{A_{{ij}}D_{jj}^{-1}A_{{ji}}}\ \ (i = 1,\ldots,n)$

• If condition 2 should be fulfilled, the following computation is used:

$D_{{ii}} = A_{{ii}} - \sum_{j < i}^{}{\sum_{k > j}^{}{A_{{ij}}D_{{jj}}^{-1}A_{jk}}}\ \ (i = 1,\ldots,n)$
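As a small worked example (ours, not from the PARCEL references): for $$A = \begin{pmatrix} 4 & 1 \\ 1 & 3 \end{pmatrix}$$, condition 1 gives

$$D_{11} = A_{11} = 4, \qquad D_{22} = A_{22} - A_{21} D_{11}^{-1} A_{12} = 3 - \frac{1}{4} = \frac{11}{4},$$

so that

$$K = \begin{pmatrix} 4 & 0 \\ 1 & \frac{11}{4} \end{pmatrix}\begin{pmatrix} \frac{1}{4} & 0 \\ 0 & \frac{4}{11} \end{pmatrix}\begin{pmatrix} 4 & 1 \\ 0 & \frac{11}{4} \end{pmatrix} = \begin{pmatrix} 4 & 1 \\ 1 & 3 \end{pmatrix},$$

whose diagonal indeed matches that of $$A$$. (For a 2$$\times$$2 matrix, $$K$$ reproduces $$A$$ exactly; for larger sparsity patterns, $$K$$ and $$A$$ differ wherever fill-in is dropped.)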
## 2.4 Block Jacobi Preconditioner [1]

The block Jacobi preconditioner constructs $$K$$ out of the diagonal blocks of the matrix $$A$$. For each block, an incomplete LU factorization is computed. When the preconditioner is applied, each block can be processed independently, resulting in a high level of parallelism. In addition, no communication is needed when each block is defined within a sub-matrix on each processor.

## 2.5 Additive Schwarz Preconditioner [1]

The additive Schwarz preconditioner constructs $$K$$ out of overlapping diagonal blocks of $$A$$. For each block, an incomplete LU factorization is computed. Compared to the block Jacobi preconditioner, the additive Schwarz preconditioner may require additional communication. PARCEL provides four different overlapping methods: BASIC, RESTRICT, INTERPOLATE and NONE.

Fig.1 Additive Schwarz Preconditioner with overlapping diagonal blocks

### BASIC

Solve $$Ks=r$$ with $$r$$ extended by the overlapping region shared with the neighboring processes; $$s$$ in the overlapping region is determined by summing up the results of the neighboring regions.

Fig.2 BASIC

### RESTRICT

Solve $$Ks=r$$ with $$r$$ extended by the overlapping region shared with the neighboring processes; $$s$$ in the overlapping region is determined without taking account of the results of the neighboring regions.

Fig.3 RESTRICT

### INTERPOLATE

Solve $$Ks=r$$ with $$r$$ not extended by the overlapping region; $$s$$ in the overlapping region is determined by summing up the results of the neighboring regions.

Fig.4 INTERPOLATE

### NONE

Solve $$Ks=r$$ with $$r$$ not extended by the overlapping region; $$s$$ in the overlapping region is determined without taking account of the results of the neighboring regions.

Fig.5 NONE

## 2.6 Fine-block Preconditioner [6]

The fine-block preconditioner generates a preconditioning matrix to which SIMD operations can be applied in the incomplete LU factorization. In each block of the block Jacobi preconditioner or the additive Schwarz preconditioner, fine diagonal blocks are defined. By ignoring off-diagonal elements only in the fine diagonal blocks, data dependency is eliminated and SIMD operations become possible. In Fig.6, 3$$\times$$3 fine blocks are defined within 9$$\times$$9 blocks of the block Jacobi preconditioner; data dependency is eliminated by ignoring off-diagonal elements only within the fine blocks (shown in white), and three vector elements are processed by SIMD operations.

Fig.6 Subdividing preconditioning

# 3 QR factorization

This section describes the QR factorization algorithms that can be used in this library.

## 3.1 Classical Gram-Schmidt (CGS) Method [1]

The QR factorization based on the classical Gram-Schmidt orthogonalization. This algorithm has high parallelism but poor orthogonality.

• Algorithm description

1. for $$i = 1,\ldots,n$$ do
2. for $$k = 1,i-1$$ do
3. $$R_{k,i} = \left\langle Q_{k}, V_{i} \right\rangle$$
4. enddo
5. for $$k = 1,i-1$$ do
6. $$V_{i} = V_{i} - R_{k,i} Q_{k}$$
7. enddo
8. $$R_{i,i} = \left\| V_{i} \right\|$$
9. $$Q_{i} = \frac{1}{\left\| V_{i} \right\|} V_{i}$$
10. enddo

## 3.2 Modified Gram-Schmidt (MGS) Method [1]

The QR factorization with a modified version of the classical Gram-Schmidt algorithm that reduces rounding errors. It improves the orthogonality over the classical Gram-Schmidt method, but requires more collective communication.

• Algorithm description

1. for $$i = 1,\ldots,n$$ do
2. for $$k = 1,i-1$$ do
3. $$R_{k,i} = \left\langle Q_{k}, V_{i} \right\rangle$$
4. $$V_{i} = V_{i} - R_{k,i} Q_{k}$$
5. enddo
6. $$R_{i,i} = \left\| V_{i} \right\|$$
7. $$Q_{i} = \frac{1}{\left\| V_{i} \right\|} V_{i}$$
8. enddo

## 3.3 Tall Skinny QR (TSQR) Method [2]

TSQR is based on the Householder QR factorization shown below and has good orthogonality. The sequential TSQR is used between threads, while the parallel TSQR is used between processes.

• Algorithm description

1. for $$i = 1,\ldots,n$$ do
2. $$y_{i}(i:m) = A(i:m,i) - \left\| A(i:m,i) \right\| e_{i}(i:m)$$
3. $$t_{i} = \frac{2}{\left\langle y_{i}\left( i:m \right), y_{i}(i:m) \right\rangle}$$
4. $$Q_{i} = \left( I - t_{i} y_{i}(i:m) y_{i}(i:m)^{T} \right)$$
5. $$A(i:m,i) = Q_{i} A(i:m,i)$$
6. enddo

Fig. Sequential TSQR and parallel TSQR

## 3.4 Cholesky QR Method [7]

The Cholesky QR factorization consists of a matrix product and a Cholesky factorization. This algorithm has high computational intensity and can be computed with one collective communication. However, its orthogonality is poor.

• Algorithm description

1. $$B = V^{T} V$$
2. $$R^{T} R = \text{Cholesky decomposition}(B)$$
3. $$Q = V R^{-1}$$

## 3.5 Cholesky QR2 Method [8]

The Cholesky QR2 factorization applies a second Cholesky QR factorization to the orthogonal matrix obtained by the first Cholesky QR factorization. This improves the orthogonality by executing the Cholesky QR factorization twice.

• Algorithm description

1. $$B_{1} = V^{T} V$$
2. $$R_{1}^{T} R_{1} = \text{Cholesky decomposition}(B_{1})$$
3. $$Q_{1} = V R_{1}^{-1}$$
4. $$B_{2} = Q_{1}^{T} Q_{1}$$
5. $$R_{2}^{T} R_{2} = \text{Cholesky decomposition}(B_{2})$$
6. $$Q_{2} = Q_{1} R_{2}^{-1}$$
7. $$V = Q_{1} R_{1} = Q_{2} R_{2} R_{1} = QR$$
8. $$Q = Q_{2}$$
9. $$R = R_{2} R_{1}$$
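As an illustration of why Cholesky QR needs only one collective communication, a serial sketch using standard BLAS/LAPACK calls is shown below (our example, not a PARCEL routine; in the parallel version only the formation of $$B = V^{T}V$$ requires a global reduction):

```fortran
! Cholesky QR of a tall-skinny m x n matrix V using BLAS/LAPACK.
! On exit, V holds Q and the upper triangle of B holds R.
subroutine cholesky_qr(V, m, n, B, info)
  implicit none
  integer, intent(in) :: m, n
  real(8), intent(inout) :: V(m,n), B(n,n)
  integer, intent(out) :: info
  call dsyrk('U', 'T', n, m, 1.0d0, V, m, 0.0d0, B, n)    ! B = V^T V (one Allreduce in parallel)
  call dpotrf('U', n, B, n, info)                         ! upper triangle of B = R, R^T R = V^T V
  if (info /= 0) return                                   ! B not positive definite: basis degenerated
  call dtrsm('R', 'U', 'N', 'N', m, n, 1.0d0, B, n, V, m) ! V = V R^{-1} = Q
end subroutine cholesky_qr
```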
# 4 Sparse Matrix Data Formats

In order to save memory in sparse matrix computations, only the non-zero elements and their locations are stored. PARCEL supports the following sparse matrix formats.

## 4.1 Compressed Row Storage (CRS) Format

The CRS format compresses the row indices of the non-zero elements of the sparse matrix. Together with the compressed row pointers, the column index and the value of each non-zero element are stored in one-dimensional arrays. The compressed row pointers are stored in increasing order.

$A = \left( \begin{array}{cccc} a & b & c & 0\\ 0 & d & 0 & 0\\ e & 0 & f & g\\ 0 & h & i & j \end{array} \right)$

### Single process case

• In case a single process processes the whole matrix above:

Values of non-zero elements :: $$crsA=\{a,b,c,d,e,f,g,h,i,j\}$$

Compressed row pointers :: $$crsRow\_ptr=\{1,4,5,8,11\}$$

Column indices of non-zero elements :: $$crsCol=\{1,2,3,2,1,3,4,2,3,4\}$$

### Two process case

• Rank 0 process

Values of non-zero elements :: $$crsA=\{a,b,c,d\}$$

Compressed row pointers :: $$crsRow\_ptr=\{1,4,5\}$$

Column indices of non-zero elements :: $$crsCol=\{1,2,3,2\}$$

• Rank 1 process

Values of non-zero elements :: $$crsA=\{e,f,g,h,i,j\}$$

Compressed row pointers :: $$crsRow\_ptr=\{1,4,7\}$$

Column indices of non-zero elements :: $$crsCol=\{1,3,4,2,3,4\}$$

## 4.2 Diagonal (DIA) Format

The DIA format stores the non-zero elements of a matrix as diagonals. This format provides high performance for band block diagonal matrices. In addition to the element values, an offset value for each diagonal of the matrix is stored in a one-dimensional array. A negative offset indicates that the diagonal is below the main diagonal of the matrix, a positive offset indicates that it is above. The main diagonal has the offset zero. Offset values are stored in increasing order.

$A = \left( \begin{array}{cccc} a & b & c & 0\\ 0 & d & 0 & 0\\ e & 0 & f & g\\ 0 & h & i & j \end{array} \right)$

### Single process case

• In case the matrix above is processed by a single process:

Elements :: $$diaA=\{0,0,e,h,0,0,0,i,a,d,f,j,b,0,g,0,c,0,0,0\}$$

Offsets :: $$offset=\{-2,-1,0,1,2\}$$

Number of diagonals :: $$nnd=5$$

### Two process case

• Rank 0 process

Elements :: $$diaA=\{a,d,b,0,c,0\}$$

Offsets :: $$offset=\{0,1,2\}$$

Number of diagonals :: $$nnd=3$$

• Rank 1 process

Elements :: $$diaA=\{e,h,0,i,f,j,g,0\}$$

Offsets :: $$offset=\{-2,-1,0,1\}$$

Number of diagonals :: $$nnd=4$$
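As a usage sketch, the single-process CRS arrays of the 4$$\times$$4 example from Section 4.1 can be set up in Fortran as follows (numeric placeholder values stand in for the symbolic entries a…j):

```fortran
! CRS arrays for the 4x4 example matrix (single process).
! The values 1.0..10.0 stand in for the symbolic entries a..j.
integer, parameter :: n = 4, nnz = 10
real(8) :: crsA(nnz)
integer :: crsRow_ptr(n+1), crsCol(nnz)
crsA       = (/ 1d0, 2d0, 3d0, 4d0, 5d0, 6d0, 7d0, 8d0, 9d0, 10d0 /)
crsRow_ptr = (/ 1, 4, 5, 8, 11 /)  ! row i occupies entries crsRow_ptr(i)..crsRow_ptr(i+1)-1
crsCol     = (/ 1, 2, 3,  2,  1, 3, 4,  2, 3, 4 /)
```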
## 4.3 Domain Decomposition Method (DDM) Format

### 4.3.1 Domain Decomposition Parameters

The DDM format extends the DIA format for simulations based on stencil computations, such as finite differences, with domain decomposition. By specifying the list of processes which exchange data with each other so that it matches the domain decomposition of the simulation, the communication needed for parallel processing of matrices given by one-, two- and three-dimensional structured grids can be minimized. The related parameter settings are shown below.

• Number of neighbor processes: num_neighbor_ranks

The number of neighboring processes for $$m$$-dimensional domain decomposition is given by $$3^{m}-1$$. For example, in the two-dimensional case the number of neighboring processes is eight, while in the three-dimensional case it becomes 26.

• 1D array: neighbor_ranks ( num_neighbor_ranks )

A list of the neighboring processes is specified by their ranks on the (one-dimensional) MPI communicator. At the boundaries of the computational domain, negative indices are substituted into neighbor_ranks for the directions in which no neighboring process exists. The sample code below can be used as the standard method for setting neighbor_ranks. The parameters npe_x, npe_y and npe_z represent the number of processes in each direction (x, y, z); rank_x, rank_y and rank_z represent the ranks in each direction. The definition of the one-dimensional index iirank should be consistent with that in the simulation.

```fortran
ixmax = 0; iymax = 0; izmax = 0; num = 0; neighbor_ranks = -1
if ( npe_x > 1 ) ixmax = 1
if ( npe_y > 1 ) iymax = 1
if ( npe_z > 1 ) izmax = 1
do iz = -izmax, izmax
  do iy = -iymax, iymax
    do ix = -ixmax, ixmax
      n1x = rank_x + ix; n1y = rank_y + iy; n1z = rank_z + iz
      if ( ix*ix + iy*iy + iz*iz == 0 ) cycle
      num = num + 1
      if ( n1z >= 0 .and. n1z < npe_z ) then
        if ( n1y >= 0 .and. n1y < npe_y ) then
          if ( n1x >= 0 .and. n1x < npe_x ) then
            iirank = npe_x *npe_y *n1z + npe_x *n1y + n1x
            neighbor_ranks(num) = iirank
          endif
        endif
      endif
    end do
  end do
end do
```

• 2D array: ioff_grids ( nnd, ncomp_grids )

The grid positions referred to in stencil computations, such as finite differences, are specified for each direction. For example, for a seven-point stencil consisting of three-point finite differences for the three-dimensional Poisson equation, the grid offsets for east, west, south, north, top and bottom from the base grid can be set as shown below.

|        | i | ioff_grids(\*,1) | ioff_grids(\*,2) | ioff_grids(\*,3) |
|--------|---|------------------|------------------|------------------|
| base   | 1 | 0                | 0                | 0                |
| east   | 2 | 1                | 0                | 0                |
| west   | 3 | -1               | 0                | 0                |
| south  | 4 | 0                | 1                | 0                |
| north  | 5 | 0                | -1               | 0                |
| top    | 6 | 0                | 0                | 1                |
| bottom | 7 | 0                | 0                | -1               |

• Matrix structure

One-dimensional Poisson equation with three-point finite differences:

$A = \left( \begin{array}{cccc} a & b & 0 & 0\\ c & d & e & 0\\ 0 & f & g & h\\ 0 & 0 & i & j \end{array} \right)$

### Single process case

• Rank 0 process

Elements :: $$val\_dia=\{a,d,g,j,b,e,h,0,0,c,f,i\}$$

Offsets :: $$ioff\_dia=\{0,0,-1\}$$

Number of diagonals :: $$nnd=3$$

Stencils ::

|           | i | ioff_grids(\*,1) |
|-----------|---|------------------|
| reference | 1 | 0                |
| east      | 2 | 1                |
| west      | 3 | -1               |

### Two process case

• Rank 0 process

Elements :: $$val\_dia=\{a,d,b,e,0,c\}$$

Offsets :: $$ioff\_dia=\{0,0,-1\}$$

Number of diagonals :: $$nnd=3$$

Stencils ::

|           | i | ioff_grids(\*,1) |
|-----------|---|------------------|
| reference | 1 | 0                |
| east      | 2 | 1                |
| west      | 3 | -1               |

• Rank 1 process

Elements :: $$val\_dia=\{g,j,h,0,f,i\}$$

Offsets :: $$ioff\_dia=\{0,0,0\}$$

Number of diagonals :: $$nnd=3$$

Stencils ::

|           | i | ioff_grids(\*,1) |
|-----------|---|------------------|
| reference | 1 | 0                |
| east      | 2 | 1                |
| west      | 3 | -1               |

# 5 Install

This chapter explains how to compile and install the PARCEL library.

## 5.1 Compiling the PARCEL library

PARCEL requires a C compiler, a Fortran compiler with preprocessor support, MPI, LAPACK, and OpenMP. When compiling with gfortran, the option -cpp has to be specified to activate the preprocessor. Compiler options can be changed in the file arch/make_config; an example of a make_config file is shown below. To compile, execute the make command in the top directory of the extracted archive. Then execute "make install" to create the example, include and lib directories in the directory specified by INSTALLDIR. If parcel_sparse.a has been created in the lib directory, the compilation was successful. For large-scale problems of 2 billion dimensions or more, 64-bit integers are required, so it is necessary to compile with 64-bit integer types.
• An example of make_config

```
CC = mpiicc
FC = mpiifort
CFLAGS = -O3
FFLAGS = -O3 -fpp -qopenmp -xHost -mcmodel=large
FP_MODE = -fp-model precise
INCLUDEDIR =
LD_MPI = -lmpifort -lifcore
LD_LAPACK = -mkl
#FDEFINE = -DPARCEL_INT8
INSTALLDIR = ../PARCEL_1.1
```

| Option     | Explanation |
|------------|-------------|
| CC         | C compiler command name |
| FC         | FORTRAN compiler command name |
| CFLAGS     | C compiler options |
| FFLAGS     | FORTRAN compiler options |
| FP_MODE    | Option to suppress optimizations with arithmetic order change (only for quad precision) |
| INCLUDEDIR | Include directories |
| LD_MPI     | MPI library |
| LD_LAPACK  | LAPACK library |
| FDEFINE    | 64-bit integers become available with -DPARCEL_INT8 |
| INSTALLDIR | PARCEL installation directory |

## 5.2 Usage of the PARCEL library

All the routines in the PARCEL library can be called by linking parcel_sparse.a, which is generated in the directory lib.

# 6 PARCEL routines

The interface of each routine in the PARCEL library is described below. The following conventions are common to all routines.

• Precision

The names of routines starting with 'parcel_d' and 'parcel_dd' denote the PARCEL routines implemented in double precision and in quadruple (double-double) precision, respectively.

• Communication-computation overlap

The communication-computation overlap is implemented in three modes. iovlflag=0 does not use any communication-computation overlap. iovlflag=1 uses the communication-computation overlap in which the computation is started after all non-blocking communications have been launched at once. iovlflag=2 uses the communication-computation overlap in which non-blocking communication and computation in each direction are processed sequentially with barrier synchronization, which reuses and saves communication buffers.

• Number of blocks in the block Jacobi preconditioner and the additive Schwarz preconditioner

When precon_thblock is zero or negative, or is greater than the number of threads per process, precon_thblock is set to the number of threads per process, which is the default value. One can reduce the number of blocks to enlarge the block size and improve the accuracy of the preconditioning, but this also degrades the thread parallel performance.

• Size of diagonal blocks in the fine-block preconditioner

When independ_nvec is zero or negative, the fine-block preconditioner is not applied. On A64FX, independ_nvec=300 is recommended to facilitate optimizations for SIMD operations and software pipelining by the Fujitsu compiler. A larger independ_nvec improves the computational performance, while the accuracy of the preconditioning becomes worse. When independ_nvec is greater than the size n of the vector on each process, the fine-block preconditioner becomes equivalent to the point Jacobi preconditioner.

Thread optimization of SpMV employs a cyclic loop division to facilitate the reuse of on-cache data between different threads; nBlock is the chunk size of this loop division. When nBlock is zero or negative, the default value nBlock=2000 is used.

• Data structure of eigenvectors in the CA-Arnoldi method

When the i-th eigenvalue is real, the corresponding eigenvector is stored in Evec(n*(i-1)+1:n*i). When the i-th eigenvalue is complex and the (i+1)-th eigenvalue is its complex conjugate, the i-th and (i+1)-th eigenvectors are stored as follows:

| Eigenvector | Real part             | Imaginary part         |
|-------------|-----------------------|------------------------|
| i-th        | Evec(n*(i-1)+1:n*i)   | Evec(n*i+1:n*(i+1))    |
| (i+1)-th    | Evec(n*(i-1)+1:n*i)   | -Evec(n*i+1:n*(i+1))   |
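A hypothetical sketch of how a complex eigenpair could be reconstructed from this layout (assuming a nonzero Evali(i) marks the first member of a conjugate pair; not a PARCEL routine):

```fortran
! Reconstruct the i-th eigenpair from Evalr/Evali/Evec; illustrative only.
complex(8) :: zval, zvec(n)
if (Evali(i) == 0.0d0) then
   zval = cmplx(Evalr(i), 0.0d0, kind=8)
   zvec = cmplx(Evec(n*(i-1)+1:n*i), 0.0d0, kind=8)               ! real eigenvector
else
   zval = cmplx(Evalr(i), Evali(i), kind=8)
   zvec = cmplx(Evec(n*(i-1)+1:n*i), Evec(n*i+1:n*(i+1)), kind=8) ! complex eigenvector
end if
```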
## 6.1 parcel_dcg

• Interface

```fortran
call parcel_dcg( icomm, vecx, vecb, n, gn, nnz, istart,   &
                 crsA, crsRow_ptr, crsCol,                &
                 ipreflag, ilu_method, addL, iflagAS,     &
                 itrmax, rtolmax, reshistory, iovlflag,   &
                 precon_thblock, independ_nvec, nBlock,   &
                 iret )
```

• Function

A system of linear equations Ax = b is solved by the conjugate gradient method (CG method). Non-zero elements are stored in CRS format.

| Parameter (dimension) | Type | In/Out | Description |
|---|---|---|---|
| icomm | integer | in | MPI communicator |
| vecx(n) | double precision | in/out | in: initial vector, out: solution vector |
| vecb(n) | double precision | in | Right-hand-side vector |
| n | integer\*4 / integer\*8 | in | Size of vector on each process |
| gn | integer\*4 / integer\*8 | in | Total size of vector |
| nnz | integer\*4 / integer\*8 | in | Number of non-zero elements on each process |
| istart | integer\*4 / integer\*8 | in | Starting row of the matrix on each process |
| crsA(nnz) | double precision | in | Non-zero elements of the matrix stored in CRS format |
| crsRow_ptr(n+1) | integer\*4 / integer\*8 | in | Pointer table in CRS format |
| crsCol(nnz) | integer\*4 / integer\*8 | in | Column numbers of non-zero elements in CRS format |
| ipreflag | integer | in | Flag for preconditioner (0: none, 1: point Jacobi, 2: block Jacobi, 3: additive Schwarz) |
| ilu_method | integer | in | Flag for incomplete LU factorization (0: ILU(0), 1: D-ILU (diagonal components match), 2: D-ILU (row sums match); only 1 and 2 are available for additive Schwarz) |
| iflagAS | integer | in | Flag for the additive Schwarz method (1: BASIC, 2: RESTRICT, 3: INTERPOLATE, 4: NONE) |
| itrmax | integer | in/out | in: maximum number of iterations, out: number of iterations |
| rtolmax | double precision | in | Convergence criterion (norm of relative residual error) |
| reshistory(itrmax) | double precision | out | History of relative residual error |
| iovlflag | integer | in | Flag for communication-computation overlap (0: none, 1: all processes, 2: each process) |
| precon_thblock | integer | in | Number of blocks per process for block Jacobi and additive Schwarz preconditioning |
| independ_nvec | integer | in | Size of diagonal blocks to be computed independently in the fine-block preconditioner |
| nBlock | integer | in | Number of rows allocated to threads cyclically |
| iret | integer | out | Error code (0: normal) |
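A minimal single-process driver is sketched below for the tridiagonal matrix $$\mathrm{tridiag}(-1,2,-1)$$ in CRS format. The flag values follow the table above (point Jacobi preconditioning, no overlap); addL is set to 1 here, but it is only relevant for the additive Schwarz preconditioner and is unused with this setting. This is an illustrative sketch, not an official PARCEL example:

```fortran
program parcel_dcg_example
  use mpi
  implicit none
  integer, parameter :: n = 4, gn = 4, nnz = 10, istart = 1, maxitr = 100
  real(8) :: vecx(n), vecb(n), crsA(nnz), reshistory(maxitr), rtolmax
  integer :: crsRow_ptr(n+1), crsCol(nnz), icomm, ierr, itrmax, iret

  call MPI_Init(ierr)
  icomm = MPI_COMM_WORLD
  ! 1-D Poisson matrix tridiag(-1,2,-1), n = 4, in CRS format
  crsA       = (/ 2d0,-1d0,  -1d0,2d0,-1d0,  -1d0,2d0,-1d0,  -1d0,2d0 /)
  crsRow_ptr = (/ 1, 3, 6, 9, 11 /)
  crsCol     = (/ 1,2,  1,2,3,  2,3,4,  3,4 /)
  vecb = 1.0d0; vecx = 0.0d0            ! right-hand side and initial guess
  itrmax = maxitr; rtolmax = 1.0d-8
  call parcel_dcg( icomm, vecx, vecb, n, gn, nnz, istart,  &
                   crsA, crsRow_ptr, crsCol,               &
                   1, 0, 1, 4,                             &  ! ipreflag=1 (point Jacobi), ilu_method=0, addL=1, iflagAS=4
                   itrmax, rtolmax, reshistory,            &
                   0, 0, 0, 0,                             &  ! iovlflag=0, defaults for precon_thblock, independ_nvec, nBlock
                   iret )
  print *, 'iret =', iret, ' iterations =', itrmax, ' x =', vecx
  call MPI_Finalize(ierr)
end program parcel_dcg_example
```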
## 6.2 parcel_dbicgstab

• Interface

```fortran
call parcel_dbicgstab( icomm, vecx, vecb, n, gn, nnz, istart,  &
                       crsA, crsRow_ptr, crsCol,               &
                       ipreflag, ilu_method, addL, iflagAS,    &
                       itrmax, rtolmax, reshistory, iovlflag,  &
                       precon_thblock, independ_nvec, nBlock,  &
                       iret )
```

• Function

A system of linear equations Ax = b is solved by the stabilized biconjugate gradient method (Bi-CGSTAB method). Non-zero elements are stored in CRS format.

| Parameter (dimension) | Type | In/Out | Description |
|---|---|---|---|
| icomm | integer | in | MPI communicator |
| vecx(n) | double precision | in/out | in: initial vector, out: solution vector |
| vecb(n) | double precision | in | Right-hand-side vector |
| n | integer\*4 / integer\*8 | in | Size of vector on each process |
| gn | integer\*4 / integer\*8 | in | Total size of vector |
| nnz | integer\*4 / integer\*8 | in | Number of non-zero elements on each process |
| istart | integer\*4 / integer\*8 | in | Starting row of the matrix on each process |
| crsA(nnz) | double precision | in | Non-zero elements of the matrix stored in CRS format |
| crsRow_ptr(n+1) | integer\*4 / integer\*8 | in | Pointer table in CRS format |
| crsCol(nnz) | integer\*4 / integer\*8 | in | Column numbers of non-zero elements in CRS format |
| ipreflag | integer | in | Flag for preconditioner (0: none, 1: point Jacobi, 2: block Jacobi, 3: additive Schwarz) |
| ilu_method | integer | in | Flag for incomplete LU factorization (0: ILU(0), 1: D-ILU (diagonal components match), 2: D-ILU (row sums match); only 1 and 2 are available for additive Schwarz) |
| iflagAS | integer | in | Flag for the additive Schwarz method (1: BASIC, 2: RESTRICT, 3: INTERPOLATE, 4: NONE) |
| itrmax | integer | in/out | in: maximum number of iterations, out: number of iterations |
| rtolmax | double precision | in | Convergence criterion (norm of relative residual error) |
| reshistory(itrmax) | double precision | out | History of relative residual error |
| iovlflag | integer | in | Flag for communication-computation overlap (0: none, 1: all processes, 2: each process) |
| precon_thblock | integer | in | Number of blocks per process for block Jacobi and additive Schwarz preconditioning |
| independ_nvec | integer | in | Size of diagonal blocks to be computed independently in the fine-block preconditioner |
| nBlock | integer | in | Number of rows allocated to threads cyclically |
| iret | integer | out | Error code (0: normal) |

## 6.3 parcel_dgmres

• Interface

```fortran
call parcel_dgmres( icomm, vecx, vecb, n, gn, nnz, istart,  &
                    crsA, crsRow_ptr, crsCol,               &
                    ipreflag, ilu_method, addL, iflagAS,    &
                    itrmax, rtolmax, reshistory, iovlflag,  &
                    precon_thblock, independ_nvec, nBlock,  &
                    gmres_m, gmres_GSflag, iret )
```

• Function

A system of linear equations Ax = b is solved by the generalized minimum residual method (GMRES(m) method). Non-zero elements are stored in CRS format.
| Parameter (dimension) | Type | In/Out | Description |
|---|---|---|---|
| icomm | integer | in | MPI communicator |
| vecx(n) | double precision | in/out | in: initial vector, out: solution vector |
| vecb(n) | double precision | in | Right-hand-side vector |
| n | integer\*4 / integer\*8 | in | Size of vector on each process |
| gn | integer\*4 / integer\*8 | in | Total size of vector |
| nnz | integer\*4 / integer\*8 | in | Number of non-zero elements on each process |
| istart | integer\*4 / integer\*8 | in | Starting row of the matrix on each process |
| crsA(nnz) | double precision | in | Non-zero elements of the matrix stored in CRS format |
| crsRow_ptr(n+1) | integer\*4 / integer\*8 | in | Pointer table in CRS format |
| crsCol(nnz) | integer\*4 / integer\*8 | in | Column numbers of non-zero elements in CRS format |
| ipreflag | integer | in | Flag for preconditioner (0: none, 1: point Jacobi, 2: block Jacobi, 3: additive Schwarz) |
| ilu_method | integer | in | Flag for incomplete LU factorization (0: ILU(0), 1: D-ILU (diagonal components match), 2: D-ILU (row sums match); only 1 and 2 are available for additive Schwarz) |
| iflagAS | integer | in | Flag for the additive Schwarz method (1: BASIC, 2: RESTRICT, 3: INTERPOLATE, 4: NONE) |
| itrmax | integer | in/out | in: maximum number of iterations, out: number of iterations |
| rtolmax | double precision | in | Convergence criterion (norm of relative residual error) |
| reshistory(itrmax) | double precision | out | History of relative residual error |
| iovlflag | integer | in | Flag for communication-computation overlap (0: none, 1: all processes, 2: each process) |
| precon_thblock | integer | in | Number of blocks per process for block Jacobi and additive Schwarz preconditioning |
| independ_nvec | integer | in | Size of diagonal blocks to be computed independently in the fine-block preconditioner |
| nBlock | integer | in | Number of rows allocated to threads cyclically |
| gmres_m | integer | in | Number of iterations until restart |
| gmres_GSflag | integer | in | Flag for orthogonalization algorithm (1: MGS, 2: CGS) |
| iret | integer | out | Error code (0: normal) |

## 6.4 parcel_dcagmres

• Interface

```fortran
call parcel_dcagmres( icomm, vecx, vecb, n, gn, nnz, istart,  &
                      crsA, crsRow_ptr, crsCol,               &
                      ipreflag, ilu_method, addL, iflagAS,    &
                      itrmax, rtolmax, reshistory, iovlflag,  &
                      precon_thblock, independ_nvec, nBlock,  &
                      cagmres_sstep, cagmres_tstep,           &
                      cagmres_basis, cagmres_QRflag, iret )
```

• Function

A system of linear equations Ax = b is solved by the communication avoiding generalized minimum residual method (CA-GMRES(s,t) method). Non-zero elements are stored in CRS format.
| Parameter (dimension) | Type | In/Out | Description |
|---|---|---|---|
| icomm | integer | in | MPI communicator |
| vecx(n) | double precision | in/out | in: initial vector, out: solution vector |
| vecb(n) | double precision | in | Right-hand-side vector |
| n | integer\*4 / integer\*8 | in | Size of vector on each process |
| gn | integer\*4 / integer\*8 | in | Total size of vector |
| nnz | integer\*4 / integer\*8 | in | Number of non-zero elements on each process |
| istart | integer\*4 / integer\*8 | in | Starting row of the matrix on each process |
| crsA(nnz) | double precision | in | Non-zero elements of the matrix stored in CRS format |
| crsRow_ptr(n+1) | integer\*4 / integer\*8 | in | Pointer table in CRS format |
| crsCol(nnz) | integer\*4 / integer\*8 | in | Column numbers of non-zero elements in CRS format |
| ipreflag | integer | in | Flag for preconditioner (0: none, 1: point Jacobi, 2: block Jacobi, 3: additive Schwarz) |
| ilu_method | integer | in | Flag for incomplete LU factorization (0: ILU(0), 1: D-ILU (diagonal components match), 2: D-ILU (row sums match); only 1 and 2 are available for additive Schwarz) |
| iflagAS | integer | in | Flag for the additive Schwarz method (1: BASIC, 2: RESTRICT, 3: INTERPOLATE, 4: NONE) |
| itrmax | integer | in/out | in: maximum number of iterations, out: number of iterations |
| rtolmax | double precision | in | Convergence criterion (norm of relative residual error) |
| reshistory(itrmax) | double precision | out | History of relative residual error |
| iovlflag | integer | in | Flag for communication-computation overlap (0: none, 1: all processes, 2: each process) |
| precon_thblock | integer | in | Number of blocks per process for block Jacobi and additive Schwarz preconditioning |
| independ_nvec | integer | in | Size of diagonal blocks to be computed independently in the fine-block preconditioner |
| nBlock | integer | in | Number of rows allocated to threads cyclically |
| cagmres_sstep | integer | in | Number of communication avoiding steps s in the CA-GMRES method |
| cagmres_tstep | integer | in | Number of outer iterations t in the CA-GMRES method (restart length = s·t) |
| cagmres_basis | integer | in | Flag for basis vectors in the CA-GMRES method (0: monomial basis, 1: Newton basis) |
| cagmres_QRflag | integer | in | Flag for QR factorization in the CA-GMRES method (1: MGS, 2: CGS, 3: TSQR, 4: Cholesky QR, 5: Cholesky QR2) |
| iret | integer | out | Error code (0: normal) |

## 6.5 parcel_dcbcg

• Interface

```fortran
call parcel_dcbcg( icomm, vecx, vecb, n, gn, nnz, istart,            &
                   crsA, crsRow_ptr, crsCol,                         &
                   ipreflag, ilu_method, addL, iflagAS,              &
                   itrmax, rtolmax, reshistory, iovlflag,            &
                   precon_thblock, independ_nvec, nBlock,            &
                   cbcg_kstep, cbcg_Eigenflag, power_method_itrmax,  &
                   caarnoldi_sstep, caarnoldi_tstep,                 &
                   caarnoldi_basis, caarnoldi_QRflag, iret )
```

• Function

A system of linear equations Ax = b is solved by the Chebyshev basis conjugate gradient method (CBCG method). Non-zero elements are stored in CRS format.
| Parameter (dimension) | Type | In/Out | Description |
|---|---|---|---|
| icomm | integer | in | MPI communicator |
| vecx(n) | double precision | in/out | in: initial vector, out: solution vector |
| vecb(n) | double precision | in | Right-hand-side vector |
| n | integer\*4 / integer\*8 | in | Size of vector on each process |
| gn | integer\*4 / integer\*8 | in | Total size of vector |
| nnz | integer\*4 / integer\*8 | in | Number of non-zero elements on each process |
| istart | integer\*4 / integer\*8 | in | Starting row of the matrix on each process |
| crsA(nnz) | double precision | in | Non-zero elements of the matrix stored in CRS format |
| crsRow_ptr(n+1) | integer\*4 / integer\*8 | in | Pointer table in CRS format |
| crsCol(nnz) | integer\*4 / integer\*8 | in | Column numbers of non-zero elements in CRS format |
| ipreflag | integer | in | Flag for preconditioner (0: none, 1: point Jacobi, 2: block Jacobi, 3: additive Schwarz) |
| ilu_method | integer | in | Flag for incomplete LU factorization (0: ILU(0), 1: D-ILU (diagonal components match), 2: D-ILU (row sums match); only 1 and 2 are available for additive Schwarz) |
| iflagAS | integer | in | Flag for the additive Schwarz method (1: BASIC, 2: RESTRICT, 3: INTERPOLATE, 4: NONE) |
| itrmax | integer | in/out | in: maximum number of iterations, out: number of iterations |
| rtolmax | double precision | in | Convergence criterion (norm of relative residual error) |
| reshistory(itrmax) | double precision | out | History of relative residual error |
| iovlflag | integer | in | Flag for communication-computation overlap (0: none, 1: all processes, 2: each process) |
| precon_thblock | integer | in | Number of blocks per process for block Jacobi and additive Schwarz preconditioning |
| independ_nvec | integer | in | Size of diagonal blocks to be computed independently in the fine-block preconditioner |
| nBlock | integer | in | Number of rows allocated to threads cyclically |
| cbcg_kstep | integer | in | Number of communication avoiding steps |
| cbcg_Eigenflag | integer | in | Flag for eigenvalue computation (1: power method, 2: CA-Arnoldi) |
| power_method_itrmax | integer | in | Maximum number of iterations in the power method |
| caarnoldi_sstep | integer | in | Number of communication avoiding steps s in the CA-Arnoldi method |
| caarnoldi_tstep | integer | in | Number of outer iterations t in the CA-Arnoldi method (restart length = s·t) |
| caarnoldi_basis | integer | in | Flag for basis vectors in the CA-Arnoldi method (0: monomial basis, 1: Newton basis) |
| caarnoldi_QRflag | integer | in | Flag for QR factorization in the CA-Arnoldi method (1: MGS, 2: CGS, 3: TSQR, 4: Cholesky QR, 5: Cholesky QR2) |
| iret | integer | out | Error code (0: normal) |

## 6.6 parcel_dcaarnoldi

• Interface

```fortran
call parcel_dcaarnoldi( icomm, vecx, vecb, n, gn, nnz, istart,    &
                        crsA, crsRow_ptr, crsCol,                 &
                        ipreflag, ilu_method, addL, iflagAS,      &
                        itrmax, iovlflag, precon_thblock,         &
                        independ_nvec, nBlock,                    &
                        caarnoldi_sstep, caarnoldi_tstep,         &
                        caarnoldi_basis, caarnoldi_QRflag,        &
                        Evalr, Evali, Evec, Eerr, EmaxID, EminID, &
                        iret )
```

• Function

An eigenvalue problem is solved by the communication avoiding Arnoldi method (CA-Arnoldi(s,t)). Non-zero elements are stored in CRS format.
| Parameter(dimension) | Type | Input/Output | Description |
|---|---|---|---|
| icomm | integer | in | MPI communicator |
| vecx(n) | double precision | in/out | in: initial vector, out: solution vector |
| vecb(n) | double precision | in | Right hand side vector |
| n | integer*4 / integer*8 | in | Size of vector on each process |
| gn | integer*4 / integer*8 | in | Total size of vector |
| nnz | integer*4 / integer*8 | in | Number of non-zero elements on each process |
| istart | integer*4 / integer*8 | in | Start line of matrix on each process |
| crsA(nnz) | double precision | in | Non-zero elements of matrix stored in CRS format |
| crsRow_ptr(n+1) | integer*4 / integer*8 | in | Pointer table in CRS format |
| crsCol(nnz) | integer*4 / integer*8 | in | Column numbers of non-zero elements in CRS format |
| ipreflag | integer | in | Flag for preconditioner (0: none, 1: point Jacobi, 2: block Jacobi, 3: additive Schwarz) |
| ilu_method | integer | in | Flag for incomplete LU factorization (0: ILU(0), 1: D-ILU (DIA components match), 2: D-ILU (element sums in a row match); only 1 and 2 are available for additive Schwarz) |
| iflagAS | integer | in | Flag for the additive Schwarz method (1: BASIC, 2: RESTRICT, 3: INTERPOLATE, 4: NONE) |
| itrmax | integer | in/out | in: maximum number of iterations, out: number of iterations |
| iovlflag | integer | in | Flag for communication-computation overlap (0: none, 1: all processes, 2: each process) |
| precon_thblock | integer | in | Number of blocks for each process of Block Jacobi preconditioning and Additive Schwarz preconditioning |
| independ_nvec | integer | in | Size of diagonal blocks to be calculated independently in subdivision preconditioning |
| nBlock | integer | in | Number of rows allocated to the thread cyclically |
| caarnoldi_sstep | integer | in | Number of communication avoiding steps s in the CA-ARNOLDI method |
| caarnoldi_tstep | integer | in | Number of outer iterations t in the CA-ARNOLDI method (restart length = st) |
| caarnoldi_basis | integer | in | Flag for basis vector in the CA-ARNOLDI method (0: monomial basis, 1: Newton basis) |
| caarnoldi_QRflag | integer | in | Flag for QR factorization in the CA-ARNOLDI method (1: MGS, 2: CGS, 3: TSQR, 4: CholeskyQR, 5: CholeskyQR2) |
| Evalr(caarnoldi_sstep*caarnoldi_tstep) | double precision | out | Real part of the eigenvalue |
| Evali(caarnoldi_sstep*caarnoldi_tstep) | double precision | out | Imaginary part of the eigenvalue |
| Evec(n*caarnoldi_sstep*caarnoldi_tstep) | double precision | out | Eigenvectors |
| Eerr(caarnoldi_sstep*caarnoldi_tstep) | double precision | out | Error norm $\left\Vert Ax - \lambda x \right\Vert / \left\Vert \lambda x \right\Vert$ |
| EmaxID | integer | out | ID number of the maximum eigenvalue |
| EminID | integer | out | ID number of the minimum eigenvalue |
| iret | integer | out | Error code (0: normal) |

## 6.7 parcel_dcg_dia

• Interface

```
call parcel_dcg_dia( icomm, vecx, vecb, n, gn, istart,
                     diaA, offset, nnd,
                     ipreflag, ilu_method, addL, iflagAS,
                     itrmax, rtolmax, reshistory, iovlflag,
                     precon_thblock, independ_nvec, nBlock,
                     iret )
```

• Function

A simultaneous linear equation system Ax = b is solved by the conjugate gradient method (CG method). Non-zero elements are stored in DIA format.
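DIA storage keeps each non-zero diagonal as a contiguous array of length n. The fragment below fills a tridiagonal matrix as an example; the layout assumption (diagonal k occupies diaA((k-1)*n+1 : k*n), entries outside the global matrix zero-padded) is an illustration only and should be checked against the DIA format definition of this library.

```fortran
! Sketch (assumed layout): 1D Poisson-type tridiagonal matrix in DIA format.
integer i
nnd = 3
allocate(offset(nnd))
allocate(diaA(n*nnd))
offset = (/ -1, 0, 1 /)   ! sub-, main and super-diagonal
diaA = 0.0d0              ! pad entries outside the global range with zero
do i = 1, n
   if (istart+i-1 > 1)  diaA(i)     = -1.0d0   ! sub-diagonal
   diaA(n+i)                        =  2.0d0   ! main diagonal
   if (istart+i-1 < gn) diaA(2*n+i) = -1.0d0   ! super-diagonal
end do
```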
| Parameter(dimension) | Type | Input/Output | Description |
|---|---|---|---|
| icomm | integer | in | MPI communicator |
| vecx(n) | double precision | in/out | in: initial vector, out: solution vector |
| vecb(n) | double precision | in | Right hand side vector |
| n | integer*4 / integer*8 | in | Size of vector on each process |
| gn | integer*4 / integer*8 | in | Total size of vector |
| istart | integer*4 / integer*8 | in | Start line of matrix on each process |
| diaA(n*nnd) | double precision | in | Non-zero elements of matrix stored in DIA format |
| offset(nnd) | integer*4 / integer*8 | in | Offset of each non-zero diagonal elements array in DIA format |
| nnd | integer*4 / integer*8 | in | Number of diagonal elements arrays in DIA format |
| ipreflag | integer | in | Flag for preconditioner (0: none, 1: point Jacobi, 2: block Jacobi, 3: additive Schwarz) |
| ilu_method | integer | in | Flag for incomplete LU factorization (0: ILU(0), 1: D-ILU (DIA components match), 2: D-ILU (element sums in a row match); only 1 and 2 are available for additive Schwarz) |
| iflagAS | integer | in | Flag for the additive Schwarz method (1: BASIC, 2: RESTRICT, 3: INTERPOLATE, 4: NONE) |
| itrmax | integer | in/out | in: maximum number of iterations, out: number of iterations |
| rtolmax | double precision | in | Convergence criterion (norm of relative residual error) |
| reshistory(itrmax) | double precision | out | History of relative residual error |
| iovlflag | integer | in | Flag for communication-computation overlap (0: none, 1: all processes, 2: each process) |
| precon_thblock | integer | in | Number of blocks for each process of Block Jacobi preconditioning and Additive Schwarz preconditioning |
| independ_nvec | integer | in | Size of diagonal blocks to be calculated independently in subdivision preconditioning |
| nBlock | integer | in | Number of rows allocated to the thread cyclically |
| iret | integer | out | Error code (0: normal) |

## 6.8 parcel_dbicgstab_dia

• Interface

```
call parcel_dbicgstab_dia( icomm, vecx, vecb, n, gn, istart,
                           diaA, offset, nnd,
                           ipreflag, ilu_method, addL, iflagAS,
                           itrmax, rtolmax, reshistory, iovlflag,
                           precon_thblock, independ_nvec, nBlock,
                           iret )
```

• Function

A simultaneous linear equation system Ax = b is solved by the stabilized biconjugate gradient method (Bi-CGSTAB method). Non-zero elements are stored in DIA format.
| Parameter(dimension) | Type | Input/Output | Description |
|---|---|---|---|
| icomm | integer | in | MPI communicator |
| vecx(n) | double precision | in/out | in: initial vector, out: solution vector |
| vecb(n) | double precision | in | Right hand side vector |
| n | integer*4 / integer*8 | in | Size of vector on each process |
| gn | integer*4 / integer*8 | in | Total size of vector |
| istart | integer*4 / integer*8 | in | Start line of matrix on each process |
| diaA(n*nnd) | double precision | in | Non-zero elements of matrix stored in DIA format |
| offset(nnd) | integer*4 / integer*8 | in | Offset of each non-zero diagonal elements array in DIA format |
| nnd | integer*4 / integer*8 | in | Number of diagonal elements arrays in DIA format |
| ipreflag | integer | in | Flag for preconditioner (0: none, 1: point Jacobi, 2: block Jacobi, 3: additive Schwarz) |
| ilu_method | integer | in | Flag for incomplete LU factorization (0: ILU(0), 1: D-ILU (DIA components match), 2: D-ILU (element sums in a row match); only 1 and 2 are available for additive Schwarz) |
| iflagAS | integer | in | Flag for the additive Schwarz method (1: BASIC, 2: RESTRICT, 3: INTERPOLATE, 4: NONE) |
| itrmax | integer | in/out | in: maximum number of iterations, out: number of iterations |
| rtolmax | double precision | in | Convergence criterion (norm of relative residual error) |
| reshistory(itrmax) | double precision | out | History of relative residual error |
| iovlflag | integer | in | Flag for communication-computation overlap (0: none, 1: all processes, 2: each process) |
| precon_thblock | integer | in | Number of blocks for each process of Block Jacobi preconditioning and Additive Schwarz preconditioning |
| independ_nvec | integer | in | Size of diagonal blocks to be calculated independently in subdivision preconditioning |
| nBlock | integer | in | Number of rows allocated to the thread cyclically |
| iret | integer | out | Error code (0: normal) |

## 6.9 parcel_dgmres_dia

• Interface

```
call parcel_dgmres_dia( icomm, vecx, vecb, n, gn, istart,
                        diaA, offset, nnd,
                        ipreflag, ilu_method, addL, iflagAS,
                        itrmax, rtolmax, reshistory, iovlflag,
                        precon_thblock, independ_nvec, nBlock,
                        gmres_m, gmres_GSflag,
                        iret )
```

• Function

A simultaneous linear equation system Ax = b is solved by the generalized minimum residual method (GMRES(m) method). Non-zero elements are stored in DIA format.
| Parameter(dimension) | Type | Input/Output | Description |
|---|---|---|---|
| icomm | integer | in | MPI communicator |
| vecx(n) | double precision | in/out | in: initial vector, out: solution vector |
| vecb(n) | double precision | in | Right hand side vector |
| n | integer*4 / integer*8 | in | Size of vector on each process |
| gn | integer*4 / integer*8 | in | Total size of vector |
| istart | integer*4 / integer*8 | in | Start line of matrix on each process |
| diaA(n*nnd) | double precision | in | Non-zero elements of matrix stored in DIA format |
| offset(nnd) | integer*4 / integer*8 | in | Offset of each non-zero diagonal elements array in DIA format |
| nnd | integer*4 / integer*8 | in | Number of diagonal elements arrays in DIA format |
| ipreflag | integer | in | Flag for preconditioner (0: none, 1: point Jacobi, 2: block Jacobi, 3: additive Schwarz) |
| ilu_method | integer | in | Flag for incomplete LU factorization (0: ILU(0), 1: D-ILU (DIA components match), 2: D-ILU (element sums in a row match); only 1 and 2 are available for additive Schwarz) |
| iflagAS | integer | in | Flag for the additive Schwarz method (1: BASIC, 2: RESTRICT, 3: INTERPOLATE, 4: NONE) |
| itrmax | integer | in/out | in: maximum number of iterations, out: number of iterations |
| rtolmax | double precision | in | Convergence criterion (norm of relative residual error) |
| reshistory(itrmax) | double precision | out | History of relative residual error |
| iovlflag | integer | in | Flag for communication-computation overlap (0: none, 1: all processes, 2: each process) |
| precon_thblock | integer | in | Number of blocks for each process of Block Jacobi preconditioning and Additive Schwarz preconditioning |
| independ_nvec | integer | in | Size of diagonal blocks to be calculated independently in subdivision preconditioning |
| nBlock | integer | in | Number of rows allocated to the thread cyclically |
| gmres_m | integer | in | Number of iterations until restart |
| gmres_GSflag | integer | in | Flag for orthogonalization algorithm (1: MGS, 2: CGS) |
| iret | integer | out | Error code (0: normal) |

## 6.10 parcel_dcagmres_dia

• Interface

```
call parcel_dcagmres_dia( icomm, vecx, vecb, n, gn, istart,
                          diaA, offset, nnd,
                          ipreflag, ilu_method, addL, iflagAS,
                          itrmax, rtolmax, reshistory, iovlflag,
                          precon_thblock, independ_nvec, nBlock,
                          cagmres_sstep, cagmres_tstep,
                          cagmres_basis, cagmres_QRflag,
                          iret )
```

• Function

A simultaneous linear equation system Ax = b is solved by the communication reduced generalized minimum residual method (CA-GMRES(s,t) method). Non-zero elements are stored in DIA format.
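In CA-GMRES(s,t), s matrix-vector products are performed per global synchronization and t outer iterations form one restart cycle, so the restart length s*t plays the role of gmres_m in GMRES(m). A hedged sketch of a setting comparable to GMRES(40), with illustrative values:

```fortran
! Sketch (assumed values): restart length s*t = 40, comparable to GMRES(40).
cagmres_sstep = 4    ! s: communication avoiding steps
cagmres_tstep = 10   ! t: outer iterations per restart cycle
cagmres_basis = 1    ! Newton basis, often preferred for larger s
cagmres_QRflag = 3   ! TSQR
```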
| Parameter(dimension) | Type | Input/Output | Description |
|---|---|---|---|
| icomm | integer | in | MPI communicator |
| vecx(n) | double precision | in/out | in: initial vector, out: solution vector |
| vecb(n) | double precision | in | Right hand side vector |
| n | integer*4 / integer*8 | in | Size of vector on each process |
| gn | integer*4 / integer*8 | in | Total size of vector |
| istart | integer*4 / integer*8 | in | Start line of matrix on each process |
| diaA(n*nnd) | double precision | in | Non-zero elements of matrix stored in DIA format |
| offset(nnd) | integer*4 / integer*8 | in | Offset of each non-zero diagonal elements array in DIA format |
| nnd | integer*4 / integer*8 | in | Number of diagonal elements arrays in DIA format |
| ipreflag | integer | in | Flag for preconditioner (0: none, 1: point Jacobi, 2: block Jacobi, 3: additive Schwarz) |
| ilu_method | integer | in | Flag for incomplete LU factorization (0: ILU(0), 1: D-ILU (DIA components match), 2: D-ILU (element sums in a row match); only 1 and 2 are available for additive Schwarz) |
| iflagAS | integer | in | Flag for the additive Schwarz method (1: BASIC, 2: RESTRICT, 3: INTERPOLATE, 4: NONE) |
| itrmax | integer | in/out | in: maximum number of iterations, out: number of iterations |
| rtolmax | double precision | in | Convergence criterion (norm of relative residual error) |
| reshistory(itrmax) | double precision | out | History of relative residual error |
| iovlflag | integer | in | Flag for communication-computation overlap (0: none, 1: all processes, 2: each process) |
| precon_thblock | integer | in | Number of blocks for each process of Block Jacobi preconditioning and Additive Schwarz preconditioning |
| independ_nvec | integer | in | Size of diagonal blocks to be calculated independently in subdivision preconditioning |
| nBlock | integer | in | Number of rows allocated to the thread cyclically |
| cagmres_sstep | integer | in | Number of communication avoiding steps s in the CA-GMRES method |
| cagmres_tstep | integer | in | Number of outer iterations t in the CA-GMRES method (restart length = st) |
| cagmres_basis | integer | in | Flag for basis vector in the CA-GMRES method (0: monomial basis, 1: Newton basis) |
| cagmres_QRflag | integer | in | Flag for QR factorization in the CA-GMRES method (1: MGS, 2: CGS, 3: TSQR, 4: CholeskyQR, 5: CholeskyQR2) |
| iret | integer | out | Error code (0: normal) |

## 6.11 parcel_dcbcg_dia

• Interface

```
call parcel_dcbcg_dia( icomm, vecx, vecb, n, gn, istart,
                       diaA, offset, nnd,
                       ipreflag, ilu_method, addL, iflagAS,
                       itrmax, rtolmax, reshistory, iovlflag,
                       precon_thblock, independ_nvec, nBlock,
                       cbcg_kstep, cbcg_Eigenflag, power_method_itrmax,
                       caarnoldi_sstep, caarnoldi_tstep,
                       caarnoldi_basis, caarnoldi_QRflag,
                       iret )
```

• Function

A simultaneous linear equation system Ax = b is solved by the Chebyshev basis conjugate gradient method (CBCG method). Non-zero elements are stored in DIA format.
| Parameter(dimension) | Type | Input/Output | Description |
|---|---|---|---|
| icomm | integer | in | MPI communicator |
| vecx(n) | double precision | in/out | in: initial vector, out: solution vector |
| vecb(n) | double precision | in | Right hand side vector |
| n | integer*4 / integer*8 | in | Size of vector on each process |
| gn | integer*4 / integer*8 | in | Total size of vector |
| istart | integer*4 / integer*8 | in | Start line of matrix on each process |
| diaA(n*nnd) | double precision | in | Non-zero elements of matrix stored in DIA format |
| offset(nnd) | integer*4 / integer*8 | in | Offset of each non-zero diagonal elements array in DIA format |
| nnd | integer*4 / integer*8 | in | Number of diagonal elements arrays in DIA format |
| ipreflag | integer | in | Flag for preconditioner (0: none, 1: point Jacobi, 2: block Jacobi, 3: additive Schwarz) |
| ilu_method | integer | in | Flag for incomplete LU factorization (0: ILU(0), 1: D-ILU (DIA components match), 2: D-ILU (element sums in a row match); only 1 and 2 are available for additive Schwarz) |
| iflagAS | integer | in | Flag for the additive Schwarz method (1: BASIC, 2: RESTRICT, 3: INTERPOLATE, 4: NONE) |
| itrmax | integer | in/out | in: maximum number of iterations, out: number of iterations |
| rtolmax | double precision | in | Convergence criterion (norm of relative residual error) |
| reshistory(itrmax) | double precision | out | History of relative residual error |
| iovlflag | integer | in | Flag for communication-computation overlap (0: none, 1: all processes, 2: each process) |
| precon_thblock | integer | in | Number of blocks for each process of Block Jacobi preconditioning and Additive Schwarz preconditioning |
| independ_nvec | integer | in | Size of diagonal blocks to be calculated independently in subdivision preconditioning |
| nBlock | integer | in | Number of rows allocated to the thread cyclically |
| cbcg_kstep | integer | in | Number of communication avoiding steps |
| cbcg_Eigenflag | integer | in | Flag for eigenvalue computation (1: power method, 2: CA-ARNOLDI) |
| power_method_itrmax | integer | in | Maximum number of iterations in the power method |
| caarnoldi_sstep | integer | in | Number of communication avoiding steps s of the CA-ARNOLDI method |
| caarnoldi_tstep | integer | in | Number of outer iterations t in the CA-ARNOLDI method (restart length = st) |
| caarnoldi_basis | integer | in | Flag for basis vector in the CA-ARNOLDI method (0: monomial basis, 1: Newton basis) |
| caarnoldi_QRflag | integer | in | Flag for QR factorization in the CA-ARNOLDI method (1: MGS, 2: CGS, 3: TSQR, 4: CholeskyQR, 5: CholeskyQR2) |
| iret | integer | out | Error code (0: normal) |

## 6.12 parcel_dcaarnoldi_dia

• Interface

```
call parcel_dcaarnoldi_dia( icomm, vecx, vecb, n, gn, istart,
                            diaA, offset, nnd,
                            ipreflag, ilu_method, addL, iflagAS,
                            itrmax, iovlflag,
                            precon_thblock, independ_nvec, nBlock,
                            caarnoldi_sstep, caarnoldi_tstep,
                            caarnoldi_basis, caarnoldi_QRflag,
                            Evalr, Evali, Evec, Eerr, EmaxID, EminID,
                            iret )
```

• Function

An eigenvalue problem is solved by the communication avoiding Arnoldi method (CA-Arnoldi((s,t))). Non-zero elements are stored in DIA format.
| Parameter(dimension) | Type | Input/Output | Description |
|---|---|---|---|
| icomm | integer | in | MPI communicator |
| vecx(n) | double precision | in/out | in: initial vector, out: solution vector |
| vecb(n) | double precision | in | Right hand side vector |
| n | integer*4 / integer*8 | in | Size of vector on each process |
| gn | integer*4 / integer*8 | in | Total size of vector |
| istart | integer*4 / integer*8 | in | Start line of matrix on each process |
| diaA(n*nnd) | double precision | in | Non-zero elements of matrix stored in DIA format |
| offset(nnd) | integer*4 / integer*8 | in | Offset of each non-zero diagonal elements array in DIA format |
| nnd | integer*4 / integer*8 | in | Number of diagonal elements arrays in DIA format |
| ipreflag | integer | in | Flag for preconditioner (0: none, 1: point Jacobi, 2: block Jacobi, 3: additive Schwarz) |
| ilu_method | integer | in | Flag for incomplete LU factorization (0: ILU(0), 1: D-ILU (DIA components match), 2: D-ILU (element sums in a row match); only 1 and 2 are available for additive Schwarz) |
| iflagAS | integer | in | Flag for the additive Schwarz method (1: BASIC, 2: RESTRICT, 3: INTERPOLATE, 4: NONE) |
| itrmax | integer | in/out | in: maximum number of iterations, out: number of iterations |
| iovlflag | integer | in | Flag for communication-computation overlap (0: none, 1: all processes, 2: each process) |
| precon_thblock | integer | in | Number of blocks for each process of Block Jacobi preconditioning and Additive Schwarz preconditioning |
| independ_nvec | integer | in | Size of diagonal blocks to be calculated independently in subdivision preconditioning |
| nBlock | integer | in | Number of rows allocated to the thread cyclically |
| caarnoldi_sstep | integer | in | Number of communication avoiding steps s in the CA-ARNOLDI method |
| caarnoldi_tstep | integer | in | Number of outer iterations t in the CA-ARNOLDI method (restart length = st) |
| caarnoldi_basis | integer | in | Flag for basis vector in the CA-ARNOLDI method (0: monomial basis, 1: Newton basis) |
| caarnoldi_QRflag | integer | in | Flag for QR factorization in the CA-ARNOLDI method (1: MGS, 2: CGS, 3: TSQR, 4: CholeskyQR, 5: CholeskyQR2) |
| Evalr(caarnoldi_sstep*caarnoldi_tstep) | double precision | out | Real part of the eigenvalue |
| Evali(caarnoldi_sstep*caarnoldi_tstep) | double precision | out | Imaginary part of the eigenvalue |
| Evec(n*caarnoldi_sstep*caarnoldi_tstep) | double precision | out | Eigenvectors |
| Eerr(caarnoldi_sstep*caarnoldi_tstep) | double precision | out | Error norm $\left\Vert Ax - \lambda x \right\Vert / \left\Vert \lambda x \right\Vert$ |
| EmaxID | integer | out | ID number of the maximum eigenvalue |
| EminID | integer | out | ID number of the minimum eigenvalue |
| iret | integer | out | Error code (0: normal) |

## 6.13 parcel_dcg_ddm

• Interface

```
call parcel_dcg_ddm( icomm, x, b, m,
                     nnd, ioff_dia, val_dia,
                     num_neighbor_ranks, neighbor_ranks, margin, ncomp_grids,
                     ndiv_grids, num_grids, ioff_grids,
                     div_direc_th,
                     ipreflag, addL, iflagAS,
                     itrmax, rtolmax, reshistory, iovlflag,
                     precon_thblock, independ_nvec, nBlock,
                     iret )
```

• Function

A simultaneous linear equation system Ax = b is solved by the conjugate gradient method (CG method). Non-zero elements are stored in DDM format.
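The DDM format describes the matrix by stencil offsets on a structured grid. The following sketch sets up the grid descriptors for a 3D 7-point stencil; the ordering of the stencil entries and the offset convention are assumptions for illustration and should be checked against the DDM format definition.

```fortran
! Sketch (assumed convention): DDM descriptors for a 3D 7-point stencil
! on an nx x ny x nz local grid with a 2x2x2 process decomposition.
integer ncomp_grids, nnd, margin, div_direc_th
integer ndiv_grids(3), num_grids(3)
integer ioff_grids(7,3)

ncomp_grids = 3
nnd = 7
ndiv_grids = (/ 2, 2, 2 /)
num_grids  = (/ nx, ny, nz /)
! stencil offsets: center, -x, +x, -y, +y, -z, +z
ioff_grids(1:7,1) = (/ 0, -1, 1,  0, 0,  0, 0 /)
ioff_grids(1:7,2) = (/ 0,  0, 0, -1, 1,  0, 0 /)
ioff_grids(1:7,3) = (/ 0,  0, 0,  0, 0, -1, 1 /)
margin = 1           ! maximum absolute value of ioff_grids
div_direc_th = 3     ! thread parallelization in the z direction
```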
| Parameter(dimension) | Type | Input/Output | Description |
|---|---|---|---|
| icomm | integer | in | MPI communicator |
| x(m) | double precision | in/out | in: initial vector, out: solution vector |
| b(m) | double precision | in | Right hand side vector |
| m | integer | in | Size of vector on each process |
| nnd | integer | in | Number of diagonal elements arrays on each process |
| ioff_dia(nnd) | integer | in | Offset of each non-zero diagonal elements array in DDM format |
| val_dia(m*nnd) | double precision | in | Non-zero elements of matrix stored in DDM format |
| num_neighbor_ranks | integer | in | Number of neighboring processes |
| neighbor_ranks(num_neighbor_ranks) | integer | in | List of neighboring processes |
| margin | integer | in | Maximum absolute value of ioff_grids |
| ncomp_grids | integer | in | Dimension of structured grids |
| ndiv_grids(ncomp_grids) | integer | in | Number of decomposed domains in each direction |
| num_grids(ncomp_grids) | integer | in | Number of grids on each process |
| ioff_grids(nnd,ncomp_grids) | integer | in | Offset of stencil data location in each direction |
| div_direc_th | integer | in | Direction of thread parallelization (1: x, 2: y, 3: z) |
| ipreflag | integer | in | Flag for preconditioner (0: none, 1: point Jacobi, 2: block Jacobi, 3: additive Schwarz) |
| iflagAS | integer | in | Flag for the additive Schwarz method (1: BASIC, 2: RESTRICT, 3: INTERPOLATE, 4: NONE) |
| itrmax | integer | in/out | in: maximum number of iterations, out: number of iterations |
| rtolmax | double precision | in | Convergence criterion (norm of relative residual error) |
| reshistory(itrmax) | double precision | out | History of relative residual error |
| iovlflag | integer | in | Flag for communication-computation overlap (0: none, 1: all processes, 2: each process) |
| precon_thblock | integer | in | Number of blocks for each process of Block Jacobi preconditioning and Additive Schwarz preconditioning |
| independ_nvec | integer | in | Size of diagonal blocks to be calculated independently in subdivision preconditioning |
| nBlock | integer | in | Number of rows allocated to the thread cyclically |
| iret | integer | out | Error code (0: normal) |

## 6.14 parcel_dbicgstab_ddm

• Interface

```
call parcel_dbicgstab_ddm( icomm, x, b, m,
                           nnd, ioff_dia, val_dia,
                           num_neighbor_ranks, neighbor_ranks, margin, ncomp_grids,
                           ndiv_grids, num_grids, ioff_grids,
                           div_direc_th,
                           ipreflag, addL, iflagAS,
                           itrmax, rtolmax, reshistory, iovlflag,
                           precon_thblock, independ_nvec, nBlock,
                           iret )
```

• Function

A simultaneous linear equation system Ax = b is solved by the stabilized biconjugate gradient method (Bi-CGSTAB method). Non-zero elements are stored in DDM format.
| Parameter(dimension) | Type | Input/Output | Description |
|---|---|---|---|
| icomm | integer | in | MPI communicator |
| x(m) | double precision | in/out | in: initial vector, out: solution vector |
| b(m) | double precision | in | Right hand side vector |
| m | integer | in | Size of vector on each process |
| nnd | integer | in | Number of diagonal elements arrays on each process |
| ioff_dia(nnd) | integer | in | Offset of each non-zero diagonal elements array in DDM format |
| val_dia(m*nnd) | double precision | in | Non-zero elements of matrix stored in DDM format |
| num_neighbor_ranks | integer | in | Number of neighboring processes |
| neighbor_ranks(num_neighbor_ranks) | integer | in | List of neighboring processes |
| margin | integer | in | Maximum absolute value of ioff_grids |
| ncomp_grids | integer | in | Dimension of structured grids |
| ndiv_grids(ncomp_grids) | integer | in | Number of decomposed domains in each direction |
| num_grids(ncomp_grids) | integer | in | Number of grids on each process |
| ioff_grids(nnd,ncomp_grids) | integer | in | Offset of stencil data location in each direction |
| div_direc_th | integer | in | Direction of thread parallelization (1: x, 2: y, 3: z) |
| ipreflag | integer | in | Flag for preconditioner (0: none, 1: point Jacobi, 2: block Jacobi, 3: additive Schwarz) |
| iflagAS | integer | in | Flag for the additive Schwarz method (1: BASIC, 2: RESTRICT, 3: INTERPOLATE, 4: NONE) |
| itrmax | integer | in/out | in: maximum number of iterations, out: number of iterations |
| rtolmax | double precision | in | Convergence criterion (norm of relative residual error) |
| reshistory(itrmax) | double precision | out | History of relative residual error |
| iovlflag | integer | in | Flag for communication-computation overlap (0: none, 1: all processes, 2: each process) |
| precon_thblock | integer | in | Number of blocks for each process of Block Jacobi preconditioning and Additive Schwarz preconditioning |
| independ_nvec | integer | in | Size of diagonal blocks to be calculated independently in subdivision preconditioning |
| nBlock | integer | in | Number of rows allocated to the thread cyclically |
| iret | integer | out | Error code (0: normal) |

## 6.15 parcel_dgmres_ddm

• Interface

```
call parcel_dgmres_ddm( icomm, x, b, m,
                        nnd, ioff_dia, val_dia,
                        num_neighbor_ranks, neighbor_ranks, margin, ncomp_grids,
                        ndiv_grids, num_grids, ioff_grids,
                        div_direc_th,
                        ipreflag, addL, iflagAS,
                        itrmax, rtolmax, reshistory, iovlflag,
                        precon_thblock, independ_nvec, nBlock,
                        gmres_m, gmres_GSflag,
                        iret )
```

• Function

A simultaneous linear equation system Ax = b is solved by the generalized minimum residual method (GMRES(m) method). Non-zero elements are stored in DDM format.
| Parameter(dimension) | Type | Input/Output | Description |
|---|---|---|---|
| icomm | integer | in | MPI communicator |
| x(m) | double precision | in/out | in: initial vector, out: solution vector |
| b(m) | double precision | in | Right hand side vector |
| m | integer | in | Size of vector on each process |
| nnd | integer | in | Number of diagonal elements arrays on each process |
| ioff_dia(nnd) | integer | in | Offset of each non-zero diagonal elements array in DDM format |
| val_dia(m*nnd) | double precision | in | Non-zero elements of matrix stored in DDM format |
| num_neighbor_ranks | integer | in | Number of neighboring processes |
| neighbor_ranks(num_neighbor_ranks) | integer | in | List of neighboring processes |
| margin | integer | in | Maximum absolute value of ioff_grids |
| ncomp_grids | integer | in | Dimension of structured grids |
| ndiv_grids(ncomp_grids) | integer | in | Number of decomposed domains in each direction |
| num_grids(ncomp_grids) | integer | in | Number of grids on each process |
| ioff_grids(nnd,ncomp_grids) | integer | in | Offset of stencil data location in each direction |
| div_direc_th | integer | in | Direction of thread parallelization (1: x, 2: y, 3: z) |
| ipreflag | integer | in | Flag for preconditioner (0: none, 1: point Jacobi, 2: block Jacobi, 3: additive Schwarz) |
| iflagAS | integer | in | Flag for the additive Schwarz method (1: BASIC, 2: RESTRICT, 3: INTERPOLATE, 4: NONE) |
| itrmax | integer | in/out | in: maximum number of iterations, out: number of iterations |
| rtolmax | double precision | in | Convergence criterion (norm of relative residual error) |
| reshistory(itrmax) | double precision | out | History of relative residual error |
| iovlflag | integer | in | Flag for communication-computation overlap (0: none, 1: all processes, 2: each process) |
| precon_thblock | integer | in | Number of blocks for each process of Block Jacobi preconditioning and Additive Schwarz preconditioning |
| independ_nvec | integer | in | Size of diagonal blocks to be calculated independently in subdivision preconditioning |
| nBlock | integer | in | Number of rows allocated to the thread cyclically |
| gmres_m | integer | in | Number of iterations until restart |
| gmres_GSflag | integer | in | Flag for orthogonalization algorithm (1: MGS, 2: CGS) |
| iret | integer | out | Error code (0: normal) |

## 6.16 parcel_dcagmres_ddm

• Interface

```
call parcel_dcagmres_ddm( icomm, x, b, m,
                          nnd, ioff_dia, val_dia,
                          num_neighbor_ranks, neighbor_ranks, margin, ncomp_grids,
                          ndiv_grids, num_grids, ioff_grids,
                          div_direc_th,
                          ipreflag, addL, iflagAS,
                          itrmax, rtolmax, reshistory, iovlflag,
                          precon_thblock, independ_nvec, nBlock,
                          cagmres_sstep, cagmres_tstep,
                          cagmres_basis, cagmres_QRflag,
                          iret )
```

• Function

A simultaneous linear equation system Ax = b is solved by the communication reduced generalized minimum residual method (CA-GMRES(s,t) method). Non-zero elements are stored in DDM format.
| Parameter(dimension) | Type | Input/Output | Description |
|---|---|---|---|
| icomm | integer | in | MPI communicator |
| x(m) | double precision | in/out | in: initial vector, out: solution vector |
| b(m) | double precision | in | Right hand side vector |
| m | integer | in | Size of vector on each process |
| nnd | integer | in | Number of diagonal elements arrays on each process |
| ioff_dia(nnd) | integer | in | Offset of each non-zero diagonal elements array in DDM format |
| val_dia(m*nnd) | double precision | in | Non-zero elements of matrix stored in DDM format |
| num_neighbor_ranks | integer | in | Number of neighboring processes |
| neighbor_ranks(num_neighbor_ranks) | integer | in | List of neighboring processes |
| margin | integer | in | Maximum absolute value of ioff_grids |
| ncomp_grids | integer | in | Dimension of structured grids |
| ndiv_grids(ncomp_grids) | integer | in | Number of decomposed domains in each direction |
| num_grids(ncomp_grids) | integer | in | Number of grids on each process |
| ioff_grids(nnd,ncomp_grids) | integer | in | Offset of stencil data location in each direction |
| div_direc_th | integer | in | Direction of thread parallelization (1: x, 2: y, 3: z) |
| ipreflag | integer | in | Flag for preconditioner (0: none, 1: point Jacobi, 2: block Jacobi, 3: additive Schwarz) |
| iflagAS | integer | in | Flag for the additive Schwarz method (1: BASIC, 2: RESTRICT, 3: INTERPOLATE, 4: NONE) |
| itrmax | integer | in/out | in: maximum number of iterations, out: number of iterations |
| rtolmax | double precision | in | Convergence criterion (norm of relative residual error) |
| reshistory(itrmax) | double precision | out | History of relative residual error |
| iovlflag | integer | in | Flag for communication-computation overlap (0: none, 1: all processes, 2: each process) |
| precon_thblock | integer | in | Number of blocks for each process of Block Jacobi preconditioning and Additive Schwarz preconditioning |
| independ_nvec | integer | in | Size of diagonal blocks to be calculated independently in subdivision preconditioning |
| nBlock | integer | in | Number of rows allocated to the thread cyclically |
| cagmres_sstep | integer | in | Number of communication avoiding steps s in the CA-GMRES method |
| cagmres_tstep | integer | in | Number of outer iterations t in the CA-GMRES method (restart length = st) |
| cagmres_basis | integer | in | Flag for basis vector in the CA-GMRES method (0: monomial basis, 1: Newton basis) |
| cagmres_QRflag | integer | in | Flag for QR factorization in the CA-GMRES method (1: MGS, 2: CGS, 3: TSQR, 4: CholeskyQR, 5: CholeskyQR2) |
| iret | integer | out | Error code (0: normal) |

## 6.17 parcel_dcbcg_ddm

• Interface

```
call parcel_dcbcg_ddm( icomm, x, b, m,
                       nnd, ioff_dia, val_dia,
                       num_neighbor_ranks, neighbor_ranks, margin, ncomp_grids,
                       ndiv_grids, num_grids, ioff_grids,
                       div_direc_th,
                       ipreflag, addL, iflagAS,
                       itrmax, rtolmax, reshistory, iovlflag,
                       precon_thblock, independ_nvec, nBlock,
                       cbcg_kstep, cbcg_Eigenflag, power_method_itrmax,
                       caarnoldi_sstep, caarnoldi_tstep,
                       caarnoldi_basis, caarnoldi_QRflag,
                       iret )
```

• Function

A simultaneous linear equation system Ax = b is solved by the Chebyshev basis conjugate gradient method (CBCG method). Non-zero elements are stored in DDM format.
| Parameter(dimension) | Type | Input/Output | Description |
|---|---|---|---|
| icomm | integer | in | MPI communicator |
| x(m) | double precision | in/out | in: initial vector, out: solution vector |
| b(m) | double precision | in | Right hand side vector |
| m | integer | in | Size of vector on each process |
| nnd | integer | in | Number of diagonal elements arrays on each process |
| ioff_dia(nnd) | integer | in | Offset of each non-zero diagonal elements array in DDM format |
| val_dia(m*nnd) | double precision | in | Non-zero elements of matrix stored in DDM format |
| num_neighbor_ranks | integer | in | Number of neighboring processes |
| neighbor_ranks(num_neighbor_ranks) | integer | in | List of neighboring processes |
| margin | integer | in | Maximum absolute value of ioff_grids |
| ncomp_grids | integer | in | Dimension of structured grids |
| ndiv_grids(ncomp_grids) | integer | in | Number of decomposed domains in each direction |
| num_grids(ncomp_grids) | integer | in | Number of grids on each process |
| ioff_grids(nnd,ncomp_grids) | integer | in | Offset of stencil data location in each direction |
| div_direc_th | integer | in | Direction of thread parallelization (1: x, 2: y, 3: z) |
| ipreflag | integer | in | Flag for preconditioner (0: none, 1: point Jacobi, 2: block Jacobi, 3: additive Schwarz) |
| iflagAS | integer | in | Flag for the additive Schwarz method (1: BASIC, 2: RESTRICT, 3: INTERPOLATE, 4: NONE) |
| itrmax | integer | in/out | in: maximum number of iterations, out: number of iterations |
| rtolmax | double precision | in | Convergence criterion (norm of relative residual error) |
| reshistory(itrmax) | double precision | out | History of relative residual error |
| iovlflag | integer | in | Flag for communication-computation overlap (0: none, 1: all processes, 2: each process) |
| precon_thblock | integer | in | Number of blocks for each process of Block Jacobi preconditioning and Additive Schwarz preconditioning |
| independ_nvec | integer | in | Size of diagonal blocks to be calculated independently in subdivision preconditioning |
| nBlock | integer | in | Number of rows allocated to the thread cyclically |
| cbcg_kstep | integer | in | Number of communication avoiding steps |
| cbcg_Eigenflag | integer | in | Flag for eigenvalue computation (1: power method, 2: CA-ARNOLDI) |
| power_method_itrmax | integer | in | Maximum number of iterations in the power method |
| caarnoldi_sstep | integer | in | Number of communication avoiding steps s of the CA-ARNOLDI method |
| caarnoldi_tstep | integer | in | Number of outer iterations t in the CA-ARNOLDI method (restart length = st) |
| caarnoldi_basis | integer | in | Flag for basis vector in the CA-ARNOLDI method (0: monomial basis, 1: Newton basis) |
| caarnoldi_QRflag | integer | in | Flag for QR factorization in the CA-ARNOLDI method (1: MGS, 2: CGS, 3: TSQR, 4: CholeskyQR, 5: CholeskyQR2) |
| iret | integer | out | Error code (0: normal) |

## 6.18 parcel_dcaarnoldi_ddm

• Interface

```
call parcel_dcaarnoldi_ddm( icomm, x, b, m,
                            nnd, ioff_dia, val_dia,
                            num_neighbor_ranks, neighbor_ranks, margin, ncomp_grids,
                            ndiv_grids, num_grids, ioff_grids,
                            div_direc_th,
                            ipreflag, addL, iflagAS,
                            itrmax, rtolmax, reshistory, iovlflag,
                            precon_thblock, independ_nvec, nBlock,
                            caarnoldi_sstep, caarnoldi_tstep,
                            caarnoldi_basis, caarnoldi_QRflag,
                            Evalr, Evali, Evec, Eerr, EmaxID, EminID,
                            iret )
```

• Function

An eigenvalue problem is solved by the communication avoiding Arnoldi method (CA-Arnoldi((s,t))). Non-zero elements are stored in DDM format.
| Parameter(dimension) | Type | Input/Output | Description |
|---|---|---|---|
| icomm | integer | in | MPI communicator |
| x(m) | double precision | in/out | in: initial vector, out: solution vector |
| b(m) | double precision | in | Right hand side vector |
| m | integer | in | Size of vector on each process |
| nnd | integer | in | Number of diagonal elements arrays on each process |
| ioff_dia(nnd) | integer | in | Offset of each non-zero diagonal elements array in DDM format |
| val_dia(m*nnd) | double precision | in | Non-zero elements of matrix stored in DDM format |
| num_neighbor_ranks | integer | in | Number of neighboring processes |
| neighbor_ranks(num_neighbor_ranks) | integer | in | List of neighboring processes |
| margin | integer | in | Maximum absolute value of ioff_grids |
| ncomp_grids | integer | in | Dimension of structured grids |
| ndiv_grids(ncomp_grids) | integer | in | Number of decomposed domains in each direction |
| num_grids(ncomp_grids) | integer | in | Number of grids on each process |
| ioff_grids(nnd,ncomp_grids) | integer | in | Offset of stencil data location in each direction |
| div_direc_th | integer | in | Direction of thread parallelization (1: x, 2: y, 3: z) |
| ipreflag | integer | in | Flag for preconditioner (0: none, 1: point Jacobi, 2: block Jacobi, 3: additive Schwarz) |
| iflagAS | integer | in | Flag for the additive Schwarz method (1: BASIC, 2: RESTRICT, 3: INTERPOLATE, 4: NONE) |
| itrmax | integer | in/out | in: maximum number of iterations, out: number of iterations |
| rtolmax | double precision | in | Convergence criterion (norm of relative residual error) |
| reshistory(itrmax) | double precision | out | History of relative residual error |
| iovlflag | integer | in | Flag for communication-computation overlap (0: none, 1: all processes, 2: each process) |
| precon_thblock | integer | in | Number of blocks for each process of Block Jacobi preconditioning and Additive Schwarz preconditioning |
| independ_nvec | integer | in | Size of diagonal blocks to be calculated independently in subdivision preconditioning |
| nBlock | integer | in | Number of rows allocated to the thread cyclically |
| caarnoldi_sstep | integer | in | Number of communication avoiding steps s in the CA-ARNOLDI method |
| caarnoldi_tstep | integer | in | Number of outer iterations t in the CA-ARNOLDI method (restart length = st) |
| caarnoldi_basis | integer | in | Flag for basis vector in the CA-ARNOLDI method (0: monomial basis, 1: Newton basis) |
| caarnoldi_QRflag | integer | in | Flag for QR factorization in the CA-ARNOLDI method (1: MGS, 2: CGS, 3: TSQR, 4: CholeskyQR, 5: CholeskyQR2) |
| Evalr(caarnoldi_sstep*caarnoldi_tstep) | double precision | out | Real part of the eigenvalue |
| Evali(caarnoldi_sstep*caarnoldi_tstep) | double precision | out | Imaginary part of the eigenvalue |
| Evec(m*caarnoldi_sstep*caarnoldi_tstep) | double precision | out | Eigenvectors |
| Eerr(caarnoldi_sstep*caarnoldi_tstep) | double precision | out | Error norm $\left\Vert Ax - \lambda x \right\Vert / \left\Vert \lambda x \right\Vert$ |
| EmaxID | integer | out | ID number of the maximum eigenvalue |
| EminID | integer | out | ID number of the minimum eigenvalue |
| iret | integer | out | Error code (0: normal) |

## 6.19 parcel_ddcg

• Interface

```
call parcel_ddcg( icomm, vecx_hi, vecx_lo, vecb_hi, vecb_lo,
                  n, gn, nnz, istart,
                  crsA_hi, crsA_lo, crsRow_ptr, crsCol,
                  ipreflag, ilu_method, addL, iflagAS,
                  itrmax, rtolmax, reshistory, iovlflag,
                  precon_thblock, independ_nvec, nBlock,
                  precision_A, precision_b, precision_x, precision_precon,
                  iret )
```

• Function

A simultaneous linear equation system Ax = b is solved by the conjugate gradient method (CG method). Non-zero elements are stored in CRS format.
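The dd-routines carry each value as an unevaluated sum of an upper and a lower double precision word (double-double). A minimal sketch follows, extending the CRS sample of Section 7.1; the values and the choice of which quantities get quad precision are illustrative assumptions. Starting from double precision data simply zero-fills the lower words.

```fortran
! Sketch (assumed values): double-double CG starting from double data.
integer precision_A, precision_b, precision_x, precision_precon

vecx_hi = 1.0d0 ; vecx_lo = 0.0d0   ! lower words are zero for
vecb_hi = 1.0d0 ; vecb_lo = 0.0d0   ! plain double precision data
crsA_lo = 0.0d0                     ! matrix lower words likewise
precision_A      = 1   ! matrix kept in double precision
precision_b      = 2   ! right hand side treated in quad precision
precision_x      = 2   ! solution treated in quad precision
precision_precon = 1   ! preconditioner in double precision

call parcel_ddcg( MPI_COMM_WORLD, vecx_hi, vecx_lo, vecb_hi, vecb_lo, &
                  n, gn, nnz, istart, crsA_hi, crsA_lo, crsRow_ptr, crsCol, &
                  ipreflag, ILU_method, addL, iflagAS, itrmax, rtolmax, &
                  reshistory, iovlflag, precon_thblock, independ_nvec, nblock, &
                  precision_A, precision_b, precision_x, precision_precon, iret )
```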
| Parameter(dimension) | Type | Input/Output | Description |
|---|---|---|---|
| icomm | integer | in | MPI communicator |
| vecx_hi(n) | double precision | in/out | in: initial vector (upper bits), out: solution vector (upper bits) |
| vecx_lo(n) | double precision | in/out | in: initial vector (lower bits), out: solution vector (lower bits) |
| vecb_hi(n) | double precision | in | Right hand side vector (upper bits) |
| vecb_lo(n) | double precision | in | Right hand side vector (lower bits) |
| n | integer*4 / integer*8 | in | Size of vector on each process |
| gn | integer*4 / integer*8 | in | Total size of vector |
| nnz | integer*4 / integer*8 | in | Number of non-zero elements on each process |
| istart | integer*4 / integer*8 | in | Start line of matrix on each process |
| crsA_hi(nnz) | double precision | in | Non-zero elements (upper bits) of matrix stored in CRS format |
| crsA_lo(nnz) | double precision | in | Non-zero elements (lower bits) of matrix stored in CRS format |
| crsRow_ptr(n+1) | integer*4 / integer*8 | in | Pointer table in CRS format |
| crsCol(nnz) | integer*4 / integer*8 | in | Column numbers of non-zero elements in CRS format |
| ipreflag | integer | in | Flag for preconditioner (0: none, 1: point Jacobi, 2: block Jacobi, 3: additive Schwarz) |
| ilu_method | integer | in | Flag for incomplete LU factorization (0: ILU(0), 1: D-ILU (DIA components match), 2: D-ILU (element sums in a row match); only 1 and 2 are available for additive Schwarz) |
| iflagAS | integer | in | Flag for the additive Schwarz method (1: BASIC, 2: RESTRICT, 3: INTERPOLATE, 4: NONE) |
| itrmax | integer | in/out | in: maximum number of iterations, out: number of iterations |
| rtolmax | double precision | in | Convergence criterion (norm of relative residual error) |
| reshistory(itrmax) | double precision | out | History of relative residual error |
| iovlflag | integer | in | Flag for communication-computation overlap (0: none, 1: all processes, 2: each process) |
| precon_thblock | integer | in | Number of blocks for each process of Block Jacobi preconditioning and Additive Schwarz preconditioning |
| independ_nvec | integer | in | Size of diagonal blocks to be calculated independently in subdivision preconditioning |
| nBlock | integer | in | Number of rows allocated to the thread cyclically |
| precision_A | integer | in | Precision of matrix (1: double precision, 2: quad precision) |
| precision_b | integer | in | Precision of right hand side vector (1: double precision, 2: quad precision) |
| precision_x | integer | in | Precision of solution vector (1: double precision, 2: quad precision) |
| precision_precon | integer | in | Precision of preconditioner matrix (1: double precision, 2: quad precision) |
| iret | integer | out | Error code (0: normal) |

## 6.20 parcel_ddbicgstab

• Interface

```
call parcel_ddbicgstab( icomm, vecx_hi, vecx_lo, vecb_hi, vecb_lo,
                        n, gn, nnz, istart,
                        crsA_hi, crsA_lo, crsRow_ptr, crsCol,
                        ipreflag, ilu_method, addL, iflagAS,
                        itrmax, rtolmax, reshistory, iovlflag,
                        precon_thblock, independ_nvec, nBlock,
                        precision_A, precision_b, precision_x, precision_precon,
                        iret )
```

• Function

A simultaneous linear equation system Ax = b is solved by the stabilized biconjugate gradient method (Bi-CGSTAB method). Non-zero elements are stored in CRS format.
| Parameter(dimension) | Type | Input/Output | Description |
|---|---|---|---|
| icomm | integer | in | MPI communicator |
| vecx_hi(n) | double precision | in/out | in: initial vector (upper bits), out: solution vector (upper bits) |
| vecx_lo(n) | double precision | in/out | in: initial vector (lower bits), out: solution vector (lower bits) |
| vecb_hi(n) | double precision | in | Right hand side vector (upper bits) |
| vecb_lo(n) | double precision | in | Right hand side vector (lower bits) |
| n | integer*4 / integer*8 | in | Size of vector on each process |
| gn | integer*4 / integer*8 | in | Total size of vector |
| nnz | integer*4 / integer*8 | in | Number of non-zero elements on each process |
| istart | integer*4 / integer*8 | in | Start line of matrix on each process |
| crsA_hi(nnz) | double precision | in | Non-zero elements (upper bits) of matrix stored in CRS format |
| crsA_lo(nnz) | double precision | in | Non-zero elements (lower bits) of matrix stored in CRS format |
| crsRow_ptr(n+1) | integer*4 / integer*8 | in | Pointer table in CRS format |
| crsCol(nnz) | integer*4 / integer*8 | in | Column numbers of non-zero elements in CRS format |
| ipreflag | integer | in | Flag for preconditioner (0: none, 1: point Jacobi, 2: block Jacobi, 3: additive Schwarz) |
| ilu_method | integer | in | Flag for incomplete LU factorization (0: ILU(0), 1: D-ILU (DIA components match), 2: D-ILU (element sums in a row match); only 1 and 2 are available for additive Schwarz) |
| iflagAS | integer | in | Flag for the additive Schwarz method (1: BASIC, 2: RESTRICT, 3: INTERPOLATE, 4: NONE) |
| itrmax | integer | in/out | in: maximum number of iterations, out: number of iterations |
| rtolmax | double precision | in | Convergence criterion (norm of relative residual error) |
| reshistory(itrmax) | double precision | out | History of relative residual error |
| iovlflag | integer | in | Flag for communication-computation overlap (0: none, 1: all processes, 2: each process) |
| precon_thblock | integer | in | Number of blocks for each process of Block Jacobi preconditioning and Additive Schwarz preconditioning |
| independ_nvec | integer | in | Size of diagonal blocks to be calculated independently in subdivision preconditioning |
| nBlock | integer | in | Number of rows allocated to the thread cyclically |
| precision_A | integer | in | Precision of matrix (1: double precision, 2: quad precision) |
| precision_b | integer | in | Precision of right hand side vector (1: double precision, 2: quad precision) |
| precision_x | integer | in | Precision of solution vector (1: double precision, 2: quad precision) |
| precision_precon | integer | in | Precision of preconditioner matrix (1: double precision, 2: quad precision) |
| iret | integer | out | Error code (0: normal) |

## 6.21 parcel_ddgmres

• Interface

```
call parcel_ddgmres( icomm, vecx_hi, vecx_lo, vecb_hi, vecb_lo,
                     n, gn, nnz, istart,
                     crsA_hi, crsA_lo, crsRow_ptr, crsCol,
                     ipreflag, ilu_method, addL, iflagAS,
                     itrmax, rtolmax, reshistory, iovlflag,
                     precon_thblock, independ_nvec, nBlock,
                     gmres_m, gmres_GSflag,
                     precision_A, precision_b, precision_x, precision_precon,
                     iret )
```

• Function

A simultaneous linear equation system Ax = b is solved by the generalized minimum residual method (GMRES(m) method). Non-zero elements are stored in CRS format.
| Parameter(dimension) | Type | Input/Output | Description |
|---|---|---|---|
| icomm | integer | in | MPI communicator |
| vecx_hi(n) | double precision | in/out | in: initial vector (upper bits), out: solution vector (upper bits) |
| vecx_lo(n) | double precision | in/out | in: initial vector (lower bits), out: solution vector (lower bits) |
| vecb_hi(n) | double precision | in | Right hand side vector (upper bits) |
| vecb_lo(n) | double precision | in | Right hand side vector (lower bits) |
| n | integer*4 / integer*8 | in | Size of vector on each process |
| gn | integer*4 / integer*8 | in | Total size of vector |
| nnz | integer*4 / integer*8 | in | Number of non-zero elements on each process |
| istart | integer*4 / integer*8 | in | Start line of matrix on each process |
| crsA_hi(nnz) | double precision | in | Non-zero elements (upper bits) of matrix stored in CRS format |
| crsA_lo(nnz) | double precision | in | Non-zero elements (lower bits) of matrix stored in CRS format |
| crsRow_ptr(n+1) | integer*4 / integer*8 | in | Pointer table in CRS format |
| crsCol(nnz) | integer*4 / integer*8 | in | Column numbers of non-zero elements in CRS format |
| ipreflag | integer | in | Flag for preconditioner (0: none, 1: point Jacobi, 2: block Jacobi, 3: additive Schwarz) |
| ilu_method | integer | in | Flag for incomplete LU factorization (0: ILU(0), 1: D-ILU (DIA components match), 2: D-ILU (element sums in a row match); only 1 and 2 are available for additive Schwarz) |
| iflagAS | integer | in | Flag for the additive Schwarz method (1: BASIC, 2: RESTRICT, 3: INTERPOLATE, 4: NONE) |
| itrmax | integer | in/out | in: maximum number of iterations, out: number of iterations |
| rtolmax | double precision | in | Convergence criterion (norm of relative residual error) |
| reshistory(itrmax) | double precision | out | History of relative residual error |
| iovlflag | integer | in | Flag for communication-computation overlap (0: none, 1: all processes, 2: each process) |
| precon_thblock | integer | in | Number of blocks for each process of Block Jacobi preconditioning and Additive Schwarz preconditioning |
| independ_nvec | integer | in | Size of diagonal blocks to be calculated independently in subdivision preconditioning |
| nBlock | integer | in | Number of rows allocated to the thread cyclically |
| gmres_m | integer | in | Number of iterations until restart |
| gmres_GSflag | integer | in | Flag for orthogonalization algorithm (1: MGS, 2: CGS) |
| precision_A | integer | in | Precision of matrix (1: double precision, 2: quad precision) |
| precision_b | integer | in | Precision of right hand side vector (1: double precision, 2: quad precision) |
| precision_x | integer | in | Precision of solution vector (1: double precision, 2: quad precision) |
| precision_precon | integer | in | Precision of preconditioner matrix (1: double precision, 2: quad precision) |
| iret | integer | out | Error code (0: normal) |

## 6.22 parcel_ddcg_dia

• Interface

```
call parcel_ddcg_dia( icomm, vecx_hi, vecx_lo, vecb_hi, vecb_lo,
                      n, gn, istart,
                      diaA_hi, diaA_lo, offset, nnd,
                      ipreflag, ilu_method, addL, iflagAS,
                      itrmax, rtolmax, reshistory, iovlflag,
                      precon_thblock, independ_nvec, nBlock,
                      precision_A, precision_b, precision_x, precision_precon,
                      iret )
```

• Function

A simultaneous linear equation system Ax = b is solved by the conjugate gradient method (CG method). Non-zero elements are stored in DIA format.
| Parameter(dimension) | Type | Input/Output | Description |
|---|---|---|---|
| icomm | integer | in | MPI communicator |
| vecx_hi(n) | double precision | in/out | in: initial vector (upper bits), out: solution vector (upper bits) |
| vecx_lo(n) | double precision | in/out | in: initial vector (lower bits), out: solution vector (lower bits) |
| vecb_hi(n) | double precision | in | Right hand side vector (upper bits) |
| vecb_lo(n) | double precision | in | Right hand side vector (lower bits) |
| n | integer*4 / integer*8 | in | Size of vector on each process |
| gn | integer*4 / integer*8 | in | Total size of vector |
| istart | integer*4 / integer*8 | in | Start line of matrix on each process |
| diaA_hi(n*nnd) | double precision | in | Non-zero elements (upper bits) of matrix stored in DIA format |
| diaA_lo(n*nnd) | double precision | in | Non-zero elements (lower bits) of matrix stored in DIA format |
| offset(nnd) | integer*4 / integer*8 | in | Offset of each non-zero diagonal elements array in DIA format |
| nnd | integer*4 / integer*8 | in | Number of diagonal elements arrays in DIA format |
| ipreflag | integer | in | Flag for preconditioner (0: none, 1: point Jacobi, 2: block Jacobi, 3: additive Schwarz) |
| ilu_method | integer | in | Flag for incomplete LU factorization (0: ILU(0), 1: D-ILU (DIA components match), 2: D-ILU (element sums in a row match); only 1 and 2 are available for additive Schwarz) |
| iflagAS | integer | in | Flag for the additive Schwarz method (1: BASIC, 2: RESTRICT, 3: INTERPOLATE, 4: NONE) |
| itrmax | integer | in/out | in: maximum number of iterations, out: number of iterations |
| rtolmax | double precision | in | Convergence criterion (norm of relative residual error) |
| reshistory(itrmax) | double precision | out | History of relative residual error |
| iovlflag | integer | in | Flag for communication-computation overlap (0: none, 1: all processes, 2: each process) |
| precon_thblock | integer | in | Number of blocks for each process of Block Jacobi preconditioning and Additive Schwarz preconditioning |
| independ_nvec | integer | in | Size of diagonal blocks to be calculated independently in subdivision preconditioning |
| nBlock | integer | in | Number of rows allocated to the thread cyclically |
| precision_A | integer | in | Precision of matrix (1: double precision, 2: quad precision) |
| precision_b | integer | in | Precision of right hand side vector (1: double precision, 2: quad precision) |
| precision_x | integer | in | Precision of solution vector (1: double precision, 2: quad precision) |
| precision_precon | integer | in | Precision of preconditioner matrix (1: double precision, 2: quad precision) |
| iret | integer | out | Error code (0: normal) |

## 6.23 parcel_ddbicgstab_dia

• Interface

```
call parcel_ddbicgstab_dia( icomm, vecx_hi, vecx_lo, vecb_hi, vecb_lo,
                            n, gn, istart,
                            diaA_hi, diaA_lo, offset, nnd,
                            ipreflag, ilu_method, addL, iflagAS,
                            itrmax, rtolmax, reshistory, iovlflag,
                            precon_thblock, independ_nvec, nBlock,
                            precision_A, precision_b, precision_x, precision_precon,
                            iret )
```

• Function

A simultaneous linear equation system Ax = b is solved by the stabilized biconjugate gradient method (Bi-CGSTAB method).
Non-zero elements are stored in DIA format.

| Parameter(dimension) | Type | Input/Output | Description |
|---|---|---|---|
| icomm | integer | in | MPI communicator |
| vecx_hi(n) | double precision | in/out | in: initial vector (upper bits), out: solution vector (upper bits) |
| vecx_lo(n) | double precision | in/out | in: initial vector (lower bits), out: solution vector (lower bits) |
| vecb_hi(n) | double precision | in | Right hand side vector (upper bits) |
| vecb_lo(n) | double precision | in | Right hand side vector (lower bits) |
| n | integer*4 / integer*8 | in | Size of vector on each process |
| gn | integer*4 / integer*8 | in | Total size of vector |
| istart | integer*4 / integer*8 | in | Start line of matrix on each process |
| diaA_hi(n*nnd) | double precision | in | Non-zero elements (upper bits) of matrix stored in DIA format |
| diaA_lo(n*nnd) | double precision | in | Non-zero elements (lower bits) of matrix stored in DIA format |
| offset(nnd) | integer*4 / integer*8 | in | Offset of each non-zero diagonal elements array in DIA format |
| nnd | integer*4 / integer*8 | in | Number of diagonal elements arrays in DIA format |
| ipreflag | integer | in | Flag for preconditioner (0: none, 1: point Jacobi, 2: block Jacobi, 3: additive Schwarz) |
| ilu_method | integer | in | Flag for incomplete LU factorization (0: ILU(0), 1: D-ILU (DIA components match), 2: D-ILU (element sums in a row match); only 1 and 2 are available for additive Schwarz) |
| iflagAS | integer | in | Flag for the additive Schwarz method (1: BASIC, 2: RESTRICT, 3: INTERPOLATE, 4: NONE) |
| itrmax | integer | in/out | in: maximum number of iterations, out: number of iterations |
| rtolmax | double precision | in | Convergence criterion (norm of relative residual error) |
| reshistory(itrmax) | double precision | out | History of relative residual error |
| iovlflag | integer | in | Flag for communication-computation overlap (0: none, 1: all processes, 2: each process) |
| precon_thblock | integer | in | Number of blocks for each process of Block Jacobi preconditioning and Additive Schwarz preconditioning |
| independ_nvec | integer | in | Size of diagonal blocks to be calculated independently in subdivision preconditioning |
| nBlock | integer | in | Number of rows allocated to the thread cyclically |
| precision_A | integer | in | Precision of matrix (1: double precision, 2: quad precision) |
| precision_b | integer | in | Precision of right hand side vector (1: double precision, 2: quad precision) |
| precision_x | integer | in | Precision of solution vector (1: double precision, 2: quad precision) |
| precision_precon | integer | in | Precision of preconditioner matrix (1: double precision, 2: quad precision) |
| iret | integer | out | Error code (0: normal) |

## 6.24 parcel_ddgmres_dia

• Interface

```
call parcel_ddgmres_dia( icomm, vecx_hi, vecx_lo, vecb_hi, vecb_lo,
                         n, gn, istart,
                         diaA_hi, diaA_lo, offset, nnd,
                         ipreflag, ilu_method, addL, iflagAS,
                         itrmax, rtolmax, reshistory, iovlflag,
                         precon_thblock, independ_nvec, nBlock,
                         gmres_m, gmres_GSflag,
                         precision_A, precision_b, precision_x, precision_precon,
                         iret )
```

(The restart parameters gmres_m and gmres_GSflag, documented in the table below, were missing from the original interface listing; the order shown follows parcel_ddgmres.)

• Function

A simultaneous linear equation system Ax = b is solved by the generalized minimum residual method (GMRES(m) method).
Non-zero elements are stored in DIA format.

| Parameter(dimension) | Type | Input/Output | Description |
|---|---|---|---|
| icomm | integer | in | MPI communicator |
| vecx_hi(n) | double precision | in/out | in: initial vector (upper bits), out: solution vector (upper bits) |
| vecx_lo(n) | double precision | in/out | in: initial vector (lower bits), out: solution vector (lower bits) |
| vecb_hi(n) | double precision | in | Right hand side vector (upper bits) |
| vecb_lo(n) | double precision | in | Right hand side vector (lower bits) |
| n | integer*4 / integer*8 | in | Size of vector on each process |
| gn | integer*4 / integer*8 | in | Total size of vector |
| istart | integer*4 / integer*8 | in | Start line of matrix on each process |
| diaA_hi(n*nnd) | double precision | in | Non-zero elements (upper bits) of matrix stored in DIA format |
| diaA_lo(n*nnd) | double precision | in | Non-zero elements (lower bits) of matrix stored in DIA format |
| offset(nnd) | integer*4 / integer*8 | in | Offset of each non-zero diagonal elements array in DIA format |
| nnd | integer*4 / integer*8 | in | Number of diagonal elements arrays in DIA format |
| ipreflag | integer | in | Flag for preconditioner (0: none, 1: point Jacobi, 2: block Jacobi, 3: additive Schwarz) |
| ilu_method | integer | in | Flag for incomplete LU factorization (0: ILU(0), 1: D-ILU (DIA components match), 2: D-ILU (element sums in a row match); only 1 and 2 are available for additive Schwarz) |
| iflagAS | integer | in | Flag for the additive Schwarz method (1: BASIC, 2: RESTRICT, 3: INTERPOLATE, 4: NONE) |
| itrmax | integer | in/out | in: maximum number of iterations, out: number of iterations |
| rtolmax | double precision | in | Convergence criterion (norm of relative residual error) |
| reshistory(itrmax) | double precision | out | History of relative residual error |
| iovlflag | integer | in | Flag for communication-computation overlap (0: none, 1: all processes, 2: each process) |
| gmres_m | integer | in | Number of iterations until restart |
| gmres_GSflag | integer | in | Flag for orthogonalization algorithm (1: MGS, 2: CGS) |
| precon_thblock | integer | in | Number of blocks for each process of Block Jacobi preconditioning and Additive Schwarz preconditioning |
| independ_nvec | integer | in | Size of diagonal blocks to be calculated independently in subdivision preconditioning |
| nBlock | integer | in | Number of rows allocated to the thread cyclically |
| precision_A | integer | in | Precision of matrix (1: double precision, 2: quad precision) |
| precision_b | integer | in | Precision of right hand side vector (1: double precision, 2: quad precision) |
| precision_x | integer | in | Precision of solution vector (1: double precision, 2: quad precision) |
| precision_precon | integer | in | Precision of preconditioner matrix (1: double precision, 2: quad precision) |
| iret | integer | out | Error code (0: normal) |

## 6.25 parcel_ddcg_ddm

• Interface

```
call parcel_ddcg_ddm( icomm, x_hi, x_lo, b_hi, b_lo, m,
                      nnd, ioff_dia, val_dia_hi, val_dia_lo,
                      num_neighbor_ranks, neighbor_ranks, margin, ncomp_grids,
                      ndiv_grids, num_grids, ioff_grids,
                      div_direc_th,
                      ipreflag, addL, iflagAS,
                      itrmax, rtolmax, reshistory, iovlflag,
                      precon_thblock, independ_nvec, nBlock,
                      precision_A, precision_b, precision_x, precision_precon,
                      iret )
```

• Function

A simultaneous linear equation system Ax = b is solved by the conjugate gradient method (CG method). Non-zero elements are stored in DDM format.
| Parameter(dimension) | Type | Input/Output | Description |
|---|---|---|---|
| icomm | integer | in | MPI communicator |
| x_hi(m) | double precision | in/out | in: initial vector (upper bits), out: solution vector (upper bits) |
| x_lo(m) | double precision | in/out | in: initial vector (lower bits), out: solution vector (lower bits) |
| b_hi(m) | double precision | in | Right hand side vector (upper bits) |
| b_lo(m) | double precision | in | Right hand side vector (lower bits) |
| m | integer | in | Size of vector on each process |
| nnd | integer | in | Number of diagonal elements arrays on each process |
| ioff_dia(nnd) | integer | in | Offset of each non-zero diagonal elements array in DDM format |
| val_dia_hi(m*nnd) | double precision | in | Non-zero elements of matrix stored in DDM format (upper bits) |
| val_dia_lo(m*nnd) | double precision | in | Non-zero elements of matrix stored in DDM format (lower bits) |
| num_neighbor_ranks | integer | in | Number of neighboring processes |
| neighbor_ranks(num_neighbor_ranks) | integer | in | List of neighboring processes |
| margin | integer | in | Maximum absolute value of ioff_grids |
| ncomp_grids | integer | in | Dimension of structured grids |
| ndiv_grids(ncomp_grids) | integer | in | Number of decomposed domains in each direction |
| num_grids(ncomp_grids) | integer | in | Number of grids on each process |
| ioff_grids(nnd,ncomp_grids) | integer | in | Offset of stencil data location in each direction |
| div_direc_th | integer | in | Direction of thread parallelization (1: x, 2: y, 3: z) |
| ipreflag | integer | in | Flag for preconditioner (0: none, 1: point Jacobi, 2: block Jacobi, 3: additive Schwarz) |
| iflagAS | integer | in | Flag for the additive Schwarz method (1: BASIC, 2: RESTRICT, 3: INTERPOLATE, 4: NONE) |
| itrmax | integer | in/out | in: maximum number of iterations, out: number of iterations |
| rtolmax | double precision | in | Convergence criterion (norm of relative residual error) |
| reshistory(itrmax) | double precision | out | History of relative residual error |
| iovlflag | integer | in | Flag for communication-computation overlap (0: none, 1: all processes, 2: each process) |
| precon_thblock | integer | in | Number of blocks for each process of Block Jacobi preconditioning and Additive Schwarz preconditioning |
| independ_nvec | integer | in | Size of diagonal blocks to be calculated independently in subdivision preconditioning |
| nBlock | integer | in | Number of rows allocated to the thread cyclically |
| precision_A | integer | in | Precision of matrix (1: double precision, 2: quad precision) |
| precision_b | integer | in | Precision of right hand side vector (1: double precision, 2: quad precision) |
| precision_x | integer | in | Precision of solution vector (1: double precision, 2: quad precision) |
| precision_precon | integer | in | Precision of preconditioner matrix (1: double precision, 2: quad precision) |
| iret | integer | out | Error code (0: normal) |

## 6.26 parcel_ddbicgstab_ddm

• Interface

```
call parcel_ddbicgstab_ddm( icomm, x_hi, x_lo, b_hi, b_lo, m,
                            nnd, ioff_dia, val_dia_hi, val_dia_lo,
                            num_neighbor_ranks, neighbor_ranks, margin, ncomp_grids,
                            ndiv_grids, num_grids, ioff_grids,
                            div_direc_th,
                            ipreflag, addL, iflagAS,
                            itrmax, rtolmax, reshistory, iovlflag,
                            precon_thblock, independ_nvec, nBlock,
                            precision_A, precision_b, precision_x, precision_precon,
                            iret )
```

• Function

A simultaneous linear equation system Ax = b is solved by the stabilized biconjugate gradient method (Bi-CGSTAB method). Non-zero elements are stored in DDM format.
| Parameter(dimension) | Type | Input/Output | Description |
|---|---|---|---|
| icomm | integer | in | MPI communicator |
| x_hi(m) | double precision | in/out | in: initial vector (upper bits), out: solution vector (upper bits) |
| x_lo(m) | double precision | in/out | in: initial vector (lower bits), out: solution vector (lower bits) |
| b_hi(m) | double precision | in | Right hand side vector (upper bits) |
| b_lo(m) | double precision | in | Right hand side vector (lower bits) |
| m | integer | in | Size of vector on each process |
| nnd | integer | in | Number of diagonal elements arrays on each process |
| ioff_dia(nnd) | integer | in | Offset of each non-zero diagonal elements array in DDM format |
| val_dia_hi(m*nnd) | double precision | in | Non-zero elements of matrix stored in DDM format (upper bits) |
| val_dia_lo(m*nnd) | double precision | in | Non-zero elements of matrix stored in DDM format (lower bits) |
| num_neighbor_ranks | integer | in | Number of neighboring processes |
| neighbor_ranks(num_neighbor_ranks) | integer | in | List of neighboring processes |
| margin | integer | in | Maximum absolute value of ioff_grids |
| ncomp_grids | integer | in | Dimension of structured grids |
| ndiv_grids(ncomp_grids) | integer | in | Number of decomposed domains in each direction |
| num_grids(ncomp_grids) | integer | in | Number of grids on each process |
| ioff_grids(nnd,ncomp_grids) | integer | in | Offset of stencil data location in each direction |
| div_direc_th | integer | in | Direction of thread parallelization (1: x, 2: y, 3: z) |
| ipreflag | integer | in | Flag for preconditioner (0: none, 1: point Jacobi, 2: block Jacobi, 3: additive Schwarz) |
| iflagAS | integer | in | Flag for the additive Schwarz method (1: BASIC, 2: RESTRICT, 3: INTERPOLATE, 4: NONE) |
| itrmax | integer | in/out | in: maximum number of iterations, out: number of iterations |
| rtolmax | double precision | in | Convergence criterion (norm of relative residual error) |
| reshistory(itrmax) | double precision | out | History of relative residual error |
| iovlflag | integer | in | Flag for communication-computation overlap (0: none, 1: all processes, 2: each process) |
| precon_thblock | integer | in | Number of blocks for each process of Block Jacobi preconditioning and Additive Schwarz preconditioning |
| independ_nvec | integer | in | Size of diagonal blocks to be calculated independently in subdivision preconditioning |
| nBlock | integer | in | Number of rows allocated to the thread cyclically |
| precision_A | integer | in | Precision of matrix (1: double precision, 2: quad precision) |
| precision_b | integer | in | Precision of right hand side vector (1: double precision, 2: quad precision) |
| precision_x | integer | in | Precision of solution vector (1: double precision, 2: quad precision) |
| precision_precon | integer | in | Precision of preconditioner matrix (1: double precision, 2: quad precision) |
| iret | integer | out | Error code (0: normal) |

## 6.27 parcel_ddgmres_ddm

• Interface

```
call parcel_ddgmres_ddm( icomm, x_hi, x_lo, b_hi, b_lo, m,
                         nnd, ioff_dia, val_dia_hi, val_dia_lo,
                         num_neighbor_ranks, neighbor_ranks, margin, ncomp_grids,
                         ndiv_grids, num_grids, ioff_grids,
                         div_direc_th,
                         ipreflag, addL, iflagAS,
                         itrmax, rtolmax, reshistory, iovlflag,
                         precon_thblock, independ_nvec, nBlock,
                         gmres_m, gmres_GSflag,
                         precision_A, precision_b, precision_x, precision_precon,
                         iret )
```

• Function

A simultaneous linear equation system Ax = b is solved by the generalized minimum residual method (GMRES(m) method). Non-zero elements are stored in DDM format.
| Parameter (dimension) | Type | Input/Output | Description |
|---|---|---|---|
| icomm | integer | in | MPI communicator |
| x_hi(m) | double precision | in/out | in: initial vector (upper bits); out: solution vector (upper bits) |
| x_lo(m) | double precision | in/out | in: initial vector (lower bits); out: solution vector (lower bits) |
| b_hi(m) | double precision | in | Right-hand-side vector (upper bits) |
| b_lo(m) | double precision | in | Right-hand-side vector (lower bits) |
| m | integer | in | Size of vector on each process |
| nnd | integer | in | Number of diagonal element arrays on each process |
| ioff_dia(nnd) | integer | in | Offset of each non-zero diagonal element array in DDM format |
| val_dia_hi(m*nnd) | double precision | in | Non-zero matrix elements stored in DDM format (upper bits) |
| val_dia_lo(m*nnd) | double precision | in | Non-zero matrix elements stored in DDM format (lower bits) |
| num_neighbor_ranks | integer | in | Number of neighboring processes |
| neighbor_ranks(num_neighbor_ranks) | integer | in | List of neighboring processes |
| margin | integer | in | Maximum absolute value of ioff_grids |
| ncomp_grids | integer | in | Dimension of structured grids |
| ndiv_grids(ncomp_grids) | integer | in | Number of decomposed domains in each direction |
| num_grids(ncomp_grids) | integer | in | Number of grids on each process |
| ioff_grids(nnd,ncomp_grids) | integer | in | Offset of stencil data location in each direction |
| div_direc_th | integer | in | Direction of thread parallelization (1: x, 2: y, 3: z) |
| ipreflag | integer | in | Flag for preconditioner (0: none, 1: point Jacobi, 2: block Jacobi, 3: additive Schwarz) |
| iflagAS | integer | in | Flag for the additive Schwarz method (1: BASIC, 2: RESTRICT, 3: INTERPOLATE, 4: NONE) |
| itrmax | integer | in/out | in: maximum number of iterations; out: number of iterations |
| rtolmax | double precision | in | Convergence criterion (norm of relative residual error) |
| reshistory(itrmax) | double precision | out | History of relative residual error |
| iovlflag | integer | in | Flag for communication-computation overlap (0: none, 1: all processes, 2: each process) |
| precon_thblock | integer | in | Number of blocks per process for block Jacobi and additive Schwarz preconditioning |
| independ_nvec | integer | in | Size of diagonal blocks calculated independently in subdivision preconditioning |
| nBlock | integer | in | Number of rows allocated to each thread cyclically |
| gmres_m | integer | in | Number of iterations until restart |
| gmres_GSflag | integer | in | Flag for orthogonalization algorithm (1: MGS, 2: CGS) |
| precision_A | integer | in | Precision of matrix (1: double precision, 2: quad precision) |
| precision_b | integer | in | Precision of right-hand-side vector (1: double precision, 2: quad precision) |
| precision_x | integer | in | Precision of solution vector (1: double precision, 2: quad precision) |
| precision_precon | integer | in | Precision of preconditioner matrix (1: double precision, 2: quad precision) |
| iret | integer | out | Error code (0: normal) |

## 6.28 parcel_dqr

• Interface

```fortran
call parcel_dqr( n, s, icomm, V, Q, R, iQRflag )
```

• Function

QR factorization of the m*s matrix, where m is the sum of the parameter n over the MPI processes belonging to the MPI communicator icomm.
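For orientation, written in standard (not PARCEL-specific) notation, the factorization computed here is

$$V = QR,\qquad Q^{\mathsf{T}}Q = I_s,\qquad R\ \text{upper triangular }(s\times s),$$

where the $m\times s$ matrix $V$ is assembled from the local $n\times s$ blocks held by the MPI ranks in icomm.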
| Parameter (dimension) | Type | Input/Output | Description |
|---|---|---|---|
| n | integer*4 / integer*8 | in | Number of rows of the matrix on each process |
| s | integer*4 / integer*8 | in | Number of columns of the matrix |
| icomm | integer | in | MPI communicator |
| V(s*n) | double precision | in | Input matrix (column major) |
| Q(s*n) | double precision | out | Orthogonal matrix (column major) |
| R(s*s) | double precision | out | Upper triangular matrix (column major) |
| iQRflag | integer | in | Flag for QR factorization (1: MGS, 2: CGS, 3: TSQR, 4: CholeskyQR, 5: CholeskyQR2) |

# 7 How to use (Fortran)

The use of the PARCEL routines is explained by Fortran sample programs of a CG solver.

## 7.1 CRS Format (Fortran)

A sample code in CRS format is shown below. Here, make_matrix_CRS denotes an arbitrary routine, which generates a matrix in CRS format in PARCEL.

```fortran
program main
  use mpi
  implicit none
  integer n,gn,nnz,istart
  real*8,allocatable :: crsA(:)
  integer,allocatable :: crsRow_ptr(:),crsCol(:)
  real*8,allocatable :: vecx(:)
  real*8,allocatable :: vecb(:)
  integer itrmax
  real*8 rtolmax
  real*8,allocatable :: reshistory(:)
  integer ipreflag
  integer ILU_method
  integer iflagAS
  integer iovlflag
  integer precon_thblock
  integer independ_nvec
  integer nblock
  integer iret
  integer ierr

  call MPI_Init(ierr)
  call make_matrix_CRS(n,gn,nnz,istart,crsA,crsRow_ptr,crsCol)

  allocate(vecx(n))
  allocate(vecb(n))

  ! solver and preconditioner settings
  ipreflag=0
  ILU_method=1
  iflagAS=1
  itrmax=100
  rtolmax=1.0d-8
  iovlflag=0
  precon_thblock=-1
  independ_nvec=-1
  nblock=2000

  ! itrmax must be set before the residual history is allocated
  allocate(reshistory(itrmax))

  vecb=1.0d0
  vecx=1.0d0

  call parcel_dcg( &
       MPI_COMM_WORLD, &
       vecx,vecb, &
       n,gn,nnz,istart, &
       crsA,crsRow_ptr,crsCol, &
       ipreflag,ILU_method,iflagAS, &
       itrmax,rtolmax, &
       reshistory, &
       iovlflag, &
       precon_thblock,independ_nvec, &
       nblock, &
       iret &
       )

  call MPI_Finalize(ierr)
  deallocate(vecx)
  deallocate(vecb)
  deallocate(reshistory)
end program main
```

## 7.2 DIA Format (Fortran)

A sample code in DIA format is shown below. Here, make_matrix_DIA denotes an arbitrary routine, which generates a matrix in DIA format in PARCEL.

```fortran
program main
  use mpi
  implicit none
  integer n,gn,nnd,istart
  real*8,allocatable :: diaA(:)
  integer,allocatable :: offset(:)
  real*8,allocatable :: vecx(:)
  real*8,allocatable :: vecb(:)
  integer itrmax
  real*8 rtolmax
  real*8,allocatable :: reshistory(:)
  integer ipreflag
  integer ILU_method
  integer iflagAS
  integer iovlflag
  integer precon_thblock
  integer independ_nvec
  integer nblock
  integer iret
  integer ierr

  call MPI_Init(ierr)
  call make_matrix_DIA(n,gn,nnd,istart,diaA,offset)

  allocate(vecx(n))
  allocate(vecb(n))

  ! solver and preconditioner settings
  ipreflag=0
  ILU_method=1
  iflagAS=1
  itrmax=100
  rtolmax=1.0d-8
  iovlflag=0
  precon_thblock=-1
  independ_nvec=-1
  nblock=2000

  allocate(reshistory(itrmax))

  vecb=1.0d0
  vecx=1.0d0

  call parcel_dcg_dia( &
       MPI_COMM_WORLD, &
       vecx,vecb, &
       n,gn,istart, &
       diaA,offset,nnd, &
       ipreflag,ILU_method,iflagAS, &
       itrmax,rtolmax, &
       reshistory, &
       iovlflag, &
       precon_thblock,independ_nvec, &
       nblock, &
       iret &
       )

  call MPI_Finalize(ierr)
  deallocate(vecx)
  deallocate(vecb)
  deallocate(reshistory)
end program main
```

## 7.3 DDM Format (Fortran)

A Fortran sample code in DDM format is shown below. Here, make_matrix_DDM denotes an arbitrary routine, which generates a matrix in DDM format in PARCEL.
```fortran
program main
  use mpi
  implicit none
  integer ityp_eq
  integer gnx,gny,gnz
  integer itrmax
  integer MAX_NITER
  integer n,gn
  integer m
  integer npes,myrank
  integer ierr
  integer i
  integer ipreflag
  integer iflagAS
  real*8 rtolmax
  real*8 abstolmax
  real*8 stime,etime
  integer solver,ityp_solver
  integer iret
  real*8,allocatable :: reshistory_DDM(:)
  integer ILU_method
  integer iovlflag
  real*8,allocatable :: vecx(:)
  real*8,allocatable :: vecx0(:)
  real*8,allocatable :: vecb(:)
  integer precon_thblock
  integer independ_nvec
  integer nblock
  integer npe_x,npe_y,npe_z
  integer div_direc_th
  integer rank_x, rank_y, rank_z
  integer,parameter :: nnd = 7
  integer,parameter :: margin = 1
  integer,parameter :: ndim = 3
  integer npe_dim(ndim)
  integer gn_grids(ndim)
  integer n_grids(ndim)
  integer istart_grids(ndim)
  integer,parameter :: num_neighbor_ranks_max = 3**ndim - 1
  integer neighbor_ranks( num_neighbor_ranks_max )
  integer num_neighbor_ranks
  real*8, allocatable :: diaA(:)
  integer, allocatable :: offset(:)
  integer, allocatable :: offset_dim(:,:)

  namelist/input/ &
       ityp_eq, &
       gnx, gny, gnz, &
       MAX_NITER, &
       rtolmax, &
       abstolmax, &
       ipreflag, &
       independ_nvec, &
       iflagAS, &
       ILU_method, &
       iovlflag, &
       ityp_solver, &
       precon_thblock, &
       nblock

  namelist/input_ddm/ &
       npe_x, npe_y, npe_z, &
       div_direc_th

  ! read the run parameters from the namelist file
  open(33,file='input_namelist')
  read(33,nml=input)
  read(33,nml=input_ddm)
  close(33)

  stime = 0.0d0
  etime = 0.0d0
  itrmax = MAX_NITER
  solver = ityp_solver

  call MPI_Init(ierr)
  call MPI_Comm_size(MPI_COMM_WORLD,npes,ierr)
  call MPI_Comm_rank(MPI_COMM_WORLD,myrank,ierr)

  gn_grids(1) = gnx
  gn_grids(2) = gny
  gn_grids(3) = gnz

  call set_rank_xyz( myrank, npe_x, npe_y, npe_z, &
       rank_x, rank_y, rank_z )

  call set_grids_decompostion( &
       npe_x, npe_y, npe_z, &
       rank_x, rank_y, rank_z, &
       ndim, gn_grids, npe_dim, n_grids, &
       istart_grids, margin, m )

  num_neighbor_ranks = -1
  call set_neighbor_ranks( &
       npe_x, npe_y, npe_z, rank_x, rank_y, rank_z, &
       num_neighbor_ranks, &
       neighbor_ranks, num_neighbor_ranks_max )

  allocate( diaA(nnd*m) )
  allocate( offset(nnd) )
  allocate( offset_dim(nnd,ndim) )
  allocate( reshistory_DDM(MAX_NITER) )
  allocate( vecx(m) )
  allocate( vecx0(m) )
  allocate( vecb(m) )
  diaA = 0.0d0
  vecx = 0.0d0
  vecx0 = 0.0d0
  vecb = 0.0d0
  reshistory_DDM = 0
  offset = 0
  offset_dim = 0

  call make_matrix_DDM( &
       gnx,gny,gnz, &
       ndim, n_grids, &
       istart_grids, &
       margin, nnd, m, &
       offset_dim, offset, &
       diaA, vecb, vecx, vecx0, &
       rank_x, rank_y, rank_z, &
       npe_x, npe_y, npe_z )

  call parcel_dcg_ddm( &
       MPI_COMM_WORLD, &
       vecx,vecb, &
       m, &
       nnd, offset, diaA, &
       num_neighbor_ranks, neighbor_ranks, margin, ndim, &
       npe_dim, n_grids, offset_dim, &
       div_direc_th, &
       ipreflag, iflagAS, &
       MAX_NITER, rtolmax, reshistory_DDM, &
       iovlflag, &
       precon_thblock, &
       independ_nvec, &
       nBlock, &
       iret &
       )

  call MPI_Barrier(MPI_COMM_WORLD,ierr)

  deallocate( diaA )
  deallocate( offset )
  deallocate( offset_dim )
  deallocate( reshistory_DDM )
  deallocate( vecx )
  deallocate( vecx0 )
  deallocate( vecb )

  call MPI_Finalize(ierr)
end program main
```

# 8 How to use (C)

A Fortran routine can be called from C by appending "_" to the routine name and passing each argument as a pointer. The PARCEL routines can therefore also be used in C programs. Regarding the use of MPI, the C MPI communicator must be converted to a Fortran MPI communicator via MPI_Comm_c2f in the MPI library. In the following, we explain how to use the PARCEL routines with sample codes of the CG method in C.

## 8.1 CRS Format (C)

A sample code in CRS format is shown below. Here, make_matrix_CRS denotes an arbitrary routine, which generates a matrix in CRS format in PARCEL.
```c
#include <stdio.h>
#include <stdlib.h>
#include "mpi.h"

int main( int argc, char *argv[]){
    int npes;
    int myrank;
    int iret;
    MPI_Fint icomm_fort;

    MPI_Init(&argc,&argv);
    MPI_Comm_size(MPI_COMM_WORLD, &npes);
    MPI_Comm_rank(MPI_COMM_WORLD, &myrank);
    icomm_fort = MPI_Comm_c2f(MPI_COMM_WORLD);

    int n,gn,nnz,istart;
    double *crsA;
    int *crsRow_ptr;
    int *crsCol;
    make_matrix_CRS(n,gn,nnz,istart,crsA,crsRow_ptr,crsCol);  /* arbitrary routine, see text */

    double *vecx = (double *)malloc(sizeof(double)*n);
    double *vecb = (double *)malloc(sizeof(double)*n);
    for(int i=0;i<n;i++){
        vecx[i] = 0.0;
        vecb[i] = 1.0;
    }

    /* solver and preconditioner settings */
    int ipreflag = 0;
    int ilu_method = 1;
    int iflagas = 1;
    int max_niter = 100;
    double rtolmax = 1.0e-8;
    double *reshistory = (double *)malloc(sizeof(double)*max_niter);
    int iovlflag = 0;
    int precon_thblock = -1;
    int independ_nvec = -1;
    int nBlock = 2000;

    parcel_dcg_( &icomm_fort,
                 vecx, vecb,
                 &n, &gn, &nnz, &istart,
                 crsA, crsRow_ptr, crsCol,
                 &ipreflag, &ilu_method, &iflagas,
                 &max_niter, &rtolmax,
                 reshistory,
                 &iovlflag,
                 &precon_thblock, &independ_nvec,
                 &nBlock,
                 &iret );

    free(vecx);
    free(vecb);
    free(reshistory);
    MPI_Finalize();
    return 0;
}
```

## 8.2 DIA Format (C)

A sample code in DIA format is shown below. Here, make_matrix_DIA denotes an arbitrary routine, which generates a matrix in DIA format in PARCEL.

```c
#include <stdio.h>
#include <stdlib.h>
#include "mpi.h"

int main( int argc, char *argv[]){
    int npes;
    int myrank;
    int iret;
    MPI_Fint icomm_fort;

    MPI_Init(&argc,&argv);
    MPI_Comm_size(MPI_COMM_WORLD, &npes);
    MPI_Comm_rank(MPI_COMM_WORLD, &myrank);
    icomm_fort = MPI_Comm_c2f(MPI_COMM_WORLD);

    int n,gn,nnd,istart;
    double *diaA;
    int *offset;
    make_matrix_DIA(n,gn,nnd,istart,diaA,offset);  /* arbitrary routine, see text */

    double *vecx = (double *)malloc(sizeof(double)*n);
    double *vecb = (double *)malloc(sizeof(double)*n);
    for(int i=0;i<n;i++){
        vecx[i] = 0.0;
        vecb[i] = 1.0;
    }

    /* solver and preconditioner settings */
    int ipreflag = 0;
    int ilu_method = 1;
    int iflagas = 1;
    int max_niter = 100;
    double rtolmax = 1.0e-8;
    double *reshistory = (double *)malloc(sizeof(double)*max_niter);
    int iovlflag = 0;
    int precon_thblock = -1;
    int independ_nvec = -1;
    int nBlock = 2000;

    parcel_dcg_dia_( &icomm_fort,
                     vecx, vecb,
                     &n, &gn, &istart,
                     diaA, offset, &nnd,
                     &ipreflag, &ilu_method, &iflagas,
                     &max_niter, &rtolmax,
                     reshistory,
                     &iovlflag,
                     &precon_thblock, &independ_nvec,
                     &nBlock,
                     &iret );

    free(vecx);
    free(vecb);
    free(reshistory);
    MPI_Finalize();
    return 0;
}
```

## 8.3 DDM Format (C)

A C sample code in DDM format is shown below. Here, make_matrix_ddm denotes an arbitrary routine, which generates a matrix in DDM format in PARCEL. set_rank_xyz, set_grids_decompostion, and set_neighbor_ranks are arbitrary routines, which set up the process grid and neighbor lists in PARCEL format.
```c
#include <stdio.h>
#include <stdlib.h>
#include <math.h>
#include "mpi.h"

int main( int argc, char *argv[]){
    int npes,myrank;
    int i;
    int iret;
    MPI_Fint icomm_fort;

    /* problem size and solver settings */
    int gnx = 100;
    int gny = 100;
    int gnz = 100;
    int max_niter = 100;
    double rtolmax = 1.0e-8;
    int ipreflag = 0;
    int independ_nvec = -1;
    int iflagas = 1;
    int ilu_method = 1;
    int iovlflag = 0;
    int precon_thblock = -1;
    int nBlock = 2000;
    int npe_x = 1;
    int npe_y = 4;
    int npe_z = 1;
    int div_direc_th = 1;

    MPI_Init(&argc,&argv);
    MPI_Comm_size(MPI_COMM_WORLD, &npes);
    MPI_Comm_rank(MPI_COMM_WORLD, &myrank);
    icomm_fort = MPI_Comm_c2f(MPI_COMM_WORLD);

    int nnd = 7;
    int rank_x,rank_y,rank_z;
    set_rank_xyz( &myrank, &npe_x, &npe_y, &npe_z,
                  &rank_x, &rank_y, &rank_z);

    int ndim = 3;
    int margin = 1;
    int *npe_dim = (int *)malloc(sizeof(int)*ndim);
    int *gn_grids = (int *)malloc(sizeof(int)*ndim);
    int *n_grids = (int *)malloc(sizeof(int)*ndim);
    int *istart_grids = (int *)malloc(sizeof(int)*ndim);
    for(i=0;i<ndim;i++){
        npe_dim[i] = 0;
        gn_grids[i] = 0;
        n_grids[i] = 0;
        istart_grids[i] = 0;
    }

    int m = -1;
    gn_grids[0] = gnx;
    gn_grids[1] = gny;
    gn_grids[2] = gnz;
    set_grids_decompostion( &npe_x, &npe_y, &npe_z,
                            &rank_x, &rank_y, &rank_z,
                            &ndim, gn_grids, npe_dim, n_grids,
                            istart_grids, &margin, &m );

    int num_neighbor_ranks_max = pow(3,ndim) - 1;
    int *neighbor_ranks = (int *)malloc(sizeof(int)*num_neighbor_ranks_max);
    int num_neighbor_ranks = -1;
    for(i=0;i<num_neighbor_ranks_max;i++){
        neighbor_ranks[i] = -1;
    }
    set_neighbor_ranks( &npe_x, &npe_y, &npe_z,
                        &rank_x, &rank_y, &rank_z,
                        &num_neighbor_ranks,
                        neighbor_ranks, &num_neighbor_ranks_max);

    int *offset = (int *)malloc(sizeof(int)*nnd);
    for(i=0;i<nnd;i++){
        offset[i] = 0;
    }
    int *offset_dim = (int *)malloc(sizeof(int)*nnd*ndim);
    for(i=0;i<nnd*ndim;i++){
        offset_dim[i] = 0;
    }
    double *diaA = (double *)malloc(sizeof(double)*m*nnd);
    for(i=0;i<m*nnd;i++){
        diaA[i] = 0.0;
    }
    double *vecx = (double *)malloc(sizeof(double)*m);
    double *vecx0 = (double *)malloc(sizeof(double)*m);  /* work vector used by make_matrix_ddm */
    double *vecb = (double *)malloc(sizeof(double)*m);
    for(i=0;i<m;i++){
        vecx[i] = 0.0;
        vecx0[i] = 0.0;
        vecb[i] = 0.0;
    }
    double *reshistory_ddm = (double *)malloc(sizeof(double)*max_niter);
    for(i=0;i<max_niter;i++){
        reshistory_ddm[i] = 0.0;
    }

    int nnd_ = nnd;
    make_matrix_ddm( &gnx, &gny, &gnz,
                     &ndim, n_grids, istart_grids,
                     &margin, &nnd_, &m,
                     offset_dim, offset,
                     diaA, vecb, vecx, vecx0,
                     &rank_x, &rank_y, &rank_z,
                     &npe_x, &npe_y, &npe_z);

    parcel_dcg_ddm_( &icomm_fort,
                     vecx, vecb,
                     &m,
                     &nnd, offset, diaA,
                     &num_neighbor_ranks, neighbor_ranks, &margin, &ndim,
                     npe_dim, n_grids, offset_dim,
                     &div_direc_th,
                     &ipreflag, &iflagas,
                     &max_niter, &rtolmax, reshistory_ddm,
                     &iovlflag,
                     &precon_thblock, &independ_nvec,
                     &nBlock,
                     &iret );

    free(offset);
    free(offset_dim);
    free(diaA);
    free(vecx);
    free(vecx0);
    free(vecb);
    free(reshistory_ddm);
    free(neighbor_ranks);
    free(npe_dim);
    free(gn_grids);
    free(n_grids);
    free(istart_grids);

    MPI_Barrier(MPI_COMM_WORLD);
    MPI_Finalize();
    return 0;
}
```

# 9 REFERENCES

[1] R. Barrett, M. Berry, T. F. Chan, et al., "Templates for the Solution of Linear Systems: Building Blocks for Iterative Methods", SIAM (1994).
[2] M. Hoemmen, "Communication-avoiding Krylov subspace methods", Ph.D. thesis, University of California, Berkeley (2010).
[3] Y. Idomura, T. Ina, A. Mayumi, et al., "Application of a communication-avoiding generalized minimal residual method to a gyrokinetic five dimensional Eulerian code on many core platforms", ScalA'17: 8th Workshop on Latest Advances in Scalable Algorithms for Large-Scale Systems, pp. 1-8 (2017).
[4] R. Suda, L. Cong, D. Watanabe, et al., "Communication-Avoiding CG Method: New Direction of Krylov Subspace Methods towards Exa-scale Computing", RIMS Kokyuroku, pp. 102-111 (2016).
[5] Y. Idomura, T. Ina, S. Yamashita, et al., "Communication avoiding multigrid preconditioned conjugate gradient method for extreme scale multiphase CFD simulations", ScalA'18: 9th Workshop on Latest Advances in Scalable Algorithms for Large-Scale Systems, pp. 17-24 (2018).
[6] T. Ina, Y. Idomura, T. Imamura, et al., "Iterative methods with mixed-precision preconditioning for ill-conditioned linear systems in multiphase CFD simulations", ScalA'21: 12th Workshop on Latest Advances in Scalable Algorithms for Large-Scale Systems (2021).
[7] A. Stathopoulos, K. Wu, "A block orthogonalization procedure with constant synchronization requirements", SIAM J. Sci. Comput. 23, 2165-2182 (2002).
[8] T. Fukaya, Y. Nakatsukasa, Y. Yanagisawa, et al., "CholeskyQR2: A Simple and Communication-Avoiding Algorithm for Computing a Tall-Skinny QR Factorization on a Large-Scale Parallel System", ScalA'14: 5th Workshop on Latest Advances in Scalable Algorithms for Large-Scale Systems, pp. 31-38 (2014).
2022-01-22 09:28:42
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.27888166904449463, "perplexity": 13683.737437063412}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-05/segments/1642320303779.65/warc/CC-MAIN-20220122073422-20220122103422-00130.warc.gz"}
http://class.physicsnaught.org/Homeworks/sun_lesson_109/sun_exercise_908.html
**Problem 4.** A pendulum bob of mass 1.5 kg swings from a height A to the bottom of its arc at B. The velocity of the bob at B is $4.0\ \text{m/s}$. Calculate the height A from which the bob was released. Ignore the effects of air friction.

**Hint:** KE + PE = constant

**Answer:** $h = 0.80\ \text{m}$

**Steps:**

Given: $v_B = 4.0\ \text{m/s}$

Equation: KE + PE = constant

Solution:

$KE_A + PE_A = KE_B + PE_B$

$0 + mgh = \tfrac{1}{2}mv_B^2 + 0$

$h = \dfrac{v_B^2}{2g}$

$h = \dfrac{(4.0\ \text{m/s})^2}{2 \times 10\ \text{m/s}^2}$

$h = 0.80\ \text{m}$
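A quick numerical check of this result, added here as an illustration (Python; it uses the same rounded $g = 10\ \text{m/s}^2$ as the worked solution):

```python
# Energy conservation: (1/2) m v_B^2 = m g h, so h = v_B^2 / (2 g); m cancels.
v_b = 4.0  # m/s, speed at the bottom of the arc (point B)
g = 10.0   # m/s^2, the rounded gravity value the solution uses

h = v_b ** 2 / (2 * g)
print(h)   # 0.8 metres, matching the stated answer
```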
2022-08-08 02:24:03
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5019163489341736, "perplexity": 13968.17521520972}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-33/segments/1659882570741.21/warc/CC-MAIN-20220808001418-20220808031418-00759.warc.gz"}
https://mathematica.stackexchange.com/questions/143723/automatically-coloring-plot-labels-the-same-as-the-plotted-curves?noredirect=1
# Automatically coloring plot labels the same as the plotted curves

Here is an example of a ListLinePlot. How can I get the text of the PlotLabels option to be the same colors as those automatically assigned to the curves by the PlotTheme option?

```mathematica
num = 10;
data1 = N@Sin@Range[num];
data2 = N@Cos@Range[num];
mark = ToString @@@ {Last@data1, Last@data2};
ListLinePlot[{data1, data2},
 Frame -> True,
 GridLines -> Automatic,
 GridLinesStyle -> Directive[Gray, Dotted],
 PlotRange -> All,
 PlotTheme -> "Web",
 PlotLegends -> Placed[SwatchLegend[mark], {Top, Left}],
 PlotLabels -> mark,
 InterpolationOrder -> 2,
 ImageSize -> Large
 ]
```

Although the user can do this manually, like this:

    {Style[text1, color1], Style[text2, color2]}

as the number of data sets increases, it becomes increasingly difficult to supply the color values that the PlotTheme option assigned.

---

I am eliminating what I consider extraneous details from your code, but I am generalizing the data to an arbitrary number of curves.

```mathematica
With[{nDiv = 10, nCurv = 3},
 data = Table[N @ Sin[u + h], {h, Subdivide[π/2, nCurv - 1]}, {u, Subdivide[2 π, nDiv]}]];
plt = ListLinePlot[data, PlotTheme -> "Web"];
lbls = MapThread[Style[Last[#1], #2, 14] &, {data, Cases[plt, RGBColor[__], ∞]}];
ListLinePlot[data, PlotTheme -> "Web", PlotLabels -> lbls, ImageSize -> Large]
```

### Update

The OP expresses worry about the performance cost of evaluating the plot twice. Since the 1st plot is not rendered to the screen, its evaluation is not as expensive as a fully rendered plot. However, if the data sets being plotted are very large, it might be profitable to restrict the 1st evaluation to the 1st three points in each data set. Like so:

```mathematica
With[{dta = Take[#, 3] & /@ data}, plt = ListLinePlot[dta, PlotTheme -> "Web"]];
```

• Great! But ListLinePlot will be executed twice; I worry the performance will go down. – Jerry Apr 16 '17 at 15:00
• why doesn't plt /. {PlotLabels -> lbls} work? – Jerry Apr 16 '17 at 15:05
• @Jerry. Take a look at FullForm[plt]. – m_goldberg Apr 16 '17 at 15:10
• @Jerry. I don't know how to do it without re-evaluating the plot. However, when the plot is not rendered on the screen, as in the 1st evaluation, it takes far less time than a rendered plot, so the performance hit is not as severe as you might think. – m_goldberg Apr 16 '17 at 15:20
• @Jerry You can avoid evaluating the plot twice, which is usually quick for ListLinePlot but not always quick for other plotters, with the following: theme = "Web"; styles = "DefaultPlotStyle" /. (Method /. ChartingResolvePlotTheme[theme, ListLinePlot]); lbls = MapThread[Style[Last[#1], #2, 14] &, {data, PadRight[{}, {Length@data}, styles]}]; – Michael E2 Apr 16 '17 at 16:18

Very neat, but the proposed code seems to go wrong in Mathematica 12.0 because Cases gives a list of colours which is too long. Below is an ad hoc solution which works here, but I don't know how robust it is:

```mathematica
With[{nDiv = 10, nCurv = 3},
 data = Table[
   N@Sin[u + h], {h, Subdivide[π/2, nCurv - 1]}, {u, Subdivide[2 π, nDiv]}]];
plt = ListLinePlot[data, PlotTheme -> "Web"];
```
2019-12-14 16:03:28
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.18416966497898102, "perplexity": 6502.405557577861}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-51/segments/1575541281438.51/warc/CC-MAIN-20191214150439-20191214174439-00041.warc.gz"}
https://www.ilovestats.org/anova/
## ANOVA

Analysis of Variance (ANOVA) is used when you want to compare the means of more than two groups. The test tells you whether there is a significant difference between any of the means or not. To investigate which means differ, you need to perform a Tukey test or another pairwise test.

You need to check the following assumptions before proceeding with ANOVA:

1. The observations are independent
2. The observations within each group are normally distributed
3. The observations within the groups have the same variance

The goal of ANOVA is to test whether there is a statistically significant difference between any of the group means. That means we test the null hypothesis that $\mu_1 = \mu_2 = \dots = \mu_a$. This is accomplished by calculating the F-statistic. The rationale of the test is to compare the variance between the groups to the variance within the groups. If the between-group variance is greater than the within-group variance, we say that there is an effect of the factor variable being investigated; that is, there is a difference between some or all group means. A factor is a nominal variable where each group of the factor is called a level. For example, the factor color has the levels "blue", "red", "orange", etc.

The variance components are computed using the equations in the ANOVA table below:

| Source | df | SS | MS | F |
|---|---|---|---|---|
| Between groups | $df_B = a-1$ | $SS_B=\sum_{i=1}^{a} n_i(\overline{X}_i-\overline{X})^2$ | $MS_B = SS_B/df_B$ | $F = MS_B/MS_W$ |
| Within groups | $df_W = n_T-a$ | $SS_W=\sum_{i=1}^{a}\sum_{j=1}^{n_i}(X_{ij}-\overline{X}_i)^2$ | $MS_W = SS_W/df_W$ | |
| Total | $n_T-1$ | $SS_T=\sum_{i=1}^{a}\sum_{j=1}^{n_i}(X_{ij}-\overline{X})^2$ | | |

where $a$ is the number of groups (or levels), $n_T$ is the total number of observations, $n_i$ is the number of observations within each group, $X_{ij}$ is the $j^{th}$ observation in group $i$, $\overline{X}_i$ is the mean of group $i$ and $\overline{X}$ is the mean of all observations (grand mean).

The null hypothesis is rejected when $F > F_\alpha$. The critical value $F_\alpha$ is found in an F-table for different levels of significance (e.g. $\alpha = 0.05$ and $0.01$) at the degrees of freedom $v_1$ ($df_B$) and $v_2$ ($df_W$). That is, you can be 95% ($\alpha = 0.05$) or 99% ($\alpha = 0.01$) certain, respectively, that the null hypothesis can be rejected, i.e. that some or all means differ.

**Example.** A company wants to find out if there is a difference in total sales between four geographical areas. There are 12 shops in each area, thus giving 12 total sales per year (million dollars) for each area (Area 1 to Area 4).

1. Construct the null hypothesis. H0: the mean total sale does not differ between any of the areas ($\mu_{Area 1} = \mu_{Area 2} = \mu_{Area 3} = \mu_{Area 4}$).

2. Calculate the mean ($\overline{x}$) and variance ($s^2$) for each sample.

3. Check that the variances are equal.
- Perform an F-test. Calculate the F statistic using the largest variance as numerator and the smallest variance as denominator. In this case, we use the variances of Area 1 and Area 2 as numerator and denominator, respectively.
- Calculate the degrees of freedom: $v_{area1} = n-1 = 12-1 = 11$ and $v_{area2} = n-1 = 12-1 = 11$.
- Look up the critical value for F at $\alpha = 0.05$ with $v_1 = 11$ and $v_2 = 11$ in a table of critical F values: $F_{\alpha=0.05} = 2.82$.
- Compare the calculated F statistic with $F_{\alpha=0.05}$: $F = 2.14 < 2.82$.
- Retain or reject H0: H0 can't be rejected; the assumption of equal variances holds.

4. Calculate the F-statistic. Use the equations for the degrees of freedom, sums of squares, mean squares and the F-statistic to create an ANOVA table, or let a statistical software do this for you.
5. Reject or retain H0. Here $F > F_{crit}$, which means that the probability of obtaining the calculated F-value if the null hypothesis were true is less than 0.05. The null hypothesis is therefore rejected.

6. Interpret the result. The mean total sale differs between areas. Looking at the means, we may suspect that the mean of Area 3 is larger than the others. This can be tested using a Tukey test.

**How to do it in R**

```r
############# ANOVA ###########
# 1. Import the data

# 2. Do the ANOVA
m <- lm(Sales ~ Area, data = data2)
anova(m)
summary(m)

# 3. Visualize
# SST: distance from each observation to the grand mean
par(mfcol = c(1, 3))
plot(data2$Sales ~ rep(c(1, 2, 3, 4), each = 12), xaxt = "n", main = "SST",
     xlab = "Area", ylab = "Sales (million $)", las = 1)
axis(side = 1, at = c(1, 2, 3, 4), labels = c(1, 2, 3, 4))
abline(h = mean(data2$Sales), col = "blue", lty = "dashed")
segments(rep(c(1, 2, 3, 4), each = 12), data2$Sales,
         rep(c(1, 2, 3, 4), each = 12), mean(data2$Sales), col = "red")

Area1 <- round(tapply(data2$Sales, data2$Area, mean), digits = 2)[1]
Area2 <- round(tapply(data2$Sales, data2$Area, mean), digits = 2)[2]
Area3 <- round(tapply(data2$Sales, data2$Area, mean), digits = 2)[3]
Area4 <- round(tapply(data2$Sales, data2$Area, mean), digits = 2)[4]

# SSB: distance from each group mean to the grand mean
plot(data2$Sales ~ rep(c(1, 2, 3, 4), each = 12), xaxt = "n", main = "SSB",
     xlab = "Area", ylab = "Sales (million $)", las = 1)
axis(side = 1, at = c(1, 2, 3, 4), labels = c(1, 2, 3, 4))
abline(h = mean(data2$Sales), col = "blue", lty = "dashed")
segments(0.9, Area1, 1.1, Area1, lwd = 2)
segments(1.9, Area2, 2.1, Area2, lwd = 2)
segments(2.9, Area3, 3.1, Area3, lwd = 2)
segments(3.9, Area4, 4.1, Area4, lwd = 2)
segments(c(1, 2, 3, 4), round(tapply(data2$Sales, data2$Area, mean), digits = 2),
         c(1, 2, 3, 4), mean(data2$Sales), col = "red")

# SSW: distance from each observation to its group mean
plot(data2$Sales ~ rep(c(1, 2, 3, 4), each = 12), xaxt = "n", main = "SSW",
     xlab = "Area", ylab = "Sales (million $)", las = 1)
axis(side = 1, at = c(1, 2, 3, 4), labels = c(1, 2, 3, 4))
abline(h = mean(data2$Sales), col = "blue", lty = "dashed")
segments(0.9, Area1, 1.1, Area1, lwd = 2)
segments(1.9, Area2, 2.1, Area2, lwd = 2)
segments(2.9, Area3, 3.1, Area3, lwd = 2)
segments(3.9, Area4, 4.1, Area4, lwd = 2)
segments(rep(c(1, 2, 3, 4), each = 12), data2$Sales,
         rep(c(1, 2, 3, 4), each = 12),
         rep(round(tapply(data2$Sales, data2$Area, mean), digits = 2), each = 12),
         col = "red")

# 4. Check the assumptions
# 4.1 Normality: QQ plot
st.res <- rstandard(m)
x11()
qqnorm(st.res, ylab = "Standardized Residuals", xlab = "Theoretical", las = 1, bty = "l")
qqline(st.res)

# Histograms
x11()
par(mfcol = c(2, 2))
tapply(data2$Sales, data2$Area, hist, col = "skyblue", las = 1,
       yaxt = "n", xaxt = "n", xlab = "Sales", main = "Histogram")

# 4.2 Equal variances
# Compute the variance of each Area
d <- data.frame(data2[which(data2$Area == "Area 1"), ],
                data2[which(data2$Area == "Area 2"), ],
                data2[which(data2$Area == "Area 3"), ],
                data2[which(data2$Area == "Area 4"), ])
std <- tapply(data2$Sales, data2$Area, sd)
var <- std^2

# F-test
var.test(d$Sales, d$Sales.1)
```

### ANOVA in depth

**Introduction.** The idea of ANOVA is to test whether the variance between a set of groups is larger than or equal to the variance within the groups. If the variance between the groups is significantly larger, we say that there is an effect of the factor to which the groups belong (e.g. temperature) on the dependent variable (e.g. growth). This means that the mean of at least one group deviates from the means of the other groups.

**Partitioning the variation.** Since we want to compare the variance between the groups with the variance within the groups, we first need to calculate them.
We can say that we are partitioning the total variance in the data set into two components: (1) the between variance (MSB) and (2) the within variance (MSW). In the process of calculating these components we first need to calculate the sums of squares; the sum of squares is the numerator in the equation for the variance.

For the total variation in the data, neglecting the groups, the sum of squares is the summed squared distance between every observation and the mean of all observations (the grand mean). Using our ANOVA example of the sales of shops in four areas, this is illustrated by the SST panel produced by the R code above, where the red lines mark the distance between each observation and the grand mean. When these distances are squared and summed we get the total sum of squares, the total variation in the data. The total sum of squares is calculated by:

$$SS_T = \sum_{i=1}^{a}\sum_{j=1}^{n_i}\left(X_{ij}-\overline{X}\right)^2$$

where $X_{ij}$ is the value of the $j^{th}$ observation of the $i^{th}$ group and $\overline{X}$ is the grand mean. So far, the variation is unpartitioned. We could also say that there is no explained variation, only unexplained, since we are looking at the data as a whole. In our example $SS_T = 30.20$. Thus, if no factor were to explain the variation, the unexplained variation would equal $SS_T = 30.20$.

Now, let's look at the variation within the groups (the SSW panel): here the red lines mark the distance between each observation and the mean of its group. When these are squared and summed, we get the within sum of squares:

$$SS_W = \sum_{i=1}^{a}\sum_{j=1}^{n_i}\left(X_{ij}-\overline{X}_i\right)^2$$

where $\overline{X}_i$ is the mean of the $i^{th}$ group. This is the unexplained variation in the data after considering the effect of the factor (Area). In our example this was calculated as 4.34, a considerable reduction from the unexplained variation before the effect of the factor was considered (30.20): the distance to the group mean is shorter for each observation than the distance to the grand mean.

Now, if the unexplained variation dropped to 4.34 out of 30.20, the explained variation has to be 30.20 − 4.34 = 25.86. This component of the total variation (the SSB panel) corresponds to the distance between the mean of each group and the grand mean. When these distances are squared, multiplied by the number of observations in each group, and summed, we get the between sum of squares:

$$SS_B = \sum_{i=1}^{a} n_i\left(\overline{X}_i-\overline{X}\right)^2$$

where $n_i$ is the number of observations in the $i^{th}$ group.
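These three components are tied together by a simple identity, which the example's own numbers satisfy:

$$SS_T = SS_B + SS_W, \qquad 30.20 = 25.86 + 4.34.$$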
**Linear model.** The ANOVA can, just like linear regression, be treated as a linear model. Consider the fact that the value of each observation in a population is the sum of the mean of the population and the deviation of the observation from the mean:

$$x_i = \mu + e_i$$

where $x_i$ is the value of the $i^{th}$ observation, $\mu$ is the mean of the population and $e_i$ is the error term, the deviation from the mean; $e_i = z \times \sigma$ or $e_i = x_i - \mu$.

However, when a factor is added, each observation can be expressed as:

$$x_{ij} = \mu + F_i + e_{ij}$$

where $x_{ij}$ is the $j^{th}$ observation of the $i^{th}$ group, $\mu$ is the grand mean, $F_i$ is the effect of the factor and $e_{ij}$ is the deviation of the $j^{th}$ observation from the mean of the group. The error terms, and thus the individual observations, are not estimated by the ANOVA procedure. The model output provides the estimates of the effect of the factor, that is, the individual deviation from the intercept for each level of the factor. This means that the linear model estimates the mean of each group:

$$\overline{x}_i = \mu + F_i$$

If there is no effect of the factor, the means of all groups equal the grand mean.
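As a language-neutral cross-check of the partitioning above, here is a short Python/NumPy sketch (not part of the original R tutorial; the data are made up) that computes $SS_B$, $SS_W$ and $F$ directly from their definitions:

```python
import numpy as np
from scipy.stats import f_oneway  # optional cross-check

# Hypothetical data: 4 groups ("areas") with 12 observations each
rng = np.random.default_rng(0)
groups = [rng.normal(mu, 0.3, size=12) for mu in (4.0, 4.1, 4.6, 4.2)]

grand = np.mean(np.concatenate(groups))                        # grand mean
ss_b = sum(len(g) * (g.mean() - grand) ** 2 for g in groups)   # between groups
ss_w = sum(((g - g.mean()) ** 2).sum() for g in groups)        # within groups
df_b = len(groups) - 1
df_w = sum(len(g) for g in groups) - len(groups)
F = (ss_b / df_b) / (ss_w / df_w)

print(F)                            # F computed from the definitions above
print(f_oneway(*groups).statistic)  # the same value from SciPy
```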
2021-04-20 23:13:55
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 50, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8915491700172424, "perplexity": 5961.699958941675}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-17/segments/1618039491784.79/warc/CC-MAIN-20210420214346-20210421004346-00112.warc.gz"}
https://tonyladson.wordpress.com/2017/05/07/actual-et-and-productivity/
# Actual ET and productivity

I was reading a post over at Dynamic Ecology presenting an appreciation of Michael Rosenzweig, a Professor of Ecology and Evolutionary Biology at the University of Arizona. What caught my eye was his most cited paper, which is on the correlation between AET (actual evapotranspiration) and productivity. Here is the abstract:

Actual evapotranspiration (AET) is shown to be a highly significant predictor of the net annual above-ground productivity in mature terrestrial plant communities. Communities included ranged from deserts and tundra to tropical forests. It is hypothesized that the relationship of AET to productivity is due to the fact that AET measures the simultaneous availability of water and solar energy, the most important rate-limiting resources in photosynthesis.

As a hydrologist I knew about actual evapotranspiration (evaporation plus transpiration) but hadn't paid attention to the link with productivity. To an ecologist, productivity refers to the rate of biomass production through photosynthesis, where inorganic molecules, like water and carbon dioxide, are converted to organic material. Productivity can be measured as mass per unit area per unit time, e.g. g m⁻² d⁻¹.

In Australia, actual evapotranspiration is mapped by the Bureau of Meteorology (Figure 1). There are high values along the coast north of Brisbane, Cape York and 'The Top End'. If Rosenzweig's correlations hold, these areas are the most ecologically productive in Australia. In Victoria the highest AET is around Warrnambool, Gippsland and, particularly, a small area on the east coast near Mallacoota. Many of the areas with the highest AET are heavily forested.

Figure 1: Average annual areal actual evapotranspiration (link to source)

Rosenzweig quantified the relationship between AET and productivity:

$\mathrm{log_{10}NAAP} = (1.66 \pm 0.27) \mathrm{log_{10}AET} - (1.66 \pm 0.01)$

Where:

• NAAP is the net annual above-ground productivity in grams per square meter.
• AET is annual actual evapotranspiration in mm.

The 95% confidence intervals for the slope and intercept are provided. Rosenzweig's paper was published in 1968 and the relationship between AET and productivity is better understood now (e.g. Jasechko et al., 2013). But the simple relationship between AET and productivity does provide an interesting perspective on the Australian landscape.

### References

Rosenzweig, M. L. (1968) Net primary productivity of terrestrial communities: prediction from climatological data. The American Naturalist 102(923): 67-74. DOI: 10.1086/282523 (link).

Jasechko, S., Sharp, Z., Gibson, J., Birks, S., Yi, Y. and Fawcett, P. (2013) Terrestrial water fluxes dominated by transpiration. Nature 496(7445): 347-350 (link).

## One thought on "Actual ET and productivity"

1. emacwater: Hi Tony, I wonder if evapotranspiration is also related to the forest productivity index, which I think is used to model carbon sequestration potential across Australia in the Nat carbon accounting toolbox... not sure. I really enjoy your posts, thank you! Kind regards, Emma MacKenzie
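To make the regression quoted above concrete, here is a small Python sketch (added for illustration, not from the original post). It evaluates only the central estimate of Rosenzweig's relationship, ignoring the confidence intervals, and the AET values fed to it are arbitrary:

```python
import math

def naap_from_aet(aet_mm_per_year):
    """Central estimate of Rosenzweig (1968):
    log10(NAAP) = 1.66 * log10(AET) - 1.66,
    with NAAP in g/m^2/year and AET in mm/year."""
    return 10 ** (1.66 * math.log10(aet_mm_per_year) - 1.66)

for aet in (250, 500, 1000):  # arbitrary example AET values (mm/year)
    print(aet, round(naap_from_aet(aet), 1))
```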
2018-07-18 16:28:49
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 1, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.4467748999595642, "perplexity": 8936.41143662463}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-30/segments/1531676590295.61/warc/CC-MAIN-20180718154631-20180718174631-00375.warc.gz"}
https://www.zbmath.org/?q=an%3A1159.03004
# zbMATH — the first resource for mathematics Logical and semantic purity. (English) Zbl 1159.03004 Preyer, Gerhard (ed.) et al., Philosophy of mathematics. Set theory, measuring theories, and nominalism. Frankfurt: Ontos Verlag (ISBN 978-3-86838-009-5/hbk). LOGOS. Studien zur Logik, Sprachphilosophie und Metaphysik 13, 40-52 (2008). Starting with Hilbert's formulation of the concern for the purity of the method ("one strives to use in the proof of a theorem as far as possible only those auxiliary means that are required by the content of the theorem"), the author distinguishes between two kinds of pure proofs of theorems: (i) logically pure proofs, which are proofs carried out from a minimal subset of axioms of a given axiom system, and (ii) semantically pure proofs, which "draw only on what must be understood and accepted in order to understand that theorem". He then shows that: (1) "Some results require more concepts and/or propositions to be proved than to be understood", and (2) "Some results require more concepts and/or propositions to be understood than to be proved". To establish (1), he uses the example of the casus irreducibilis for cubic polynomials (which requires complex numbers for its solution, but not for its understanding) and that of Gödel sentences in Peano Arithmetic (which can be understood as arithmetical statements, but not proved as such statements), whereas for (2) he uses the theorem stating that there are infinitely many primes, which can be proved in fragments of arithmetic, but requires a more generous axiom system to be understood. For the entire collection see [Zbl 1149.03003]. ##### MSC: 03A05 Philosophical and critical aspects of logic and foundations 00A30 Philosophy of mathematics 03B30 Foundations of classical theories (including reverse mathematics) 03F07 Structure of proofs
2021-03-01 10:42:38
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.848509669303894, "perplexity": 1544.5998496376783}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-10/segments/1614178362481.49/warc/CC-MAIN-20210301090526-20210301120526-00132.warc.gz"}
https://www.calculatored.com/math/calculus/limit-calculator
# Limit Calculator

Welcome to our online limit calculator, which is designed to help you in solving calculus problems related to limits. A limit is a vital tool in calculus, used to describe whether a sequence or function approaches a stable (fixed) value as its index or input approaches a given point. Limits can be defined for discrete sequences, for functions of one or more real variables, and for complex-valued functions. Don't worry! Our multivariable limit calculator can handle all of these. In this article, we will describe what the limit of a function is, show the step-by-step calculation, and mention applications in daily life.

## What is the limit of a function?

To explain it, let's suppose f is a real-valued function and b a real number. Intuitively speaking, the defining equation is as follows:

$$\lim_{x\to b} f(x) = L$$

This illustrates that f(x) can be made as near to L as desired by making x suitably close to b. In that case, the above expression is read: the limit of the function f of x, as x approaches b, is equal to L.

Example: for x = 1, (x²−1)/(x−1) = (1²−1)/(1−1) = 0/0. This is undefined (indeterminate), so we need another way to work it out. Instead of setting x = 1, this time we will try approaching it a little more closely:

| x | (x² − 1)/(x − 1) |
|---|---|
| 0.25 | 1.25 |
| 0.45 | 1.45 |
| 0.9 | 1.90 |
| 0.99 | 1.99 |
| 0.999 | 1.999 |
| 0.9999 | 1.9999 |

Now we have witnessed that, as x gets close to 1, the function gets closer to 2, and so we can express it as:

$$\lim_{x\to 1} \frac{x^2-1}{x-1} = 2$$

For any chosen degree of nearness ε, one can determine an interval around x₀ (previously assumed to be b) on which the values of f(x) differ from L by a quantity less than ε (i.e., if 0 < |x − x₀| < δ, then |f(x) − L| < ε). This can be used to determine whether a given number is a limit or not.

The estimation of limits, particularly of quotients, typically involves adjustments of the function in order to write it in a more obvious form, as shown in the above example. Limits are used to calculate the rate of change of a function, and as approximations throughout analysis, to get to the nearest possible value. For example, the area inside a curved region may be described as the limit of close estimations by rectangles.

## How to calculate limits?

There is a range of techniques used to compute limits; we will discuss some ways to calculate these values algebraically.

## By including the x value:

This method is simple: all you need to do is plug in the value of x that is being approached. If you get 0/0 (an indeterminate form), move on to the next method. But if you get a value, your function is continuous there and you've acquired the desired result.

Example: Find $$\lim_{x\to 5} \frac{x^2-4x+8}{x-4}$$

Now, put the value of x in the equation: $$\frac{5^2- 4\times 5 + 8}{5-4} =\frac{25-20+8}{1} = 13$$

## By Factoring:

If the first method fails, you can try the factorization technique, especially in problems involving polynomial expressions. In this method, we first simplify the equation by factoring, then cancel out the like terms, before introducing x.

Example: Find $$\lim_{x\to 4} \frac{x^2-6x-7}{x^2-3x-28}$$

Now, factorize the equation: $$\frac{(x-7)(x+1)}{(x+4)(x-7)}$$

Here, (x−7) cancels out; the next step is to put in the x value: $$\frac{(4+1)}{(4+4)}\;=\;\frac{5}{8}$$

## By rationalizing the numerator:

Functions having a square root in the numerator and a polynomial expression in the denominator require you to rationalize the numerator.
Example: Consider a function, where x approaches 13:

$$g(x)=\frac{\sqrt{x-4}-3}{x-13}$$

Here, including the x value fails, because we get a 0 in the denominator, and factoring fails, as we have no polynomial to factorize. In this case we multiply both the numerator and denominator by a conjugate.

Step 1: Multiply by the conjugate on top and bottom. The conjugate of our numerator is $$\sqrt{x-4}+3$$:

$$\frac{\sqrt{x-4}-3}{x-13}\cdot\frac{\sqrt{x-4}+3}{\sqrt{x-4}+3}$$

The numerator expands to

$$(x-4)+3\sqrt{x-4}-3\sqrt{x-4}-9$$

Step 2: Cancel out. The middle terms cancel, and the numerator simplifies to x − 13, giving:

$$\frac{x-13}{(x-13)(\sqrt{x-4}+3)}$$

Now, cancel out x − 13 from top and bottom, leaving:

$$\frac{1}{\sqrt{x-4}+3}$$

Step 3: After incorporating x = 13 into this simplified expression, we get the result 1/6.

Quite a lengthy and time-consuming process, isn't it? No problem: with our smart limit calculator, you will get the desired value in seconds.

The use of our limit calculator with steps:

Step 1: Input the required function.
Step 2: Enter the value to approach, then press compute; that's it, leave the math work to our gizmo. You will get the limit within seconds.

## Applications in daily life:

Real-life limits can be seen in a broad range of fields. For instance, the quantity of a new compound formed by a chemical reaction can be considered as the limit of a function as time approaches infinity. Likewise, measuring the temperature of an ice cube placed in a warm glass of water involves a limit. Limits are also applied in real-life estimates to compute derivatives: it is quite complex to approximate the derivative of a complicated motion, so engineers use small differences in the function to approximate the derivative. Other areas involve the estimation of social security income limits; use our social security earnings limit calculator for this purpose.

We are optimistic that this article will benefit you in understanding and applying the concepts of this important calculus tool. Best of luck!
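As a quick cross-check of the two worked examples above, here is a short SymPy sketch (added for illustration; not part of the original article):

```python
import sympy as sp

x = sp.symbols('x')

# The first example: (x^2 - 1)/(x - 1) as x -> 1
print(sp.limit((x**2 - 1) / (x - 1), x, 1))              # prints 2

# The rationalizing example: (sqrt(x-4) - 3)/(x - 13) as x -> 13
print(sp.limit((sp.sqrt(x - 4) - 3) / (x - 13), x, 13))  # prints 1/6
```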
2020-04-04 02:35:53
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8359817862510681, "perplexity": 602.2027862683942}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-16/segments/1585370519111.47/warc/CC-MAIN-20200404011558-20200404041558-00006.warc.gz"}
https://mathematica.stackexchange.com/questions/linked/2678?sort=hot
- **Replacement inside held expression** (4k views) — I wish to make a replacement inside a held expression: `f[x_Real] := x^2; Hold[{2., 3.}] /. n_Real :> f[n]` The desired output is ...
- **How to evaluate only arguments, but leave topmost expression unevaluated? [duplicate]** (204 views) — I want to represent expressions mostly in unevaluated form, but be able to evaluate their subparts. For example, how to evaluate only arguments, but leave the topmost expression unevaluated? For ...
- **Where can I find examples of good Mathematica programming practice?** (132k views) — I consider myself a pretty good Mathematica programmer, but I'm always looking out for ways to either improve my way of doing things in Mathematica, or to see if there's something nifty that I haven't ...
- **Can one identify the design patterns of Mathematica?** (7k views) — ... or are they unnecessary in such a high-level language? I've been thinking about programming style, coding standards and the like quite a bit lately, the result of my current work on a mixed .Net/...
- **Injecting a sequence of expressions into a held expression** (3k views) — Consider the following toy example: `Hold[{1, 2, x}] /. x -> Sequence[3, 4]` It will give `Hold[{1, 2, Sequence[3, 4]}]` ...
- **Compiling more functions that don't call MainEvaluate** (2k views) — I would like to use Compile with functions defined outside Compile. For example if I have the two basic functions ...
- **Pure Functions with Lists as arguments** (2k views) — Assuming I have two functions, example 1: `add[{x_, y_, z_}] := x + y - z; add[{1, 3, 5}]` If I use a pure function, I know I can write it as: ...
- **Determine whether some expression contains a given symbol** (3k views) — Given a symbol t and an expression expr, how can I determine whether or not the symbol t ...
- **Is there a way to require confirmation for execution of certain cells?** (398 views) — Often I have Notebooks where I generate several images and export them into files. Now when I want to change one image, I'd like to just re-evaluate the complete notebook, however I generally do not ...
- **How to implement a regular grammar?** (2k views) — What is the most simple, elegant way of implementing a rewrite-system defined as: $\Sigma = \{a_1, a_2, a_3, \dots\}$, $N = \{A_1, A_2, A_3, \dots\}$, $\{\alpha_1, \dots$ ...
- **Displaying a series obtained by evaluating a Taylor series** (1k views) — Description of problem: I would like to use Mathematica to display the series obtained by substituting a value for $x$ in a Taylor series expansion. The terms of the series will be rational numbers, ...
- **Removing calls to MainEvaluate when using inlined compiled closures** (555 views) — This question is tightly related to the answer Shaving the last 50 ms off NMinimize. There @OleksandR shows how inlined closures can be used to eliminate calls to ...
2019-11-14 05:52:31
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9159765839576721, "perplexity": 2505.718738222331}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-47/segments/1573496668004.65/warc/CC-MAIN-20191114053752-20191114081752-00559.warc.gz"}
https://tex.stackexchange.com/questions/436575/cant-replace-temp-labels-in-chemnum
# Can't Replace TEMP labels in Chemnum

When using this .eps file exported from ChemDraw (Mac OS X), I am unable to compile a scheme in TeXShop where the labels are replaced.

EDIT1: Upon adding \usepackage{auto-pst-pdf} and enabling shell escape in the engine, my compiler hangs and crashes.

EDIT2: I suspect I am not enabling shell escape correctly. After restoring the engine's latex input parameter to its default and compiling, I get: "You need to run LaTeX with the equivalent of "pdflatex -shell-escape""

```latex
\documentclass[twocolumn]{article}
\usepackage{tgbonum}
\usepackage[T1]{fontenc}
\usepackage{geometry}
\usepackage{chemstyle}
\usepackage{chemfig}
\usepackage{booktabs}
\usepackage{siunitx}
\usepackage[super]{natbib}
\usepackage{hyperref}
\usepackage{graphicx,float}
\usepackage[runs=2]{auto-pst-pdf}
\usepackage{auto-pst-pdf}
\usepackage{chemnum}
\usepackage{chemscheme}
\usepackage[font=small,labelfont=bf]{caption}
\usepackage{color}

\geometry{
  a4paper,
  total={170mm,257mm},
  left=20mm,
  top=20mm,}

\author{\small{X and Y}}
\date{\small{30\textsuperscript{th} May, 2018}}
\title{ABC}

\begin{document}

\twocolumn[
  \begin{@twocolumnfalse}
    \begin{abstract}
      xyz
    \end{abstract}
  \end{@twocolumnfalse}
]

\section{Results and Discussion}

\begin{scheme}
  \replacecmpd{first:compound}
  \replacecmpd{second:compound}
  \replacecmpd{third:compound}
  \replacecmpd{fourth:compound}
  \includegraphics[width=\linewidth]{untitled1}
  \caption{This is something!}
  \label{first:chem:scheme}
\end{scheme}

\end{document}
```

• for some reason the download does not work here… The EPS must contain the strings TMP1, TMP2 and so on as text strings. You can check if this is the case by opening the EPS with your editor and then searching for the strings. (Some versions of ChemDraw don't save the text as text…) – clemens Jun 15 '18 at 14:57
• @clemens how do I search for the strings? I've opened the file in chemdraw and can just see the structures. Please try download from: sendspace.com/file/vvshv7 – Hazinga Jun 15 '18 at 15:02
• @Hazinga: You can open the .eps file in a text editor of your choice and use its search function to find TMP. – leandriis Jun 15 '18 at 15:33
• You forgot \usepackage{auto-pst-pdf} (and shell-escape), see the chemnum manual for examples. – Marijn Jun 15 '18 at 18:42
• does a log file get written when it "crashes"? If so, what is in it? – David Carlisle Jun 16 '18 at 10:59
2019-10-15 04:00:19
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6477866172790527, "perplexity": 3493.540799488976}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-43/segments/1570986655864.19/warc/CC-MAIN-20191015032537-20191015060037-00431.warc.gz"}
http://www.purplemath.com/learning/viewtopic.php?p=3327
## Help with ballistics formula

Trigonometric ratios and functions, the unit circle, inverse trig functions, identities, trig graphs, etc.

standardtoaster Posts: 7 Joined: Tue Oct 27, 2009 8:08 pm Contact:

### Help with ballistics formula

This deals with my ballistics formula. I found out how to solve for the speed! As far as I know, this formula works properly. It only works if the surface the object is launched from, and flies over, is flat.

$v^2=\frac{dg}{\sin(2\theta)}$

$v$ = speed of the projectile
$d$ = distance traveled
$g$ = gravitational acceleration
$\theta$ = angle of the gun

That formula gives me the speed of the object as a scalar, as far as I know. And this is what a friend told me about how to get it as a vector: $[v\cos(\theta),\ v\sin(\theta)]$. Would I be able to solve for the scalar velocity if I use the first equation for $v$?

If this helps, here is what he said: you'll first need these values for the calculation: the angle the gun is pointing at (or the range you want, but then you'll need to decide whether the gun is at a low/high angle), the speed of the projectile (I don't think rockets will be easy to calculate) and the gravitational acceleration (default = 9.8). For now, I'll give an example: a gun firing a bullet at 100 m/s, a 30 degree launch angle, and a 10 m/s² gravitational acceleration (making it easier for you to understand). You'll need to determine the range at which the projectile will land. sin(30°)·100 = 50 m/s y velocity at launch. cos(30°)·100 = 50√3 ≈ 86.6 m/s x velocity at launch. The next step is calculating the flight time, in this case (y velocity / gravity)·2 (because your projectile will fall back down too). With the specified conditions, this equals a 10 second flight time. Now, multiplying the x velocity by the flight time, you get a 500√3 meter flight distance*. *Eliminating other factors, like air friction, and making sure the ground is flat. You'll need lots of math to do that... good luck.

standardtoaster Posts: 7 Joined: Tue Oct 27, 2009 8:08 pm Contact:

### Re: Help with ballistics formula

Come on! I need help with this! If this is of any help, have a look at this: http://en.wikipedia.org/wiki/Trajectory_of_a_projectile

standardtoaster Posts: 7 Joined: Tue Oct 27, 2009 8:08 pm Contact:

### Re: Help with ballistics formula

Success! With the help of my friend, who is amazing at calculus and trigonometry, I have done it! (except for air friction)

$D=\frac{v^2\sin(2\theta)}{g}$

Solve for $v$. In this case we will use $D = 100$ and $\theta = 30°$, so $\sin(2\theta) = \sin(60°) \approx 0.866$:

$100=\frac{v^2\sin(2\theta)}{g}$
$100\times9.81=v^2\times0.866$
$981=v^2\times0.866$
$981\div0.866=v^2$
$1132.794=v^2$
$\sqrt{1132.794}=\sqrt{v^2}$
$33.657=v$

For clarity, we will switch the position of the variable: $v=33.657$.

Now that we have the speed as a magnitude, we can finally split it into $x$ and $y$ components, evaluating the trig functions in degrees ($\cos(30°)\approx0.866$, $\sin(30°)=0.5$). We need to make sure that $y$ is positive, otherwise the projectile is shot downwards.

$[v\cos{\theta},\ v\sin{\theta}]$
$[33.657\times0.866,\ 33.657\times0.5]$
$[29.147,\ 16.829]$

This means that you can calculate how fast a projectile needs to be moving given the angle of the gun and the distance between the gun and your target. These equations give you the exact velocity it needs to be shot at. It is only exact if you were to shoot the projectile without any air friction. I'll keep this post updated, seeing as most of you aren't that big on ballistics. I hope that this is okay to do.
I hope that someone, other than me, will find this helpful.
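Below is a short Python sketch of the computation above (my addition, not part of the original thread; the function names launch_speed and velocity_components are made up for illustration):

```python
import math

def launch_speed(distance, angle_deg, g=9.81):
    """Speed needed to cover `distance` on flat ground, from v^2 = d*g/sin(2*theta)."""
    theta = math.radians(angle_deg)
    return math.sqrt(distance * g / math.sin(2 * theta))

def velocity_components(speed, angle_deg):
    """Split a scalar speed into (x, y) components for launch angle theta."""
    theta = math.radians(angle_deg)
    return speed * math.cos(theta), speed * math.sin(theta)

v = launch_speed(100, 30)              # ~33.657 m/s for a 100 m shot at 30 degrees
vx, vy = velocity_components(v, 30)    # ~ (29.147, 16.829)
print(round(v, 3), round(vx, 3), round(vy, 3))
```

Note that math.radians converts the launch angle before any trigonometric call; Python's math functions work in radians, so feeding them 30 directly would give meaningless components.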
2015-11-30 11:54:03
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 24, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7108206152915955, "perplexity": 911.4021878023349}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-48/segments/1448398461529.84/warc/CC-MAIN-20151124205421-00005-ip-10-71-132-137.ec2.internal.warc.gz"}
https://stacks.math.columbia.edu/tag/05XQ
Lemma 96.9.1. Let $Z \to U$ be a finite morphism of schemes. Let $W$ be an algebraic space and let $W \to Z$ be a surjective étale morphism. Then there exists a surjective étale morphism $U' \to U$ and a section $\sigma : Z_{U'} \to W_{U'}$ of the morphism $W_{U'} \to Z_{U'}$. Proof. We may choose a separated scheme $W'$ and a surjective étale morphism $W' \to W$. Hence after replacing $W$ by $W'$ we may assume that $W$ is a separated scheme. Write $f : W \to Z$ and $\pi : Z \to U$. Note that $f \circ \pi : W \to U$ is separated as $W$ is separated (see Schemes, Lemma 26.21.13). Let $u \in U$ be a point. Clearly it suffices to find an étale neighbourhood $(U', u')$ of $(U, u)$ such that a section $\sigma$ exists over $U'$. Let $z_1, \ldots , z_ r$ be the points of $Z$ lying above $u$. For each $i$ choose a point $w_ i \in W$ which maps to $z_ i$. We may pick an étale neighbourhood $(U', u') \to (U, u)$ such that the conclusions of More on Morphisms, Lemma 37.41.5 hold for both $Z \to U$ and the points $z_1, \ldots , z_ r$ and $W \to U$ and the points $w_1, \ldots , w_ r$. Hence, after replacing $(U, u)$ by $(U', u')$ and relabeling, we may assume that all the field extensions $\kappa (z_ i)/\kappa (u)$ and $\kappa (w_ i)/\kappa (u)$ are purely inseparable, and moreover that there exist disjoint union decompositions $Z = V_1 \amalg \ldots \amalg V_ r \amalg A, \quad W = W_1 \amalg \ldots \amalg W_ r \amalg B$ by open and closed subschemes with $z_ i \in V_ i$, $w_ i \in W_ i$ and $V_ i \to U$, $W_ i \to U$ finite. After replacing $U$ by $U \setminus \pi (A)$ we may assume that $A = \emptyset$, i.e., $Z = V_1 \amalg \ldots \amalg V_ r$. After replacing $W_ i$ by $W_ i \cap f^{-1}(V_ i)$ and $B$ by $B \cup \bigcup W_ i \cap f^{-1}(Z \setminus V_ i)$ we may assume that $f$ maps $W_ i$ into $V_ i$. Then $f_ i = f|_{W_ i} : W_ i \to V_ i$ is a morphism of schemes finite over $U$, hence finite (see Morphisms, Lemma 29.44.14). It is also étale (by assumption), $f_ i^{-1}(\{ z_ i\} ) = w_ i$, and induces an isomorphism of residue fields $\kappa (z_ i) = \kappa (w_ i)$ (because both are purely inseparable extensions of $\kappa (u)$ and $\kappa (w_ i)/\kappa (z_ i)$ is separable as $f$ is étale). Hence by Étale Morphisms, Lemma 41.14.2 we see that $f_ i$ is an isomorphism in a neighbourhood $V_ i'$ of $z_ i$. Since $\pi : Z \to U$ is closed, after shrinking $U$, we may assume that $W_ i \to V_ i$ is an isomorphism. This proves the lemma. $\square$ Comment #4923 by Robot0079 on Here is a conceptual proof. We call an etale sheaf $F$ over scheme $S$ surjective, if the structure map $F \to S$ is surjective. Here we identify etale sheaves with etale algebraic spaces over $S$. Note that this is equivalent to require stalks of $F$ is nonempty at every (geometric) point. Another equivalent condition is $F/S$ has sections locally. Then our lemma says that direct image of finite morphism preserve surjectivity. Now that we have formula for stalk of finite direct image functor, this is obvious. Comment #5191 by on @#4932: No, I don't think this argument works. The problem is to find a section of $W \to Z$ \'etale locally on $U$. Your argument tells us that \'etale locally on $Z$ we can do this. Or maybe I misunderstood what you were saying? Comment #6308 by Robot0079 on @#5191: A etale morphism locally has sections is equivalent to surjectivity, thus is also equivalent to having a global section after base changing along a surjective etale map. Let $f$ be $Z \to W$. 
The proposition amounts to saying that $u: f_*W \to Z$ has a U'-global section. As I said, u is surjective (etale). So we can just choose U' to be $f_*W$ (or an atlas of it, if you prefer a scheme), since the diagonal is always a section.

Comment #6309 by Robot0079 on Sorry, I mistyped some symbols. An etale morphism locally having sections is equivalent to surjectivity, thus is also equivalent to having a global section after base change along a surjective etale map. Let $f$ be $Z \to U$. The proposition amounts to saying that $u: f_*W \to U$ has a U'-global section. As I said, u is surjective (etale). So we can just choose U' to be $f_*W$ (or an atlas of it, if you prefer a scheme), since the diagonal is always a section.

Comment #6310 by Laurent Moret-Bailly on @#6309: this works if we know that $f_*W$ is an algebraic space, but this is proved only in the next proposition, right?

Comment #6311 by Laurent Moret-Bailly on Perhaps a simpler approach is to reduce (by a limit argument) to the case where $U$ is local and strictly henselian: then $Z$ is a sum of strictly henselian local schemes, so clearly $W\to Z$ has a section.

Comment #6312 by Robot0079 on @#6310: No, the direct image here is defined by first regarding W as an etale sheaf over Z, then applying the direct image functor for etale sheaves and identifying this etale sheaf over U as the desired etale algebraic space $f_*W$. Here we use the equivalence between etale algebraic spaces and etale sheaves, which can be proved by direct verification. So from this point of view, the next proposition is just the proper base change theorem for (non-abelian) etale sheaves. And using Lemma 59.91.5 we can just assume $Z \to U$ to be finite. And yes, I think what you say is exactly unwrapping this procedure, so that works as well.

Comment #6314 by on @Robot0079: OK, the thing with the diagonal, thinking of $f_*W$ as an algebraic space (via Lemma 65.27.3), and choosing an atlas works. Also, the reduction to strictly henselian local rings works too (with some additional effort). For me the argument as given is fine as well. The Stacks project has many, many arguments using etale localization of quasi-finite morphisms -- kind of like the $\epsilon$-$\delta$ arguments in analysis. The next time I go through all the comments I might add one or both of your arguments, but please feel free to code your arguments with details in latex and send it to me.

Comment #6420 by on OK, I am going to leave this as is. Others can contribute alternative proofs if they so desire and this is welcomed.
2023-01-31 07:09:12
{"extraction_info": {"found_math": true, "script_math_tex": 26, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 2, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 2, "x-ck12": 0, "texerror": 0, "math_score": 0.99001544713974, "perplexity": 262.64557993445015}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-06/segments/1674764499845.10/warc/CC-MAIN-20230131055533-20230131085533-00085.warc.gz"}
https://www.lmfdb.org/L/rational/2/4830
## Results (38 matches)

All 38 L-functions in this list share the column values $\alpha = 6.21$, $A = 38.5$, $d = 2$, $N = 2 \cdot 3 \cdot 5 \cdot 7 \cdot 23$, $\chi$ = 1.1, $\nu = 1.0$ and $w = 1$; they differ only in the sign $\epsilon$, the order of vanishing $r$, the first zero, and the origin. Each entry originates from the elliptic curve isogeny class 4830.x given in the Origin column, together with the modular form 4830.2.a.x and the modular form embedding 4830.2.a.x.1.1.

| Label | $\epsilon$ | $r$ | First zero | Origin |
|---|---|---|---|---|
| 2-4830-1.1-c1-0-13 | 1 | 0 | 0.728941 | 4830.l |
| 2-4830-1.1-c1-0-18 | 1 | 0 | 0.787736 | 4830.b |
| 2-4830-1.1-c1-0-20 | 1 | 0 | 0.797622 | 4830.e |
| 2-4830-1.1-c1-0-22 | 1 | 0 | 0.807211 | 4830.j |
| 2-4830-1.1-c1-0-26 | 1 | 0 | 0.861478 | 4830.x |
| 2-4830-1.1-c1-0-27 | 1 | 0 | 0.874656 | 4830.p |
| 2-4830-1.1-c1-0-28 | 1 | 0 | 0.875264 | 4830.w |
| 2-4830-1.1-c1-0-3 | 1 | 0 | 0.449782 | 4830.c |
| 2-4830-1.1-c1-0-32 | 1 | 0 | 0.920161 | 4830.u |
| 2-4830-1.1-c1-0-33 | 1 | 0 | 0.928277 | 4830.bd |
| 2-4830-1.1-c1-0-37 | 1 | 0 | 1.01579 | 4830.n |
| 2-4830-1.1-c1-0-38 | 1 | 0 | 1.02655 | 4830.bb |
| 2-4830-1.1-c1-0-39 | 1 | 0 | 1.03034 | 4830.be |
| 2-4830-1.1-c1-0-40 | 1 | 0 | 1.03134 | 4830.bh |
| 2-4830-1.1-c1-0-41 | 1 | 0 | 1.07670 | 4830.bi |
| 2-4830-1.1-c1-0-42 | -1 | 1 | 1.07985 | 4830.a |
| 2-4830-1.1-c1-0-46 | 1 | 0 | 1.13728 | 4830.bf |
| 2-4830-1.1-c1-0-51 | -1 | 1 | 1.18350 | 4830.d |
| 2-4830-1.1-c1-0-53 | 1 | 0 | 1.22733 | 4830.bj |
| 2-4830-1.1-c1-0-54 | -1 | 1 | 1.23060 | 4830.f |
| 2-4830-1.1-c1-0-55 | 1 | 0 | 1.24540 | 4830.bl |
| 2-4830-1.1-c1-0-58 | -1 | 1 | 1.33752 | 4830.g |
| 2-4830-1.1-c1-0-61 | -1 | 1 | 1.38238 | 4830.r |
| 2-4830-1.1-c1-0-62 | -1 | 1 | 1.39136 | 4830.i |
| 2-4830-1.1-c1-0-63 | -1 | 1 | 1.39846 | 4830.k |
| 2-4830-1.1-c1-0-67 | -1 | 1 | 1.42754 | 4830.s |
| 2-4830-1.1-c1-0-69 | -1 | 1 | 1.46169 | 4830.m |
| 2-4830-1.1-c1-0-70 | -1 | 1 | 1.47403 | 4830.o |
| 2-4830-1.1-c1-0-71 | -1 | 1 | 1.49758 | 4830.v |
| 2-4830-1.1-c1-0-72 | -1 | 1 | 1.50504 | 4830.t |
| 2-4830-1.1-c1-0-76 | -1 | 1 | 1.59437 | 4830.y |
| 2-4830-1.1-c1-0-78 | -1 | 1 | 1.62526 | 4830.z |
| 2-4830-1.1-c1-0-81 | -1 | 1 | 1.69715 | 4830.q |
| 2-4830-1.1-c1-0-82 | -1 | 1 | 1.72207 | 4830.ba |
| 2-4830-1.1-c1-0-83 | -1 | 1 | 1.73753 | 4830.bc |
| 2-4830-1.1-c1-0-85 | -1 | 1 | 1.89645 | 4830.bg |
| 2-4830-1.1-c1-0-86 | -1 | 1 | 2.08076 | 4830.bk |
| 2-4830-1.1-c1-0-9 | 1 | 0 | 0.661798 | 4830.h |
2021-06-20 09:29:19
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9808577299118042, "perplexity": 1194.3540925370264}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-25/segments/1623487660269.75/warc/CC-MAIN-20210620084505-20210620114505-00351.warc.gz"}
https://math.stackexchange.com/questions/1362603/probability-question-about-coin-flipping
# Probability question about coin flipping [duplicate]

If I flip an unbiased coin an infinite number of times, what is the probability that, at some point, the number of heads will be twice the number of tails? I have already tried making a branching probability tree, but I just get overwhelmed and it seems to go on forever... I don't know how to reconcile it.

Marked as duplicate by Did (probability), Jul 15 '15 at 22:39

• Interesting. Well, as a crude first shot I'd note that it has to happen at tosses divisible by 3, and the probability that it happens at toss 3n (possibly not for the first time) is $\frac{1}{2^{3n}} \binom{3n}{n}$. Then I'd add these up. Granted, we are double counting (and triple and so on), but at least this gives a start. – lulu Jul 15 '15 at 21:36
• @lulu: the sum of the first six terms is more than $1$ so that is indeed an upper bound. – Henry Jul 15 '15 at 22:06
• @Henry Ha! So it is. Funny...I'd have thought it was such an unlikely event that I could ignore most cross terms. Apparently not. – lulu Jul 15 '15 at 23:02

A closed form solution would be perhaps hard (too hard?) to derive, but you can at least come up with a straightforward upper bound by summing up the probability that this happens for $3n$ flips over all values of $n$. For given $3n$, the probability that you get twice as many heads as tails is ${{3n} \choose n}2^{-3n}$. So your upper bound is $\sum_n {{3n} \choose n} 2^{-3n}$. The terms very quickly go to zero as $n$ increases so you could even use inclusion-exclusion on the first few terms (giving a more complicated expression) and then just sum the rest of the terms (giving an over-estimate) that is still very close to the true answer.

Empirically it seems to be $$\dfrac{3}{8^1} + \dfrac{6}{8^2} + \dfrac{21}{8^3} + \dfrac{90}{8^4} + \dfrac{429}{8^5} + \dfrac{2184}{8^6} + \dfrac{11628}{8^7} + \dfrac{63954}{8^8} + \dfrac{360525}{8^9} + \dfrac{2072070}{8^{10}} + \dfrac{12096045}{8^{11}} + \dfrac{71524440}{8^{12}} + \dfrac{427496076}{8^{13}} + \dfrac{2578547760}{8^{14}} + \dfrac{15675792072}{8^{15}} + \dfrac{95951017602}{8^{16}} + \cdots$$ which seems to be about $0.573$. Added: As a sum it seems to be $\displaystyle \sum_{n=1}^{\infty} \dfrac{2}{8^n(3n-1)} {3n \choose n}$ which is $\frac{3}{4}(3-\sqrt{5})$.

• 0.572949017... – Did Jul 15 '15 at 22:40
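As a numerical cross-check of the figures above, here is a small Python sketch (my addition, not part of the original page). It sums the conjectured series, evaluates the closed form, and runs a rough simulation; the cutoffs of 200 terms, 1200 flips and 2000 trials are arbitrary choices:

```python
import random
from math import comb, sqrt

# Partial sum of the answer's series: sum_{n>=1} 2*C(3n, n) / (8^n * (3n - 1))
series = sum(2 * comb(3 * n, n) / (8 ** n * (3 * n - 1)) for n in range(1, 200))

closed_form = 0.75 * (3 - sqrt(5))

# Rough Monte Carlo: does #heads ever equal 2*#tails within the first 1200 flips?
def hits(flips=1200):
    heads = tails = 0
    for _ in range(flips):
        if random.random() < 0.5:
            heads += 1
        else:
            tails += 1
        if heads == 2 * tails:
            return True
    return False

estimate = sum(hits() for _ in range(2000)) / 2000
print(series, closed_form, estimate)  # all three come out near 0.5729
```

Truncating the simulation is harmless in practice: the quantity heads - 2·tails drifts downward by 1/2 per flip on average, so hits after the first thousand or so flips contribute a negligible amount.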
2019-08-23 11:47:13
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7551096081733704, "perplexity": 587.4236345014308}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-35/segments/1566027318375.80/warc/CC-MAIN-20190823104239-20190823130239-00297.warc.gz"}
http://zbmath.org/?q=an:05770810
# zbMATH — the first resource for mathematics

A robust numerical method for Stokes equations based on divergence-free $H$(div) finite element methods. (English) Zbl 05770810

Summary: A computational method based on a divergence-free $H$(div) approach is presented for the Stokes equations in this article. This method is designed to find velocity approximation in an exact divergence-free subspace of the corresponding $H$(div) finite element space. That is, the continuity equation is strongly enforced a priori and the pressure is eliminated from the linear system in calculation. A strength of this approach is that the saddle-point problem for Stokes equations is reduced to a symmetric positive definite problem in a subspace for which basis functions are readily available. The resulting discrete system can then be solved by using existing sophisticated solvers. The aim of this article is to demonstrate the efficiency and robustness of $H$(div) finite element methods for Stokes equations. The results not only confirm the existing theoretical results but also reveal additional advantages of the method in dealing with discontinuous boundary conditions.

##### MSC:
65N15 Error bounds (BVP of PDE)
65N30 Finite elements, Rayleigh-Ritz and Galerkin methods, finite methods (BVP of PDE)
76D07 Stokes and related (Oseen, etc.) flows
35B45 A priori estimates for solutions of PDE
35J50 Systems of elliptic equations, variational methods

##### Keywords:
finite element methods; divergence-free; Stokes equations
2014-04-18 05:57:12
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 4, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6826995611190796, "perplexity": 4812.230840074946}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-15/segments/1397609532573.41/warc/CC-MAIN-20140416005212-00118-ip-10-147-4-33.ec2.internal.warc.gz"}
https://www.physicsforums.com/threads/quantum-physics-measurement-eigenvalues-functions.380241/
# Quantum Physics - Measurement/Eigenvalues(functions)

1. Feb 21, 2010

### Plutoniummatt

1. The problem statement, all variables and given/known data

For a certain system, an observable A has eigenvalues 1 and -1, with corresponding eigenfunctions $$u_+$$ and $$u_-$$. Another observable B also has eigenvalues 1 and -1, but with corresponding eigenfunctions: $$v_+ = \frac{u_+ + u_-}{\sqrt{2}}$$ $$v_- = \frac{u_+ - u_-}{\sqrt{2}}$$ Find the possible results of a measurement of C = A+B

2. Relevant equations

3. The attempt at a solution

Measured values are just the eigenvalues; in this case, the eigenvalues of C are just 2 and -2? But the answer is $$\pm\sqrt{2}$$... I'm aware that the $$1/\sqrt{2}$$ in the eigenfunctions of B will make my answer "correct", but then they're not the eigenvalues anymore?

2. Feb 21, 2010

### gabbagabbahey

Why do you say this?

3. Feb 21, 2010

### Plutoniummatt

if C = A + B, and A and B both have eigenvalues of 1 and -1, then the eigenvalues of C are 2 and -2?

4. Feb 21, 2010

### gabbagabbahey

No, why would you think this was true?

5. Feb 21, 2010

### Plutoniummatt

then how do I do this question?

6. Feb 21, 2010

### gabbagabbahey

The same way one usually finds the eigenvalues of an operator...

7. Feb 21, 2010

### Plutoniummatt

8. Feb 21, 2010

### Plutoniummatt

does anyone have the patience to tell me how to do this problem?

9. Feb 21, 2010

### gabbagabbahey

If I gave you an operator in matrix form and asked you to calculate its eigenvalues, could you do it?

10. Feb 21, 2010

### Plutoniummatt

yes...

11. Feb 21, 2010

### gabbagabbahey

Okay, so if you can put $C$ into matrix form, you can find its eigenvalues... do you see how to put $C$ into matrix form? How about putting $A$ into matrix form (you are given both its eigenvalues and eigenfunctions, so this should be trivial)?

12. Feb 21, 2010

### Plutoniummatt

yes, I can put A in matrix form; for B, do I need to use the transformation matrix and transform B into the A basis?

13. Feb 21, 2010

### gabbagabbahey

Good, and what do you get when you do that? You can do it without a transformation matrix since you are given $B$'s eigenfunctions in terms of $A$'s eigenfunctions.

14. Feb 21, 2010

### Plutoniummatt

$\begin{pmatrix} 1 & 1\\1 & -1 \end{pmatrix}$ for A. Is there a systematic way of getting B, or do I just write down the eigenvectors of B in terms of the eigenvectors of A and see which numbers I should put in?

15. Feb 21, 2010

### gabbagabbahey

That doesn't look right, how did you end up with this?

16. Feb 21, 2010

### Plutoniummatt

$\begin{pmatrix} 1 & 0\\0 & -1 \end{pmatrix}$ for A, sorry I messed up the typing

17. Feb 21, 2010

### gabbagabbahey

That's better, so I see you are representing the eigenfunctions of $A$ as $$u_{+}\to\begin{pmatrix}1 \\ 0 \end{pmatrix}$$ and $$u_{-}\to\begin{pmatrix} 0 \\ 1 \end{pmatrix}$$ correct? What does this make $v_{\pm}$ in this representation?

18. Feb 21, 2010

### Plutoniummatt

yes, my $v_{\pm}$ would be: $\frac{1}{\sqrt{2}} \begin{pmatrix} 1 \\ \pm 1 \end{pmatrix}$ so B would be $\begin{pmatrix} 0 & 1 \\ 1 & 0 \end{pmatrix}$ but I had to just look at it and see what numbers I should assign for B. Is there a better way of doing it? Oh, and I got the correct eigenvalues! Thanks so much... I was really confused

Last edited: Feb 21, 2010

19. Feb 21, 2010

### gabbagabbahey

Good. No, this isn't quite correct.
Any operator $F$ can be decomposed in terms of its eigenvalues $\lambda_{i}$ and corresponding eigenfunctions $f_i$ (provided they are orthogonal) according to the equation $$F=\sum_{i}\lambda_i f_i f_i^\dagger$$ (where $f_i^\dagger$ is the adjoint of $f_i$, so that each $f_i f_i^\dagger$ is the projector onto $f_i$). Does this look familiar? If so, you can use it to construct $A$ and $B$ from their eigenvalues/eigenvectors (this is what I had thought you had done to find $A$, but apparently you used some other method)

20. Feb 21, 2010

### Plutoniummatt

for B I used the transformation matrix: $\frac{1}{\sqrt{2}}\begin{pmatrix} 1 & 1 \\1 & -1 \end{pmatrix}$ which means B = $\frac{1}{2}\begin{pmatrix} 1 & 1 \\1 & -1 \end{pmatrix} \begin{pmatrix} 1 & 0 \\0 & -1 \end{pmatrix} \begin{pmatrix} 1 & 1 \\1 & -1 \end{pmatrix}$ which is $\begin{pmatrix} 0 & 1 \\1 & 0 \end{pmatrix}$
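To check the thread's result numerically, here is a small NumPy sketch (my addition, not part of the original thread) that builds $B$ from its spectral decomposition in the $u_\pm$ basis and confirms the eigenvalues of $C = A + B$:

```python
import numpy as np

# u+ -> (1, 0), u- -> (0, 1), so A is diagonal in this basis
A = np.array([[1, 0], [0, -1]])

# Build B from its spectral decomposition: B = sum_i lambda_i * v_i v_i^T
v_plus = np.array([1, 1]) / np.sqrt(2)
v_minus = np.array([1, -1]) / np.sqrt(2)
B = np.outer(v_plus, v_plus) - np.outer(v_minus, v_minus)  # equals [[0, 1], [1, 0]]

C = A + B
print(np.linalg.eigvalsh(C))  # [-1.41421356  1.41421356], i.e. +/- sqrt(2)
```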
2017-11-17 22:20:58
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7398573160171509, "perplexity": 954.3378876202564}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-47/segments/1510934803944.17/warc/CC-MAIN-20171117204606-20171117224606-00576.warc.gz"}
http://lss.cecs.anu.edu.au/lectures/
# Lectures

Week 1 Week 2

## Fundamentals of Metalogic (John Slaney)

Keywords Completeness theorems, Model theory, proof theory.

Abstract This course provides an introduction to the metatheory of elementary logic. Following a "refresher" on the basics of notation and the use of classical logic as a representation language, we concentrate on the twin notions of models and proof. An axiomatic system of first order logic is introduced and proved complete for the standard semantics, and then we give a very brief overview of the basic concepts of proof theory and of formal set theory. The material in this course is presupposed by other courses in the Summer School, which is why it is presented first.

Pre-requisites It is assumed that students are at least familiar with logical notation for connectives and quantifiers, and can manipulate truth tables, some kind of proof system, semantic tableaux or the like. Normally they will have done a first logic course at tertiary level. Some basic mathematical competence is also presupposed: to be comfortable with the notion of a formal language, follow proofs by induction, be OK with basic set-theoretic notation and the like.

Lecturer John Slaney is the founder and convenor of the Logic Summer School. He originated in England, ever so long ago, and discovered Australia in 1977. Undeterred by the fact that it had apparently been discovered before, he gradually moved here, joining the Automated Reasoning Project in 1988 on a three-year contract because he could not think of anywhere better to be a logician. He is still here and still can't. His research interests pretty much cover the waterfront, including non-classical logics, automated deduction and all sorts of logic-based AI. He was formerly the leader of the Logic and Computation group at the ANU, and of the NICTA program of the same name.

## Computability and Incompleteness (Michael Norrish)

Keywords Computability; recursive functions and Turing machines; diagonalisation; Peano arithmetic and Gödel numbering; undecidability of first-order logic; incompleteness of Peano arithmetic.

Abstract We begin with two formal accounts of the intuitive notion of computability: recursive functions and Turing machines. They turn out to be the same, hence Church's Thesis: functions that can be computed by any means are precisely the partial recursive functions. Then we revisit the old Cretan who says that all Cretans always lie, and other forms of diagonalisation argument such as the Halting Problem. Next we look at an axiomatic theory of arithmetic, known as Peano Arithmetic (PA), and show how we can represent all recursive functions in PA. This will lead to Gödel numbering: a neat trick enabling us to effectively encode notions like "theorem", "proof" and "provability in PA" within PA itself. We spend a while discussing the Diagonalisation Lemma and the Derivability Conditions. Finally, in one fell swoop we prove the undecidability of first-order logic (Church's Theorem), the undefinability of truth (Tarski's Theorem), the incompleteness of PA given consistency of PA (Gödel's First Theorem) and the unprovability of consistency of PA given consistency of PA (Gödel's Second Theorem).

Pre-requisites Foundations of first-order logic

Background reading G. Boolos and R. Jeffrey, Computability and Logic.

Lecturer Michael Norrish is an Associate Professor at the Australian National University. Before joining the ANU, he worked for Data61, CSIRO.
He is originally from Wellington, New Zealand, and is very happy to be back in the southern hemisphere after an extended stint in Cambridge, England. It was there that he did his PhD, and then spent three years as a research fellow at St. Catharine's College. His research interests include interactive theorem-proving, and the application of this technology to areas of theoretical computer science, particularly the semantics of programming languages.

## Constructive Logic and Realisability (Dirk Pattinson)

Keywords Constructive Logic; Program Extraction; Heyting Arithmetic; Realisability Interpretation.

Abstract Compared with classical logic, the main difference is that existential quantifiers (resp. disjunction) are not dual to universal quantifiers (resp. conjunction): to establish the truth of an existentially quantified formula, one needs to produce a witness of existence, and the (classically valid) principle of reductio ad absurdum is not available. As a consequence, every proof of a formula of the form 'forall x. exists y. A(x, y)' engenders an algorithm (a recursive function) that computes y from x. The realisability interpretation makes this precise in that it allows us to compute, from the proof of a formula, a realiser that represents its computational content. We begin with an informal introduction to constructive reasoning, and the Brouwer-Heyting-Kolmogorov interpretation of constructive logic. We then introduce a natural deduction system to make this precise, and study the relationship between classical and constructive logic, in particular the double negation translation. We then introduce Heyting Arithmetic, and the notion of number realisability. Our first main result is the soundness theorem: every provable formula is realisable. Here, the realiser is a natural number that we think of as a Gödel number of a partial recursive function that represents the computational content of the formula. We then vary the notion of realisability to obtain the disjunction and existence property for constructive logic, and, time permitting, introduce Heyting Arithmetic in Higher Types, together with function realisability. Here, realisers are terms of Gödel's System T, i.e. they can be thought of as functional programs.

Pre-requisites Familiarity with predicate logic and basic recursion theory, in the scope of the 'Foundations of Metalogic' and 'Computability and Incompleteness' courses.

• Thomas Streicher, Introduction to Constructive Logic and Mathematics, Lecture notes. Available from the author's home page and indeed an excellent resource that covers almost all of the material presented.
• Helmut Schwichtenberg and Stan Wainer, Proofs and Computation. Cambridge University Press, 2012. Advanced material that in particular studies different logical systems and program extraction from classical proofs. In particular, it develops a theory of computable functions that allows one to discuss program extraction in the presence of general recursion.
• Anne Troelstra, Metamathematical Investigations into Intuitionistic Arithmetic, Springer 1973. A classical reference for intuitionistic mathematics.

Lecturer Dirk is a mathematician turned computer scientist. Prior to joining ANU he held a (senior) lectureship at Imperial College London, a lectureship at the University of Leicester and was a Research Associate at LMU Munich.

## Modelling Concurrent Systems (Robert J. van Glabbeek)

Abstract This course introduces students to state-of-the-art techniques in modelling concurrent systems.
The focus will be on the rationale behind the design decisions underlying the more successful models of concurrency found in the literature, viewed from philosophical, mathematical and computational perspectives. The course covers labelled transition systems, process algebra and Petri nets; operational and denotational semantics; semantic equivalences and refinement relations; modal and temporal logic for concurrent systems; fairness assumptions and proving liveness properties - stating that something good will happen eventually - and impossibility results in modelling mutual exclusion.

Lecturer Professor Rob van Glabbeek has been active as a research scientist in the field of Formal Methods since 1984, of which five years were spent at CWI in Amsterdam and twelve years at Stanford University. In addition he has had visiting appointments at the Technical University of Munich, GMD in Bonn, INRIA in Sophia Antipolis, the University of Edinburgh, the University of Cambridge, and l'Université de la Méditerranée in Marseilles. He has also been active as a consultant for Ricoh Innovations, California, in the area of workflow modelling. From 2004 he worked for NICTA, Sydney, Australia, until that research institute merged into Data61, CSIRO, on 1 July 2016. Since then he has been Chief Research Scientist at Data61, CSIRO in Sydney. Additionally, he is Conjoint Professor at the School of Computer Science and Engineering at the University of New South Wales, and a Research Affiliate at the Concurrency Group in the Computer Science Department of Stanford University.

## Software Verification with Whiley: the Complete Guide (David J. Pearce)

Abstract The goal of a verifying compiler is to blend software verification with programming to the point where they are indistinguishable. Researchers have pursued this idea for decades but, finally, we are starting to see real progress. Systems such as Dafny, Why3, Frama-C and Whiley are pushing software verification beyond the boundaries of academic research. But what is a verifying compiler? How does it work? What is it like to verify software with one? In this lecture series, you'll get an insider's look at the Whiley verifying compiler. This uses Boogie/Z3 under the hood to offer powerful verification, whilst providing a surface language resembling modern programming languages. We'll use Whiley to verify some realistic software, whilst exploring challenges and pitfalls encountered along the way. We'll also take a deep dive into how a verifying compiler, such as Whiley, works and study practical aspects of Hoare logic and Separation Logic. Finally, we'll reach the limits of tools like Whiley and consider what the future holds.

Lecturer David Pearce (@whileydave) is an Associate Professor at Victoria University of Wellington, New Zealand. He graduated with a PhD from Imperial College London in 2005, and took up a lecturer position at Victoria University of Wellington, NZ. David's PhD thesis was on efficient algorithms for pointer analysis of C. During his time as a PhD student he undertook internships at Bell Labs, New Jersey, where he worked on compilers for FPGAs; and at IBM Hursley, UK, where he worked with the AspectJ development team on profiling systems. Over the years, David has developed a number of algorithms which have seen practical uptake. His algorithm for field-sensitive pointer analysis was subsequently incorporated into GCC.
Likewise, he developed a space-efficient variant of Tarjan's algorithm for finding Strongly Connected Components (now included in SciPy). His algorithm for dynamic topological sort is used in the Abseil C++ library, Google's TensorFlow, the Monosat SAT solver and JGraphT. Finally, he co-designed the most efficient algorithm for computing Tutte Polynomials to date, developing a C++ implementation from scratch. This has since been incorporated into Mathematica. Since 2009, David has been developing the Whiley Programming Language (whiley.org), which is designed specifically to simplify program verification. The language employs Boogie/Z3 for verification, and supports several backends including JavaScript and Java (with prototype backends for TypeScript and C++ underway). You can find out more about David's work on his personal homepage (whileydave.com) and even try out Whiley for yourself (whileylabs.com).

## Information flow security for concurrent code (Kirsten Winter)

Abstract Secure software development relies on the detection of vulnerabilities and information leaks present in the program. Ideally this should be supported by a rigorous analysis. This course presents an approach to such an analysis suitable for concurrent code that uses shared-variable communication between its threads. The analysis is conducted through backwards-directed formal reasoning in a thread-local fashion, enabled by pairing standard rely/guarantee reasoning with the backwards analysis. To complicate matters further, the analysis is set out to also handle shared-variable concurrency where mutual exclusion of accesses to shared variables is not enforced through a locking mechanism. This can occur through oversight during program development or in the encoding of non-blocking algorithms, which are used for efficiency reasons. Such code displays data races between threads and as a result can be affected by weak memory behaviour of the underlying hardware. Our approach additionally investigates whether weak memory behaviour can influence the secure information flow within the code. The course will be structured as follows:

• Lecture 1: Introduction to information flow security
• Lecture 2: Weakest precondition reasoning adapted to information flow security
• Lecture 3: Weakest precondition reasoning enhanced with rely/guarantee reasoning for information flow security
• Lecture 4: Weak memory behaviour on modern multi-core hardware
• Lecture 5: An analysis of reordering-interference freedom for concurrent code under the effect of hardware weak memory

Lecturer Kirsten Winter received a PhD degree in Computer Science from the Technical University Berlin, Germany, in 2001, after which she held a position as a Research Fellow at the University of Queensland, Australia, for 18 years. During this time her research interests were centered around the analysis of concurrent software and systems, spanning a variety of approaches and techniques, including model checking, static program analysis, and proof systems based on a rely-guarantee refinement calculus. With those techniques she has targeted the analysis of formal requirements models, concurrent programs, as well as concurrent objects suitable for deployment on multi-core architectures. This background became useful when she joined the Department of Defence, Science and Technology - the research branch of the department - as a researcher in 2019 to tackle the security analysis of programs.
Not surprisingly, this research has taken the path of a formal approach to handle the problem.

## Interactive Theorem Provers and Cryptography (Thomas Haines)

Keywords cryptography; interactive theorem provers; easycrypt; encryption; commitments

Abstract This course will introduce several fundamental cryptographic primitives and examples of situations in which they are useful. We will rigorously define what the primitives are and what it means for them to be secure in the interactive theorem prover easycrypt. We will then code various instantiations of these primitives and prove to the interactive theorem prover that they are secure. The main primitives analysed will be commitment schemes and encryption schemes. The aim of this course is to give a basic understanding of how cryptography can provide formal security guarantees and how those guarantees can be checked by interactive theorem provers.

Pre-requisites Basic abstract algebra

Background reading D. Boneh and V. Shoup, A Graduate Course in Applied Cryptography

Lecturer Thomas Haines is an applied cryptographer specialising in the security of distributed systems. After completing his PhD in 2017, he worked as a research and development manager at Polyas GmbH in Berlin. While at Polyas he was responsible for the development of electronic voting solutions, including design, implementation, documentation and certification. Thomas was a research fellow at the Norwegian University of Science and Technology before joining ANU in 2021. He has been involved with auditing over a dozen different electronic voting systems.

## Introduction to Homotopy Type Theory and Univalent Foundations of Mathematics (Taichi Uemura)

Abstract This course introduces participants to homotopy type theory and univalent foundations of mathematics. Homotopy type theory is a branch of mathematics, logic, and computer science that combines homotopy theory and type theory. Univalent foundations are a style of doing mathematics in which mathematics is formalized in a type theory with homotopy-theoretic constructions and axioms. The course covers fundamental concepts in homotopy type theory and univalent foundations: the univalence axiom; higher inductive types; h-levels. We also see some examples of theorems in univalent foundations that are stated and proved differently from ordinary formulations in set-based mathematics.

Lecturer Taichi Uemura has been working on the semantics of type theories, especially those related to homotopy type theory. He received a PhD degree from the University of Amsterdam in 2021 and is currently working at Stockholm University as a postdoc.
2021-12-01 18:00:26
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.450340211391449, "perplexity": 1525.2255417001413}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-49/segments/1637964360881.12/warc/CC-MAIN-20211201173718-20211201203718-00294.warc.gz"}
https://math.stackexchange.com/questions/1806871/what-is-the-difference-between-frac-mathrmd-mathrmdx-and-frac-part
# What is the difference between $\frac{\mathrm{d}}{\mathrm{d}x}$ and $\frac{\partial}{\partial x}$?

Is there not any difference between $\frac{\mathrm{d}}{\mathrm{d}x}$ and $\frac{\partial}{\partial x}$ as long as your function has one variable?

$$f(x) = x^3\implies \left\{\begin{aligned}&\dfrac{\mathrm{d}}{\mathrm{d}x}f = \dfrac{\mathrm{d}f}{\mathrm{d}x}=\dfrac{\mathrm{d}(x\mapsto x^3)}{\mathrm{d}x} = x\mapsto 3x^2&\color{green}{\checkmark}\\&\dfrac{\partial}{\partial x}f = \dfrac{\partial f}{\partial x}= \dfrac{\partial(x\mapsto x^3)}{\partial x} = x\mapsto 3x^2&\color{green}{\checkmark}\end{aligned}\right.$$

And if so, why does this change with two (or more) variables?

$$\require{cancel} f(x,y) = yx^3\implies \left\{\begin{aligned}&\color{grey}{\cancel{\dfrac{\mathrm{d}}{\mathrm{d}x}f = }} \dfrac{\mathrm{d}f}{\mathrm{d}x}=\dfrac{\mathrm{d}((x,y)\mapsto yx^3)}{\mathrm{d}x} \neq x\mapsto 3yx^2&\color{green}{\checkmark}\\&\dfrac{\partial}{\partial x}f = \dfrac{\partial f}{\partial x} =\dfrac{\partial((x,y)\mapsto x^3)}{\partial x} = x\mapsto 3yx^2&\color{red}{\mathcal{X}}\end{aligned}\right.$$

I get that it is supposed to be something like this

$$f(x,y) = yx^3\implies \left\{\begin{aligned}&\color{grey}{\cancel{\dfrac{\mathrm{d}}{\mathrm{d}x}f = }} \dfrac{\mathrm{d}f}{\mathrm{d}x}=\dfrac{\mathrm{d}((x,y)\mapsto yx^3)}{\mathrm{d}x} \neq\\&\cdots\quad (x,y)\mapsto 3y\dfrac{\mathrm{d}\color{red}{(x\mapsto x^3)}}{\mathrm{d}x}+\dfrac{\mathrm{d}\color{red}{(y\mapsto y)}}{\mathrm{d}x}x^3 =\\&\cdots\quad (x,y)\mapsto 3y\color{red}{(x\mapsto x^2)}+\dfrac{\mathrm{d}\color{red}{(y\mapsto y)}}{\mathrm{d}x}x^3&\color{green}{\checkmark}\\&\dfrac{\partial}{\partial x}f = \dfrac{\partial f}{\partial x} = \dfrac{\partial((x,y)\mapsto x^3)}{\partial x} = x\mapsto 3yx^2&\color{green}{\checkmark}\end{aligned}\right.$$

• The derivative of $x^3$ is $3 x^2$. – Travis Willse May 31 '16 at 11:46
• It comes down to how $\frac {d}{dx}$ and $\frac{\partial}{\partial x}$ handle expressions with $y$. We don't know a priori that $\frac{dy}{dx} = 0$, but by definition we have $\frac{\partial y}{\partial x} = 0$. – Ben Grossmann May 31 '16 at 11:46
• Read my answer to the following question: math.stackexchange.com/questions/1626028/… – orion May 31 '16 at 11:55
• Have you looked at the limit definitions of both? If you have not, it would be instructive. There is no distinction in one dimension. – th0masb May 31 '16 at 11:56
• There is a difference between partial and total derivatives. But both are the same in the one-dimensional case. Del is used for partial and d is used for the total derivative. Edit: for whatever reason I did not see the answer of @ChristianBlatter. So read this. – Abbraxas May 31 '16 at 19:16

Neither of the answers given so far is correct. The correct answer is somewhat disappointing: we use $\partial$ instead of $\Bbb d$ purely for historical reasons. Back in the 18th century, mathematicians were not as rigorous as today.
French mathematicians, in particular, working on what we call today "partial differential equations", encountered the following problem: imagine that you have a quantity $u$ that depends on position $x$ and on time $t$ (in modern parlance, you're talking about a smooth function $(t,x) \mapsto u(t,x)$);

• you first need a notation meaning "the derivative of $u$ with respect to $t$";
• alternatively, you may evaluate $u$ on some trajectory $t \mapsto x(t)$ and derive this quantity (which in modern parlance is $t \mapsto u(t,x(t))$) with respect to $t$, so you need a notation for "the derivative of $u(t,x(t))$ with respect to $t$".

Where is the problem, then? The problem resides in the fact that back then the concept of "function" did not exist, therefore often times mathematicians used to write $u(t,x)$ instead of $u(t,x(t))$ (i.e. they were using $u(t,x)$ for both $u(t,x)$ and $u(t,x(t))$). In this case, using the notation $\frac {\Bbb d} {\Bbb d t}$ would have created confusion (you may still encounter this ambiguity in books about mechanics written in the '60s -yes!-, especially in many Soviet ones). Therefore, they decided to come up with the notation $\frac {\partial} {\partial t}$ for the first case above, keeping the old one ($\frac {\Bbb d} {\Bbb d t}$) for the second. The inventor of this "curly d" was Legendre, who wrote:

"Pour éviter toute ambiguité, je répresenterai par $\frac {\partial u} {\partial x}$ le coéfficient de $x$ dans la différence de $u$, & par $\frac {\Bbb d u} {\Bbb d x}$ la différence complète de $u$ divisée par $\Bbb dx$."

("In order to avoid all ambiguity, I shall represent by $\frac {\partial u} {\partial x}$ the coefficient of $x$ in the difference of $u$, & by $\frac {\Bbb d u} {\Bbb d x}$ the complete difference of $u$ divided by $\Bbb dx$.")

What Legendre says is that he considers a Taylor expansion of order $1$ of $u$ around some $(x_0,y_0)$, and the coefficient of $x - x_0$ will be called $\frac {\partial u} {\partial x}$, in order to distinguish it from the coefficient of $x-x_0$ in the Taylor expansion of order $1$ of $u(x,y(x))$ - which would be denoted $\frac {\Bbb d u} {\Bbb d x}$.

As you can see, everything was meant to resolve an ambiguity in notation, an ambiguity that disappeared with the birth of modern mathematics and its new, more rigorous notations. Why keep it, then? Bluntly put - for historical reasons and laziness. Why change it? This change, if done, should be adopted by every country, and students should be taught both the "old" version (in order to be able to read the literature published so far) and the "new" one. Well, a bit of life wisdom tells you that it's very difficult to make all humans accept one decision - plus it's not really an important one, and it's nice to carry with us this piece of living history (who doesn't love history and old things?).

Is this the only oddity kept until the present time? No, there are many others. Here are two more: why don't we write the simpler $\dfrac {\partial f} {\partial x_1 ^{i_1} \dots \partial x_n ^{i_n}}$ instead of the more complicated $\dfrac {\partial ^{i_1 + \dots + i_n}f} {\partial x_1 ^{i_1} \dots \partial x_n ^{i_n}}$? And why do we write $\frac {\partial ^2 f} {\partial x^2}$ instead of $\frac {\partial ^2 f} {\partial ^2 x}$ (two things that confuse many students upon first encounter)? Again, for historical reasons that nobody bothered correcting anymore.

• $\frac{\partial^2 f}{\partial x^2}$ makes some sense for consistency with operator composition.
• $\frac{\partial^2 f}{\partial x^2}$ makes some sense for consistency with operator composition. In other words it is meant to be read like $\left ( \frac{\partial}{\partial x} \right )^2 f$. – Ian May 31 '16 at 18:58
• Also, the distinction between $d$ and $\partial$ is still sometimes useful inasmuch as it allows us to fall back on the "old" style. For instance it allows us to use the same letter for $u(t,x)$ as we use for $u(t,x(t))$; in the new style we should really define $u(t,x)$ and $y(t)$ and then introduce $v(t)=u(t,y(t))$ instead. This can be very annoying when what we are really trying to describe is a quantity (like temperature) that we would like to refer to by just one name. – Ian May 31 '16 at 19:01
• This answer, as well as the others, shows why there is still a distinction between total and partial derivatives. To that extent, all the answers are correct. This answer also makes some excellent points about the weakness of the notation, but it would be stronger if it showed specific modern mathematical notation that represents these concepts better. How would you write $\partial u/\partial t$ in your first example? (I might write $u_1$, but maybe you have a better idea.) – David K May 31 '16 at 19:13
• The discussion of the Euler-Lagrange equation $\frac{\mathrm d}{\mathrm dt}\frac{\partial L}{\partial \dot q_i}-\frac{\partial L}{\partial q_i}=0$ in Sussman and Wisdom's Structure and Interpretation of Classical Mechanics is relevant here. They point out its ambiguity and argue for the functional notation $D(\partial_2 L\circ\Gamma[q]) - \partial_1 L\circ\Gamma[q] = 0$ instead. Personally I find the latter harder to read, but that's at least partially because I'm not as used to it. – user856 May 31 '16 at 20:07
• If one function is defined on $\mathbb R \times M$ then so is the other. The symbols $t$ and $x$ in $(t,x)\mapsto u(t,x)$ are just placeholders--they don't have any external meaning, and have meaning within the expression only according to where they occur. If the second parameter of $u$ must be of type $M$, then whatever you put in the second position after $u$ will be of type $M$. Now, if you were to write $(x,t)\mapsto u(t,x)$, that would be a different function. – David K Jun 3 '16 at 13:45

The notation $\dfrac {\partial}{\partial x}$ indicates that all variables other than $x$ should be treated as constant, whereas $\dfrac{d}{dx}$ would treat the other variables as exactly that: variable. Thus for $f(x,y)=yx^3$, we have

$$\begin{align} \frac{\partial f}{\partial x}&=3yx^2 \\ \frac {df}{dx}&=3yx^2+ \frac {dy}{dx}x^3 \end{align}$$

• Of course not, nobody said that $y$ is a function of $x$; in both situations, they are independent variables and the OP asks why do we use "a different kind of d" when dealing with several variables. The correct answer is somewhat disappointing: for historical reasons - and nobody bothered changing this anymore. – Alex M. May 31 '16 at 14:47
• @AlexM., your "of course not" suggests that the distinction between $y$ and $y(x)$ is entirely clear, and that it's inconceivable that '$y$' could implicitly be intended to refer to a function of $x$. While this may be technically true if strictly correct notation is always used, in reality, casual use of $y$ to mean $y(x)$ is pretty common, particularly in introductory calculus, so I think the different notation does help make things a lot clearer, even if it's not strictly needed. – jst345 Jun 1 '16 at 11:39
If in a certain situation three variables $x$, $y$, $z$ are identified as "truly" independent, and are agreed on as the variables used for identifying points of the underlying "ground set" $\Omega$, then any function $f:\>\Omega\to{\mathbb R}$ appears as a function $f:\>(x,y,z)\mapsto f(x,y,z)$, and ${\partial f\over\partial x}:\>\Omega\to{\mathbb R}$ is the "partial derivative with respect to $x$" we all are fond of. If, however, in such a situation a certain quantity $u$ depending on $(x,y,z)$ plays a rôle, then the expression ${\partial f\over\partial u}$ makes no sense.

To elaborate further: if in such a situation we are given a curve $$t\mapsto\bigl(x(t),y(t),z(t)\bigr)\in \Omega$$ describing the orbit of a spaceship in time, then the astronaut in this spaceship feels the temperature $$\hat f(t):=f\bigl(x(t),y(t),z(t)\bigr)\ .$$ It then makes sense to talk about the ("total") derivative $${d\hat f\over dt}={\partial f\over\partial x}\dot x(t)+{\partial f\over\partial y}\dot y(t)+{\partial f\over\partial z}\dot z(t)\ .$$

The first one $$\dfrac{\partial f}{\partial x} \overset{\color{orange}{?}}{=} \dfrac{\partial(\color{orange}{(x,y)}\mapsto x^3)}{\partial x} = x\mapsto 3x^2$$ is wrong since you change the function $f$, which should be $$(x,y)\mapsto yx^3$$

In the second part

$$\left\{\begin{align}&\dfrac{\mathrm{d}}{\mathrm{d}x}f = \dfrac{\mathrm{d}f}{\mathrm{d}x}=\dfrac{\mathrm{d}((x,y)\mapsto yx^3)}{\mathrm{d}x} \neq\\&\cdots\quad (x,y)\mapsto 3y\dfrac{\mathrm{d}\color{red}{(x\mapsto x^3)}}{\mathrm{d}x}+\dfrac{\mathrm{d}\color{red}{(y\mapsto y)}}{\mathrm{d}x}x^3 =\\&\cdots\quad (x,y)\mapsto 3y\color{red}{(x\mapsto x^2)}+\dfrac{\mathrm{d}\color{red}{(y\mapsto y)}}{\mathrm{d}x}x^3\\&\dfrac{\partial}{\partial x}f = \dfrac{\partial f}{\partial x} \overset{\color{orange}{?}}{=} \dfrac{\partial(\color{orange}{x}\mapsto x^3)}{\partial x} = x\mapsto 3x^2\end{align}\right.$$

• in the first line, the notation $\frac{d}{dx}$ is incorrectly used, since $f$ is a function of two variables;
• in the fourth line, you make the same mistake as in the very first one: namely, $x\mapsto x^3$ should be $$x\mapsto yx^3$$

If you want to use $\frac{d}{dx}$ for the function $f(x,y)=yx^3$, a possible way is treating $y$ as a parameter and defining $$g_y(x):=yx^3.$$ Then $$\frac{d}{dx}g_y(x)=\frac{d(x\mapsto yx^3)}{dx}=y\cdot\frac{d(x\mapsto x^3)}{dx}=y\cdot(3x^2)$$

• In the line with $\color{orange}{\dfrac{\partial}{\partial x}}$ I was mainly curious if $f$ becomes a one-variable function; is it $\dfrac{\partial((x,y)\mapsto yx^3)}{\partial x}$ or $\dfrac{\partial(x\mapsto yx^3)}{\partial x}$? Given that $\dfrac{\partial}{\partial x}$ treats $y$ as a constant I was assuming the latter. I admit that I may have done the parsing a bit too quick, but whether it's $yx^3$ or $y^2x^5$ is less important. Thanks for the correction anyhow! – Frank Vel May 31 '16 at 15:29
• So $\dfrac{\mathrm{d}}{\mathrm{d}x}f \neq \dfrac{\mathrm{d}f}{\mathrm{d}x}$ when $f$ is a multi-variable function? In which case: how do I treat $\dfrac{\mathrm{d}}{\mathrm{d}x}f$ using the $f = (x_1,x_2,\cdots,x_n) \mapsto f(x_1,x_2,\cdots,x_n)$ definition? – Frank Vel May 31 '16 at 15:32
• When $n>1$, we never use the notation $\frac{d}{dx}f$. – Jack May 31 '16 at 16:35
• You might want to take a look at en.wikipedia.org/wiki/Partial_derivative#Basic_definition – Jack May 31 '16 at 16:36
• I guess I'm just a bit confused about $\dfrac{\partial ((x,y)\mapsto yx^3)}{\partial x} = x \mapsto 3yx^2$, as a variable simply vanishes...
the $\mapsto$-notation seems a bit awkward here... Is it even sensible to try using it? – Frank Vel May 31 '16 at 16:45

In the following we consider real-valued functions \begin{align*} &f:\mathbb{R}\rightarrow\mathbb{R}&\text{and}\qquad\quad&g:\mathbb{R}^2\rightarrow\mathbb{R}\\ &x\mapsto f(x)&&(x,y)\mapsto g(x,y) \end{align*}

We denote with $\frac{\partial}{\partial x}$ the partial derivative of a function $f$ with respect to the variable $x$, and with $\frac{d}{dx}$ the total derivative of a function $f$ with respect to the variable $x$.

• In the one-variable case there is no difference between the total derivative and the partial derivative of $f$ with respect to $x$.
• In the multi-variable case there is in general a difference between the total and the partial derivative.

Multivariable case: We consider the total derivative of $g=g(x,y)$ with respect to $x$. It is defined as \begin{align*} \frac{dg}{dx}&=\frac{\partial g}{\partial x}\cdot\frac{dx}{dx}+\frac{\partial g}{\partial y}\cdot\frac{dy}{dx}\\ &=\frac{\partial g}{\partial x}+\frac{\partial g}{\partial y}\cdot\frac{dy}{dx}\tag{1} \end{align*} and this is generally not the same as \begin{align*} \frac{\partial g}{\partial x} \end{align*}

Example: $g(x,y)=yx^3$. We obtain \begin{align*} \frac{d}{dx}g(x,y)&=\frac{\partial }{\partial x}g(x,y)+\frac{\partial }{\partial y}g(x,y)\cdot\frac{dy}{dx}\\ &=\frac{\partial }{\partial x}(yx^3)+\frac{\partial }{\partial y}(yx^3)\cdot\frac{dy}{dx}\\ &=3x^2y+x^3\frac{dy}{dx} \end{align*} whereas \begin{align*} \frac{\partial }{\partial x}g(x,y)&=\frac{\partial }{\partial x}(yx^3)\\ &=3x^2y \end{align*}

We observe that the total derivative and the partial derivative are different in general. They are the same in the example above only when \begin{align*} \frac{dy}{dx}\equiv 0 \end{align*}

Single variable case: In this case the total derivative and the partial derivative are the same, since applying (1) in the single-variable case gives \begin{align*} \frac{df}{dx}&=\frac{\partial f}{\partial x}\cdot\frac{dx}{dx}=\frac{\partial f}{\partial x} \end{align*}
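The distinction can also be checked mechanically. Here is a minimal SymPy sketch (my own illustration, with arbitrary names): the total derivative treats $y$ as an unknown function of $x$, while the partial derivative treats it as an independent symbol.

from sympy import symbols, Function, diff

x = symbols('x')
y = Function('y')(x)           # y treated as an unknown function of x
f = y * x**3
print(diff(f, x))              # total derivative: x**3*Derivative(y(x), x) + 3*x**2*y(x)

yc = symbols('y')              # y treated as an independent variable
print(diff(yc * x**3, x))      # partial derivative: 3*x**2*y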
https://puzzling.stackexchange.com/questions/40806/the-brain-curdling-eldritch-horrors-and-the-third-dimension/40809
# The brain-curdling eldritch horrors and the third dimension

### We're in the year 2016 so it's finally time to take puzzles to the 3rd dimension!

So let's see what those beautiful times bring us today. Every one of you clever sods could as well have a look at the real thing.

PC controls:

• Hold left mouse button: rotate cube
• Mouse wheel: zoom in and out
• Hold right mouse button: move cube parallel to screen

Mobile controls (tested on Android and iOS):

• tap: stop automatic rotation of cube
• 1 finger: rotate cube
• 2 fingers: zoom in and out
• 3 fingers: move cube parallel to screen

The cube shown above appears to have, like most cubes, 6 faces, of which each one carries a more or less mysterious entity. After decrypting all of them we should be able to find out what this is all about!

• Ok, this is cool. – Deusovi Aug 17 '16 at 19:55
• very creative ! – Maxime B Aug 18 '16 at 7:57
• @MaximeB Thank you. Still there is one person who downvoted the question. I'd really like to know the reason! Maybe my sloppy CAD? – Avigrail Aug 18 '16 at 7:59
• Nevermind the downvote, they hate us cause they ain't us – IAmInPLS Aug 18 '16 at 8:44
• @Emrakul how can I upload the 3D model to Stack Exchange and make it available forever? Right now it's in my Dropbox and I may delete it at some point but don't want the link to break. – Avigrail Sep 11 '16 at 20:08

These form a rebus-like clue.

1: U-turn
2: flat (British synonym for "apartment")
3: binary for into (interpreting tall cylinders as 1 and short cylinders as 0)
4: wire
5: frame

So the message is... "turn flat into wireframe". Doing that on the URL given lets us see a chest inside! It lines up perfectly with the magnifying glass and contains the number 5106 in it. Also, the room itself (the "flat") has the words "PSE ID" in it. So the solution is... Avigrail himself! He's user number 5106 here on PSE.

• I see what you mean about the 5106 thing. Not sure how it fits in yet. Maybe this is some sort of rebus? – Beastly Gerbil Aug 17 '16 at 20:18
• ...How did you get inside the box?! I can barely manipulate the thing, it rotates about the 5th face it seems (possibly a clue?) – Avik Mohan Aug 17 '16 at 20:19
• @AvikMohan: Scroll to zoom in. Then zoom through the box. – Deusovi Aug 17 '16 at 20:20
• Also 4+5 could be 'wireframe' which would fit with the modern 3d sort of theme – Avik Mohan Aug 17 '16 at 20:23
• @AvikMohan: Ooh, clever. – Deusovi Aug 17 '16 at 20:23

I think that others have found the answer and not realized it! They're just missing the meaning of the faces. Here's the breakdown of each face as others have already found:

1: Turn (turn arrow)
2: Model (architectural model of a room) <<< This is the one others are missing
3: Into (Binary. Short Cylinder = 0, Tall = 1)
4: Wire (a literal wire segment)
5: Frame (Picture frame)
6: Look Here (Magnifying glass)

The key that others have missed is that the message tells you to... Change the model into a wireframe model by changing the URL from

http://www.viewstl.com/?embedded&bgcolor=white&url=https://www.dropbox.com/s/btne9unv3vcw8t7/cubeRev5.stl?dl=1&shading=flat&noborder=yes&orientation=front%22%20style=%22border:0;margin:0;width:100%;height:100%

to

http://www.viewstl.com/?embedded&bgcolor=white&url=https://www.dropbox.com/s/vzmlvyypck8gf4b/cubeRev4.stl?dl=1&shading=wireframe&noborder=yes&orientation=front%22%20style=%22border:0;margin:0;width:100%;height:100%

Which reveals the secret interior!
Zooming inside the model (back in solid form for visibility) shows the treasure chest. Zooming in more lets us read the secret message of 5106 (Avigrail's user ID number).

• Aaaah I was so close! Well done for getting it though – Beastly Gerbil Aug 17 '16 at 20:47
• @BeastlyGerbil: Nope, he's missing something. – Deusovi Aug 17 '16 at 20:48
• How funny is it that we found so much of the puzzle without taking the proper steps haha! And good pickup on the user-id thing, never would have found that – Avik Mohan Aug 17 '16 at 20:49
• No, it's not "model" - it's "flat"! – Deusovi Aug 17 '16 at 21:09
• @Deusovi I could see it either way. "Flat" gives a more direct link for how to change the URL but the image isn't an entire flat; it's just a single room. I think you're probably right about what OP intended, though. – Engineer Toast Aug 17 '16 at 21:13

The hint says "You can count to six, right?" Well I can, and I've noticed there is a number on each face:

1: The U-turn arrow
2: The living room/lounge
3: The cylinders (@Deusovi points out it's 'into' in binary)
4: The wire
5: The picture frame
6: There isn't actually a number that I've noticed, but the last face left is the magnifying glass

EDIT: @Deusovi points out that going inside the box leads us to 5106. The magnifying glass probably is telling us to zoom in.

Putting it all together I think we'll end up with a 6 word sentence: You turn life into wire frame 5106. (Or something like that.) You turn from the U-turn, life from the living room, wire frame from the wire and frame, into from the cylinders as binary, and 5106 from inside the box. Need to fill in the gaps though to make it make sense :/ (thanks for help from @AvikMohan!)

• I think the U-turn could be interpreted as 'You turn', so that perhaps the thing goes something along the lines of 'you turn (life? from living room) into a wireframe' but I'm not sure how the 5106 fits in. I think the 5106 has a bit more to it. – Avik Mohan Aug 17 '16 at 20:37
• @AvikMohan nice thinking! – Beastly Gerbil Aug 17 '16 at 20:37
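For the binary face, here is a quick sketch of the decoding step, assuming the cylinders spell 8-bit ASCII read left to right; only the tall = 1 / short = 0 convention comes from the answers above, the exact layout on the face is a guess:

bits = '01101001' '01101110' '01110100' '01101111'   # four hypothetical bytes
word = ''.join(chr(int(bits[i:i+8], 2)) for i in range(0, len(bits), 8))
print(word)   # -> 'into'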
https://math.stackexchange.com/questions/605841/is-this-jordan-decomposition-possible
# Is this Jordan decomposition possible?

Is this Jordan form possible? $$J=\begin{pmatrix} \lambda & 1 & 0 & 0 & 0 & 0 & 0\\ 0 & \lambda & 0 & 0 & 0 & 0 & 0\\ 0 & 0 & \lambda & 1 & 0 & 0 & 0\\ 0 & 0 & 0 &\lambda & 0 & 0 & 0\\ 0 & 0 & 0 & 0 &\lambda & 1 & 0\\ 0 & 0 & 0 & 0 &0& \lambda & 1\\ 0 & 0 & 0 & 0 & 0 & 0 & \lambda\\ \end{pmatrix}$$

Motivation: I was trying to find how I can know the Jordan form of a $7\times 7$ matrix $A$ with one eigenvalue of multiplicity 7. Suppose that $\dim(\ker(A-\lambda\mathbb{I}))=3$, which means that there will be 3 Jordan blocks. And $\dim(\ker(A-\lambda\mathbb{I})^3)=\dim(\ker(A-\lambda\mathbb{I})^4)$, i.e., the biggest Jordan block is $3\times 3$. This yields two possibilities:

$$J_1 =\begin{pmatrix} \lambda & 1 & 0 & 0 & 0 & 0 & 0\\ 0 & \lambda & 0 & 0 & 0 & 0 & 0\\ 0 & 0 & \lambda & 1 & 0 & 0 & 0\\ 0 & 0 & 0 &\lambda & 0 & 0 & 0\\ 0 & 0 & 0 & 0 &\lambda & 1 & 0\\ 0 & 0 & 0 & 0 &0& \lambda & 1\\ 0 & 0 & 0 & 0 & 0 & 0 & \lambda\\ \end{pmatrix}$$ and $$J_2 =\begin{pmatrix} \lambda & 0 & 0 & 0 & 0 & 0 & 0\\ 0 & \lambda & 1 & 0 & 0 & 0 & 0\\ 0 & 0 & \lambda & 1 & 0 & 0 & 0\\ 0 & 0 & 0 &\lambda & 0 & 0 & 0\\ 0 & 0 & 0 & 0 &\lambda & 1 & 0\\ 0 & 0 & 0 & 0 &0& \lambda & 1\\ 0 & 0 & 0 & 0 & 0 & 0 & \lambda\\ \end{pmatrix}$$

What information do I need in order to distinguish both cases? I think that $J_1$ is not possible, so the only possibility is $J_2$, but I can't prove it. Should I look at the dimension of $\ker(A-\lambda\mathbb{I})^2$?

• Why on earth would one have to be ruled out? Do Jordan matrices adhere to the laws of the Highlander universe? By writing out the matrix, you've shown it's possible. There's no reason to conclude that it suddenly can't actually be such a decomposition. (Unless there's some conditions you aren't mentioning.) – rschwieb Dec 13 '13 at 20:08
• @rschwieb, I think the OP is trying to determine the JCF using the various data collected, and trying to distinguish between the two cases presented. – vadim123 Dec 13 '13 at 20:12
• @vadim123 OK, so I guess the answer to my question is "more conditions." Look forward to any clarifications along those lines. – rschwieb Dec 13 '13 at 20:13
• This may be helpful: wiki.math.toronto.edu/TorontoMathWiki/images/1/12/… – vadim123 Dec 13 '13 at 20:16
• Should "Show" in the last line be "Should"? – Jakub Konieczny Dec 13 '13 at 20:23

Both options are possible. Just notice that both $A = J_1$ and $A = J_2$ satisfy all the assumptions you impose. To distinguish the two cases, you would need to be able to say something more about the sizes of the Jordan blocks.

For easier notation, let $B := A - \lambda I$. It would help if you could say something about ranks (codimensions of kernels) of powers of $B$. You already know that $rank \ B = 4$ and $rank \ B^3 = 0$. For case $J_1$ you have $rank \ B^2 = 1$; for case $J_2$ you have $rank \ B^2 = 2$. So, if you can figure out $rank \ B^2$, you have the solution.

• Does $rank\ B^2 = 1$ mean that $\dim\ker B^2=6$, and $rank \ B^2 = 2$ that $\dim\ker B^2=5$? Right? – jinawee Dec 13 '13 at 20:36
• Yes. $rank = 7 - \dim \ker$. It's just that it's $4$ letters rather than $6$. – Jakub Konieczny Dec 13 '13 at 20:38
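A quick way to see the two rank profiles is to build both candidate forms and let a CAS compute the ranks. A small SymPy sketch (the concrete eigenvalue lam = 2 is arbitrary):

from sympy import eye, diag

def jordan_block(lam, n):
    # n x n Jordan block: lam on the diagonal, 1 on the superdiagonal
    m = lam * eye(n)
    for i in range(n - 1):
        m[i, i + 1] = 1
    return m

lam = 2
J1 = diag(jordan_block(lam, 2), jordan_block(lam, 2), jordan_block(lam, 3))
J2 = diag(jordan_block(lam, 1), jordan_block(lam, 3), jordan_block(lam, 3))
for J in (J1, J2):
    B = J - lam * eye(7)
    print([(B**k).rank() for k in (1, 2, 3)])
# prints [4, 1, 0] for J1 and [4, 2, 0] for J2, matching the answer above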
https://blancosilva.wordpress.com/tag/geometry-2/
### Archive Posts Tagged ‘geometry’

## Robot stories

Every summer before school was over, I was assigned a list of books to read. Mostly nonfiction and historical fiction, but in fourth grade there was that first science fiction book. I often remember how that book made me feel, and marvel at the impact that it had in my life. I had read some science fiction before—Wells's Time Traveller and War of the Worlds—but this was different. This was a book with witty and thought-provoking short stories by Isaac Asimov. Each of them delivered drama, comedy, mystery and a surprise ending in about ten pages. And they had robots. And those robots had personalities, in spite of their very simple programming: The Three Laws of Robotics.

1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
2. A robot must obey the orders given to it by human beings, except where such orders would conflict with the First Law.
3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.

Back in the 1980s, robotics—understood as autonomous mechanical thinking—was no more than a dream. A wonderful dream that fueled many children's imaginations and probably shaped the career choices of some. I know in my case it did.

Fast forward some thirty-odd years, when I met Astro: one of three research robots manufactured by the French company Aldebaran. This NAO robot found its way into the computer science classroom of Tom Simpson at Heathwood Hall Episcopal School, and quickly learned to navigate mazes, recognize some students' faces and names, and even dance the Macarena! It did so with effortless coding: a basic command of the computer language Python, and some idea of object-oriented programming.

I could not let this opportunity pass. I created a small undergraduate team with Danielle Talley from USC (a brilliant sophomore in computer engineering, with a minor in music), and two math majors from Morris College: my geometry expert Fabian Maple, and a MacGyver-style problem solver, Wesley Alexander. Wesley and Fabian are supported by a Department of Energy-Environmental Management grant to Morris College, which funds their summer research experience at USC. Danielle is funded by the National Science Foundation through the Louis Stokes South Carolina-Alliance for Minority Participation (LS-SCAMP).

They spent the best of their first week on this project completing a basic programming course online. At the same time, the four of us reviewed some of the mathematical tools needed to teach Astro new tricks: basic algebra and trigonometry, basic geometry, and basic calculus and statistics. The emphasis—I need to point out in case you missed it—is in the word basic.

### Talk the talk

The psychologist seated herself and watched Herbie narrowly as he took a chair at the other side of the table and went through the three books systematically. At the end of half an hour, he put them down, "Of course, I know why you brought these."

The corner of Dr. Calvin's lip twitched, "I was afraid you would. It's difficult to work with you, Herbie. You're always a step ahead of me."

"It's the same with these books, you know, as with the others. They just don't interest me. There's nothing to your textbooks. Your science is just a mass of collected data plastered together by makeshift theory — and all so incredibly simple, that it's scarcely worth bothering about."

"It's your fiction that interests me.
Your studies of the interplay of human motives and emotions" – his mighty hand gestured vaguely as he sought the proper words.

Liar! — Isaac Asimov

Astro can understand what I say, and maintain conversations with me. He has four microphones for voice recognition and sound localization, and two speakers for text-to-speech synthesis. But he sometimes gets confused: I say "acute", and he understands "abide," for example. This might sound frustrating by all standards. But mathematicians can't be judged by any reasonable standard. Far from seeing this as a shortcoming, my team used as motivation the mechanics behind his speech recognition functionality, and they wrote some code to turn him into an expert Mastermind player.

Wait, what? How are these two things possibly connected?

Since Astro's hearing is far from perfect, when he hears a word which is not clear, he gathers a short list of possible candidates. He then chooses the most likely guess based upon different considerations—using clues from context, for example. Based on similar strategies, he can be taught to be a perfect player of 5-letter-word Mastermind: pick any five-letter word in the English language. Any word, provided it has no repeated letters. Astro chooses one random word as his first guess, and requests a score like in Mastermind. To illustrate this process, I will show you how the game went when I chose Fabian's favorite word: "acute."

Astro: Is it firth?
Me: NO
Astro: How many letters of my guess are in your word, in the right position? How many letters of my guess are in your word, in the wrong position?
Me: 1, 0
Astro: Is it foams?
Me: NO
Astro: How many letters of my guess are in your word, in the right position? How many letters of my guess are in your word, in the wrong position?
Me: 0, 1
Astro: Is it junta?
Me: NO
Astro: How many letters of my guess are in your word, in the right position? How many letters of my guess are in your word, in the wrong position?
Me: 1, 2
Astro: Is it acute?
Me: YES
Astro: Wonderful, I found it in 4 attempts!

I don't want to get too technical here, but will mention some of the ideas. The main algorithm is based on techniques of numerical root finding and solving nonlinear equations — nothing complex: high-school level bracketing by bisection, or Newton's method. To design better winning strategies, my team exploits the benefits of randomness. The analysis of this part is done with basic probability and statistics.
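The post does not include the team's code, but the scoring step of the game is easy to sketch. A minimal version, assuming both words have five distinct letters each, which is exactly the rule stated above:

def score(guess, secret):
    # number of letters in the right position ...
    right_place = sum(g == s for g, s in zip(guess, secret))
    # ... and number of shared letters that sit in the wrong position
    right_letter = len(set(guess) & set(secret)) - right_place
    return right_place, right_letter

print(score('firth', 'acute'))   # (1, 0), as in the transcript above
print(score('junta', 'acute'))   # (1, 2)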
### Walk the walk

Donovan's pencil pointed nervously. "The red cross is the selenium pool. You marked it yourself."

"Which one is it?" interrupted Powell. "There were three that MacDougal located for us before he left."

"I sent Speedy to the nearest, naturally; seventeen miles away. But what difference does that make?" There was tension in his voice. "There are penciled dots that mark Speedy's position."

And for the first time Powell's artificial aplomb was shaken and his hands shot forward for the map. "Are you serious? This is impossible."

"There it is," growled Donovan.

The little dots that marked the position formed a rough circle about the red cross of the selenium pool. And Powell's fingers went to his brown mustache, the unfailing signal of anxiety.

Donovan added: "In the two hours I checked on him, he circled that damned pool four times. It seems likely to me that he'll keep that up forever. Do you realize the position we're in?"

Runaround — Isaac Asimov

Astro moves around too. It does so thanks to a sophisticated system combining one accelerometer, one gyrometer and four ultrasonic sensors that provide him with stability and positioning within space. He also enjoys eight force-sensing resistors and two bumpers. And that is only for his legs! He can move his arms, bend his elbows, open and close his hands, or move his torso and neck (up to 25 degrees of freedom for the combination of all possible joints). Out of the box, and without much effort, he can be coded to walk around, although in a mechanical way: he moves forward a few feet, stops, rotates in place or steps to a side, etc. A very naïve way to go from A to B retrieving an object at C could be easily coded in this fashion, as the diagram shows.

Fabian and Wesley devised a different way to code Astro, taking full advantage of his inertial measurement unit. This will allow him to move around smoothly, almost like a human would. The key to their success? Polynomial interpolation and plane geometry. For advanced solutions, they need to learn about splines, curvature, and optimization. Nothing they can't handle.
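The post only names the tools (polynomial interpolation and plane geometry), so the following NumPy sketch is just a guess at the flavor of the idea: fit one low-degree polynomial per coordinate through a few waypoints, then sample the curve densely to get one smooth path instead of stop-and-turn segments. The waypoints here are made up.

import numpy as np

waypoints = np.array([[0.0, 0.0], [1.0, 0.5], [2.0, 0.4], [3.0, 1.2]])
t = np.linspace(0, 1, len(waypoints))        # path parameter at the waypoints

px = np.polyfit(t, waypoints[:, 0], 3)       # x(t) as a cubic
py = np.polyfit(t, waypoints[:, 1], 3)       # y(t) as a cubic

ts = np.linspace(0, 1, 50)
path = np.column_stack([np.polyval(px, ts), np.polyval(py, ts)])
print(path[:3])                              # smooth positions to walk through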
### Sing me a song

He said he could manage three hours and Mortenson said that would be perfect when I gave him the news. We picked a night when she was going to be singing Bach or Handel or one of those old piano-bangers, and was going to have a long and impressive solo.

Mortenson went to the church that night and, of course, I went too. I felt responsible for what was going to happen and I thought I had better oversee the situation.

Mortenson said, gloomily, "I attended the rehearsals. She was just singing the same way she always did; you know, as though she had a tail and someone was stepping on it."

One Night of Song — Isaac Asimov

Astro has excellent eyesight and understanding of the world around him. He is equipped with two HD cameras, and a bunch of computer vision algorithms, including facial and shape recognition. Danielle's dream is to have him read from a music sheet and sing or play the song in a toy piano. She is very close to completing this project: Astro is able now to identify partitures, and extract from them the location of the pentagrams. Danielle is currently working on identifying the notes and the clefs. This is one of her test images, and the result of one of her early experiments.

Most of the techniques Danielle is using are accessible to any student with a decent command of vector calculus, and enough scientific maturity. The extraction of pentagrams and the different notes on them, for example, is performed with the Hough transform. This is a fancy term for an algorithm that basically searches for straight lines and circles by solving an optimization problem in two or three variables.

The only thing left is an actual performance. Danielle will be leading Fabian and Wes, and with the assistance of Mr. Simpson's awesome students Erica and Robert, Astro will hopefully learn to physically approach the piano, choose the right keys, and play them in the correct order and speed. Talent show, anyone?

## Areas of Mathematics

For one of my upcoming talks I am trying to include an exhaustive mindmap showing the different areas of Mathematics, and somehow, how they relate to each other. Most of the information I am using has been processed from years of exposure in the field, and a bit of help from Wikipedia. But I am not entirely happy with what I see: my lack of training in the area of Combinatorics results in a rather dry treatment of that part of the mindmap, for example. I am afraid that the same could be said about other parts of the diagram. Any help from the reader to clarify and polish this information will be very much appreciated.

And as a bonus, I included a $\LaTeX$ script to generate the diagram with the aid of the tikz libraries.

\tikzstyle{level 2 concept}+=[sibling angle=40]
\begin{tikzpicture}[scale=0.49, transform shape]
\path[mindmap,concept color=black,text=white]
node[concept] {Pure Mathematics} [clockwise from=45]
child[concept color=DeepSkyBlue4]{
node[concept] {Analysis} [clockwise from=180]
child { node[concept] {Multivariate \& Vector Calculus} [clockwise from=120]
child {node[concept] {ODEs}}}
child { node[concept] {Functional Analysis}}
child { node[concept] {Measure Theory}}
child { node[concept] {Calculus of Variations}}
child { node[concept] {Harmonic Analysis}}
child { node[concept] {Complex Analysis}}
child { node[concept] {Stochastic Analysis}}
child { node[concept] {Geometric Analysis} [clockwise from=-40]
child {node[concept] {PDEs}}}}
child[concept color=black!50!green, grow=-40]{
node[concept] {Combinatorics} [clockwise from=10]
child {node[concept] {Enumerative}}
child {node[concept] {Extremal}}
child {node[concept] {Graph Theory}}}
child[concept color=black!25!red, grow=-90]{
node[concept] {Geometry} [clockwise from=-30]
child {node[concept] {Convex Geometry}}
child {node[concept] {Differential Geometry}}
child {node[concept] {Manifolds}}
child {node[concept,color=black!50!green!50!red,text=white] {Discrete Geometry}}
child { node[concept] {Topology} [clockwise from=-150]
child {node [concept,color=black!25!red!50!brown,text=white] {Algebraic Topology}}}}
child[concept color=brown,grow=140]{
node[concept] {Algebra} [counterclockwise from=70]
child {node[concept] {Elementary}}
child {node[concept] {Number Theory}}
child {node[concept] {Abstract} [clockwise from=180]
child {node[concept,color=red!25!brown,text=white] {Algebraic Geometry}}}
child {node[concept] {Linear}}}
node[extra concept,concept color=black] at (200:5) {Applied Mathematics}
child[grow=145,concept color=black!50!yellow] {
node[concept] {Probability} [clockwise from=180]
child {node[concept] {Stochastic Processes}}}
child[grow=175,concept color=black!50!yellow] {node[concept] {Statistics}}
child[grow=205,concept color=black!50!yellow] {node[concept] {Numerical Analysis}}
child[grow=235,concept color=black!50!yellow] {node[concept] {Symbolic Computation}};
\end{tikzpicture}

## An Automatic Geometric Proof

We are familiar with the result that states that, on any given triangle, the circumcenter, centroid and orthocenter are always collinear. I would like to illustrate how to use Gröbner bases theory to prove that the incenter also belongs to that line, provided the triangle is isosceles.

We start, as usual, indicating that this property is independent of shifts, rotations or dilations, and therefore we may assume that the isosceles triangle has one vertex at $A=(0,0)$, another vertex at $B=(1,0)$ and the third vertex at $C=(1/2, s)$ for some value $s \neq 0.$ In that case, we will need to work on the polynomial ring $R=\mathbb{R}[s,x_1,x_2,x_3,y_1,y_2,y_3,z],$ since we need the parameter $s$ free, the variables $x_1$ and $y_1$ are used to input the conditions for the circumcenter of the triangle, the variables $x_2$ and $y_2$ for the centroid, and the variables $x_3$ and $y_3$ for the incenter (note that we do not need to use the orthocenter in this case).
We may obtain all six conditions by using sympy, as follows:

>>> import sympy
>>> from sympy import *
>>> A=Point(0,0)
>>> B=Point(1,0)
>>> s=symbols("s",real=True,positive=True)
>>> C=Point(1/2.,s)
>>> T=Triangle(A,B,C)
>>> T.circumcenter
Point(1/2, (4*s**2 - 1)/(8*s))
>>> T.centroid
Point(1/2, s/3)
>>> T.incenter
Point(1/2, s/(sqrt(4*s**2 + 1) + 1))

This translates into the following polynomials:

$h_1=2x_1-1, h_2=8sy_1-4s^2+1$ (for the circumcenter)
$h_3=2x_2-1, h_4=3y_2-s$ (for the centroid)
$h_5=2x_3-1, h_6=(4sy_3+1)^2-4s^2-1$ (for the incenter)

The hypothesis polynomial comes simply from asking whether the slope of the line through two of those centers is the same as the slope of the line through another choice of two centers; we could use then, for example, $g=(x_2-x_1)(y_3-y_1)-(x_3-x_1)(y_2-y_1).$

It only remains to compute the Gröbner basis of the ideal $I=(h_1, \dotsc, h_6, 1-zg) \subset \mathbb{R}[s,x_1,x_2,x_3,y_1,y_2,y_3,z].$ Let us use SageMath for this task:

sage: R.<s,x1,x2,x3,y1,y2,y3,z>=PolynomialRing(QQ,8,order='lex')
sage: h=[2*x1-1,8*s*y1-4*s**2+1,2*x2-1,3*y2-s,2*x3-1,(4*s*y3+1)**2-4*s**2-1]
sage: g=(x2-x1)*(y3-y1)-(x3-x1)*(y2-y1)
sage: I=R.ideal(1-z*g,*h)
sage: I.groebner_basis()
[1]

This proves the result.

## Sympy should suffice

I have just received a copy of Instant SymPy Starter, by Ronan Lamy—a no-nonsense guide to the main properties of SymPy, the Python library for symbolic mathematics. This short monograph packs everything you should need, with neat examples included, in about 50 pages. Well worth its money.

To celebrate, I would like to pose a few coding challenges on the use of this library, based on a fun geometric puzzle from cut-the-knot: Rhombus in Circles

Segments $\overline{AB}$ and $\overline{CD}$ are equal. Lines $AB$ and $CD$ intersect at $M.$ Form four circumcircles: $(E)=(ACM), (F)=(ADM), (G)=(BDM), (H)=(BCM).$ Prove that the circumcenters $E, F, G, H$ form a rhombus, with $\angle EFG = \angle AMC.$

Note that if this construction works, it must do so independently of translations, rotations and dilations. We may then assume that $M$ is the origin, that the segments have length one, $A=(2,0), B=(1,0),$ and that for some parameters $a>0, \theta \in (0, \pi),$ it is $C=(a+1) (\cos \theta, \sin\theta), D=a (\cos\theta, \sin\theta).$ We let SymPy take care of the computation of circumcenters:

import sympy
from sympy import *

# Point definitions
M=Point(0,0)
A=Point(2,0)
B=Point(1,0)
a,theta=symbols('a,theta',real=True,positive=True)
C=Point((a+1)*cos(theta),(a+1)*sin(theta))
D=Point(a*cos(theta),a*sin(theta))

#Circumcenters
E=Triangle(A,C,M).circumcenter
F=Triangle(A,D,M).circumcenter
G=Triangle(B,D,M).circumcenter
H=Triangle(B,C,M).circumcenter

Finding that the alternate angles are equal in the quadrilateral $EFGH$ is pretty straightforward:

In [11]: P=Polygon(E,F,G,H)
In [12]: P.angles[E]==P.angles[G]
Out[12]: True
In [13]: P.angles[F]==P.angles[H]
Out[13]: True

To prove it a rhombus, the two sides that coincide on each angle must be equal.
This presents us with the first challenge: note for example that if we naively ask SymPy whether the triangle $\triangle EFG$ is equilateral, we get a False statement:

In [14]: Triangle(E,F,G).is_equilateral()
Out[14]: False
In [15]: F.distance(E)
Out[15]: Abs((a/2 - cos(theta))/sin(theta) - (a - 2*cos(theta) + 1)/(2*sin(theta)))
In [16]: F.distance(G)
Out[16]: sqrt(((a/2 - cos(theta))/sin(theta) - (a - cos(theta))/(2*sin(theta)))**2 + 1/4)

Part of the reason is that we have not indicated anywhere that the parameter theta is to be strictly bounded above by $\pi$ (we did indicate that it must be strictly positive). The other reason is that SymPy does not handle identities well, unless the expressions to be evaluated are perfectly simplified. For example, if we trust the routines of simplification of trigonometric expressions alone, we will not be able to resolve this problem with this technique:

In [17]: trigsimp(F.distance(E)-F.distance(G),deep=True)==0
Out[17]: False

Finding that $\angle EFG = \angle AMC$ with SymPy is not that easy either. This is the second challenge. How would the reader resolve this situation?
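As a sanity check rather than a proof, one cheap workaround is to substitute admissible numeric values $a>0$, $0<\theta<\pi$ into the symbolic distances and compare. This reuses E, F, G, a and theta from the session above; the particular values are arbitrary:

from sympy import Rational, pi

vals = {a: Rational(3, 4), theta: 2*pi/5}
d1 = F.distance(E).subs(vals).evalf()
d2 = F.distance(G).subs(vals).evalf()
print(d1, d2, abs(d1 - d2) < 1e-12)   # the two sides agree numerically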
## So you want to be an Applied Mathematician

My soon-to-be-converted algebraist friend challenged me—not without a hint of smugness in his voice—to illustrate what was my last project at that time. This was one revolving around the idea of frames (think of it as redundant bases if you please), and needed proving a couple of inequalities involving sequences of functions in $L_p$-spaces, which we attacked using a beautiful technique: Bellman functions. About ninety minutes later he conceded defeat in front of the board where the math was displayed. He promptly admitted that this was no Fortran code, and showed a newfound respect and reverence for the trade.
https://blender.stackexchange.com/questions/46038/how-to-rotate-roll-the-3d-view-like-im-tilting-my-head
# How to rotate (roll) the 3D view like I'm tilting my head? [duplicate]

How would I rotate the viewport along the axis from my forehead to the monitor, as if I were tilting my head (but more like tilting the viewport)?

It looks like you'd like to avoid the viewport rotation lock around the Z axis and to have the possibility to rotate freely with MMB.

**Option 1 - switch to the Trackball rotating method**

By default, Blender uses the Turntable rotation method for manipulating the view. It means the rotation in the 3D Viewport will be locked to the Z axis. See the difference between the Trackball and Turntable orbit styles.

To change that, in File > User Preferences (or press Ctrl+Alt+U) choose the Input page and find the Orbit Style part. After toggling it to Trackball, the 3D Viewport will rotate freely.

**Option 2 - use View Roll**

Without changing anything in preferences it's possible to roll the view to change its angle. Hold Ctrl+Shift and rotate the mouse wheel (or use the key shortcuts Shift+Pad4 / Shift+Pad6) to rotate left / right.

• I managed to type "rotate with RMB" which is not what I wanted to type. It should be MMB. Jan 31 '16 at 22:53
• @ideasman42 thanks, I see I included one more typo there. I meant to write Trackball when I wrote Turntable. Thanks for the shortcuts also; it seems I couldn't find them in the UI to include an appropriate entry. Jan 31 '16 at 23:51
• Neither of these work. Oct 13 at 23:12
http://math.stackexchange.com/questions/41998/orthogonal-in-the-b-norm
# Orthogonal in the B Norm?

If you have two generalized eigenvectors $\varphi_1, \varphi_2$ (with different eigenvalues) of a matrix $A$, then they will be orthogonal in the B norm. In this context, I do not understand what is meant by the "B norm", where $B$ is a matrix of the same dimensions as $A$. What does it mean to be orthogonal in another matrix's norm?

- Could you provide a reference for this? It is probably easier to figure it out in context. – Calle May 29 '11 at 18:27

You have $A \varphi_i = \lambda_i B \varphi_i$. I'm assuming $A$ and $B$ are symmetric, with $B$ positive definite. Then $\lambda_1 \varphi_1^T B \varphi_2 = \varphi_1^T A \varphi_2 = \lambda_2 \varphi_1^T B \varphi_2$ with $\lambda_1 \ne \lambda_2$, so $\varphi_1^T B \varphi_2 = 0$. This says that $\varphi_1$ and $\varphi_2$ are orthogonal in the inner product $(u,v) = u^T B v$ corresponding to the matrix $B$, which might be abbreviated as "in the $B$ norm".

- When you say that "$\varphi_1$ and $\varphi_2$ are orthogonal in the inner product $(u,v)=u^TBv$", are $u,v$ just $\varphi_1, \varphi_2$? – sam May 29 '11 at 18:53
- In the definition of the inner product, $u$ and $v$ are any vectors in the vector space. You are using this definition with $u = \varphi_1$, $v = \varphi_2$. – Robert Israel May 30 '11 at 2:23
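A small numeric illustration of the answer's point (my own sketch; any symmetric $A$ and positive definite $B$ will do): SciPy's generalized symmetric eigensolver returns eigenvectors that are orthonormal in exactly this $B$-inner product.

import numpy as np
from scipy.linalg import eigh

rng = np.random.default_rng(0)
X = rng.standard_normal((4, 4))
A = X + X.T                        # symmetric A
Y = rng.standard_normal((4, 4))
B = Y @ Y.T + 4 * np.eye(4)        # symmetric positive definite B

w, V = eigh(A, B)                  # solves A v = lambda B v
print(np.allclose(V.T @ B @ V, np.eye(4)))   # True: columns are B-orthonormal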
https://socratic.org/questions/58edeef3b72cff633fe4c100
# Question #4c100

Apr 12, 2017

$0.496 \Omega$

#### Explanation:

Let $R$ be the resistance of the motor. A motor has coils turning in a magnetic field. We know that whenever a coil turns in a magnetic field, an emf is induced in the coil. This emf, known as the back emf, acts against the applied voltage that initially caused the motor to spin. Hence,

net voltage applied to the motor $= \text{Total emf of batteries} - \text{back emf} = 6.0 - 4.50 = 1.50\ V$

Total resistance in the circuit $= \text{Total internal resistance of batteries} + \text{Resistance of motor} = 4 \times 0.001 + R = \left(0.004 + R\right)\ \Omega$

Applying Ohm's law $I = \frac{V}{R}$ to the motor circuit we get

$3.0 = \frac{1.50}{0.004 + R}$
$\implies 0.004 + R = \frac{1.50}{3.0} = 0.5$
$\implies R = 0.5 - 0.004 = 0.496\ \Omega$
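The same arithmetic as a quick check in Python (only the totals from the problem statement are used):

net_voltage = 6.0 - 4.50           # total emf minus back emf, in V
R = net_voltage / 3.0 - 4 * 0.001  # Ohm's law, minus total internal resistance
print(R)                           # 0.496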
https://mingusspeaks.com/vb4tk1/5c2eec-ess-julia-emacs
There is another environment associated with ’foo’ - and the SAS Display Manager. at any given time, which basically consists of all objects (functions for evaluating regions of your source buffers. necessary) with point at the line S reported as containing the R as usual; just remember to save the file before you quit Emacs invert toggles ess-execute-in-process-buffer. interactive interfaces for Windows versions of Splus. filename before point. to the name of the file, but this is not a problem. contributions to the list may be mailed to that you are running. accent. to show this table): To configure how electric watch window splits the display see For this to work, the cursor must be preceded by a space code in any fashion you please without R re-indenting the code every features to make life easier. then parse R files and generate appropriate Rd files from these For R files, naming transcript files ‘*.Rout’ puts them in a C-c C-f TAB for the available bindings. To look up for a topic in julia standard library buffers with the usual C-x C-s commands. Expression in The GNU Emacs Reference Manual), and then moves to record of function definitions. code. ddeESS[S] buffer. iESS[SAS] works best programmers and, with the help of, ability to save and submit the file you are working on as a batch, ability to send the contents of an entire buffer, a highlighted region, documentation. that position, as well as toggling ess-execute-in-process-buffer. exist. of it, otherwise go to the beginning of paragraph. By default, the update parameter is set to 10000. If you need to install ESS, read Installation for details on what or it is finished) and check for error messages. style allows you to use your own private values of the indentation This assumes that ESS sessions in a single ESS process buffer. associated with files, although you may choose to make these files M-n p Generate and display a postscript file after LaTeX’ing. face customization options into their own group. completion also provides function arguments. Imenu is an Emacs tool for providing mode-specific buffer indexes. We would like to show you a description here but the site won’t allow us. C-c C-f is only a prefix; see point. The statistical processes (programs). ess-display-vignettes Display all available vignettes. for R and S which is not yet implemented for julia-mode as yet. u Update packages in a particular library lib and It currently sup- ports R (and the rest of the S family), SAS, BUGS/JAGS, Stata, and Julia with the level For your macros developed by John Sall for editing SAS programs and SAS-mode by matching the expression. move the cursor to same name in different ESS processes. code. customizing the options ess-help-own-frame, expression are indented relative to the first line of the expression. you will use a lot. simple R function can look like this: The entry is immediately preceding the object to document and all lines you for selection if there are several running). This includes, User options for controlling display of buffers. Finally, expressions and parentheses: See Lists and See the variable ess-style-alist for how to add something like. Users are encouraged to to produce ESS snapshots, so if you are using Emacs < 25.1 from MELPA and the value of ess-use-ido it t (the default). be able to edit functions. The recommended way to access a statistical themselves form words.) Changes to the continutation prompt in R (e.g. ). 
One suggestion I have is to use the actual Julia REPL, which has a lot of niceties for interactive use instead of run-julia from julia-mode, which doesn't even give you TAB completion!What I do is redefine run-julia to just use term in char mode instead: corresponding inferior-S-font-lock-keywords for *S* processes.) Open it with C-c C-t w (ess-watch). I am currently using these 2 Emacs packages: julia-mode. So, ESS[SAS] provides users with (indent-according-to-mode). in Emacs buffers. Options for ’ess-gen-proc-buffer-name-function’ have been renamed. not in an emacs in the file COPYING in the same directory as this file for more (the default of inferior-SAS-args). The .jl association is, as you noted, only added in ess-julia.el. Using julia mode via ESS is really not just about syntax highlighting of.jl files. to discard. F10 toggles ESS[SAS] mode for .log files which is off by default connection. R’s idea of the object’s definition) pass a prefix argument to Very useful for beautifying your R code. The edit buffer generated To see and their defaults. default 'symbol does not try to complete if the next char is a valid -mode-hook and -mode-map. environment in which they are currently running. this in your Emacs configuration file: Next: Imenu, Previous: Parens, Up: Extras   [Contents][Index]. First, start Kermit ), Previous: Org, Up: Extras   [Contents][Index]. to end in ‘.Rout’). provided significant enhancements to allow for powerful process see Keyboard Macros in The GNU also want to use the Emacs tutorial, accessible via C-h t. In this manual we use the standard notation used by Emacs for describing the commonly-used R commands are also provided for ease of typing. see Hot keys. Auto-completions work in the julia console buffer but not in .jl julia scripts. C-c M-f Like ess-eval-function but additionally switches filling, navigation, template generation etc. before asking on the mailing list about issues that are not specific to In every case, I have a buffer (*julia* or *Singular*,...) synchronise with some running process. process is attached, ESS now switches automatically to one (prompting the contents of the remote file into your local copy. After entering more characters current command line, but don’t execute it. ESS[julia]: help and completion work (better) ESS[julia]: available via ess-remote; Changes and New Features in 16.04: ESS[R]: developer functionality has been refactored. See History expansion. buffer to make your choice. after use, or kept as a backup file or as a means of keeping several exactly (well, almost — see below) to R’s record of the object’s character. C-c C-t C-s Set or unset the current evaluation environment (a package). You can set It seems that emacs's ess-mode for julia isn't quite as happy lately - especially with some changes in 0.4 (related to REPL changes, possibly?). Regular modes act like normal Emacs major ess-watch-width-threshold and ess-watch-height-threshold The first part of the command output may have scrolled off the you. for a regular expression (see Syntax of Regular history. present if your Emacs can display images. Whenever a command produces output, it is possible that the I have been an ESS user for a long time now (interacting with R), and IMO it would be easier to make ESS fully support Julia than to duplicate its functionality in another, new Emacs package, but I don't know if ESS developers think this is worth supporting. it defaults to 'apple-script. 
Maybe your package list is out of sync, because both julia-mode and ess are currently available in MELPA. In the *Packages* buffer you can select packages by pressing i and install all selected packages by pressing x, or simply use the mouse for interaction.

The .jl association is, as you noted, only added in ess-julia.el, and using julia-mode via ESS is really not just about syntax highlighting of .jl files. ESS ("Emacs Speaks Statistics") is a mode for GNU Emacs and XEmacs for interactive statistical programming and data analysis; besides R/S-PLUS it covers SAS, Stata, OpenBUGS/JAGS and Julia, providing an inferior process buffer for a running session, commands that send a line, region, function or whole buffer to that process, help lookup and completion, transcript files, and command history. In every case, I have a buffer (*julia* or *Singular*, ...) synchronized with some running process. The ESS changelogs note "ESS[julia]: help and completion work (better)" and "ESS[julia]: available via ess-remote" among the changes and new features in 16.04.

I installed the new spacemacs version (1.03) and added the ess layer. I've been using company-mode via ESS, which has some integration with julia-mode.

Well, to be honest I doubt anybody would call my use of Julia "development", but I do use it from Emacs. The best implementation of Julia that I have achieved is ESS+Jupyter; I am quite happy with it and I prefer it to the previously cited ESS-Julia solution.

I have been an ESS user for a long time now (interacting with R), and IMO it would be easier to make ESS fully support Julia than to duplicate its functionality in another, new Emacs package, but I don't know if the ESS developers think this is worth supporting.
2021-06-24 00:43:02
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.3729875683784485, "perplexity": 5597.762523091782}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-25/segments/1623488544264.91/warc/CC-MAIN-20210623225535-20210624015535-00324.warc.gz"}
https://eprint.iacr.org/2011/309
## Cryptology ePrint Archive: Report 2011/309

On Constructing Homomorphic Encryption Schemes from Coding Theory

Abstract: Homomorphic encryption schemes are powerful cryptographic primitives that allow for a variety of applications. Consequently, a variety of proposals have been made in the recent decades, but none of them was based on coding theory. The existence of such schemes would be interesting for several reasons. First, it is well known that having multiple schemes based on different hardness assumptions is advantageous: in case one hardness assumption turns out to be wrong, one can switch over to one of the alternatives. Second, for some codes decoding (which would represent decryption in this case) is a linear mapping only (if the error is known), i.e., a comparatively simple operation. This would make such schemes interesting candidates for the construction of fully homomorphic schemes based on bootstrapping (see Gentry, STOC '09).

We show that such schemes are indeed possible by presenting a natural construction principle. Moreover, these possess several non-standard positive features. First, they are not restricted to linear homomorphisms but allow for evaluating multivariate polynomials up to a fixed (but arbitrary) degree $\mult$ on encrypted field elements. Second, they can be instantiated with various error-correcting codes, even codes with poor correcting capabilities. Third, depending on the deployed code, one can achieve very efficient schemes. As a concrete example, we present an instantiation based on Reed-Muller codes where, for $\mult=2$ and $\mult=3$ and security levels between 80 and 128 bits, all operations take less than a second (after some pre-computation). However, our analysis also reveals limitations of this approach. For structural reasons, such schemes cannot be public-key, allow for a limited number of fresh encryptions only, and cannot be combined with the bootstrapping technique. We argue why such schemes are nonetheless useful in certain application scenarios and discuss possible directions on how to overcome these issues.

Category / Keywords: foundations / Homomorphic Encryption, Coding Theory, Efficiency, Provable Security
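To illustrate the kind of linearity the abstract leans on (for a linear code, encoding is linear, so adding two noisy codewords yields a noisy encoding of the sum of the messages), here is a toy sketch with a binary repetition code. It is only an illustration of additive homomorphism from linear codes, not the paper's construction, and all names and parameters in it are made up:

```python
import numpy as np

n = 15  # repetition code length: each bit is repeated n times

def encrypt(bit, weight=3):
    """Toy 'ciphertext': repetition-encode a bit and add a sparse error."""
    codeword = np.full(n, bit, dtype=int)
    error = np.zeros(n, dtype=int)
    error[np.random.choice(n, weight, replace=False)] = 1
    return (codeword + error) % 2

def decrypt(vec):
    """Majority-vote decoding; correct while total error weight < n/2."""
    return int(vec.sum() > n // 2)

a, b = 1, 0
ca, cb = encrypt(a), encrypt(b)
csum = (ca + cb) % 2             # XOR of the two ciphertexts
assert decrypt(csum) == (a ^ b)  # decrypts to the XOR of the plaintexts
```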
2018-11-16 05:03:38
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7171953916549683, "perplexity": 715.169426650049}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-47/segments/1542039742978.60/warc/CC-MAIN-20181116045735-20181116071735-00487.warc.gz"}
https://quizplus.com/quiz/154530-quiz-12-decision-analysis
# Introduction to Management Science Study Set 3

## Quiz 12: Decision Analysis

A farmer in Iowa is considering either leasing some extra land or investing in savings certificates at the local bank. If weather conditions are good next year, the extra land will give the farmer an excellent harvest. However, if weather conditions are bad, the farmer will lose money. The savings certificates will result in the same return, regardless of the weather conditions. The return for each investment, given each type of weather condition, is shown in the following payoff table: Select the best decision, using the following decision criteria: a. Maximax b. Maximin

Answer: Maximax criterion: select the maximum payoff for each decision, then select the maximum of those maximum payoffs. Maximin criterion: select the minimum payoff for each decision, then select the maximum of those minimum payoffs.

a. Applying the maximax criterion to the payoff table (in Excel, take the MAX across each decision's row, then the MAX of those maximums), the best decision is to lease the land.

b. Applying the maximin criterion (take the MIN across each decision's row, then the MAX of those minimums), the best decision is to buy the savings certificates.
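The two criteria translate directly into code. A small self-contained sketch in Python, using made-up payoff numbers purely as a stand-in, since the quiz's table is not reproduced in this extract:

```python
# Decision criteria over a payoff table: rows are decisions,
# columns are states of nature (e.g., good / bad weather).
payoffs = {
    "lease land":           [90000, -40000],  # hypothetical numbers
    "savings certificates": [10000,  10000],
}

def maximax(table):
    """Pick the decision whose best-case payoff is largest."""
    return max(table, key=lambda d: max(table[d]))

def maximin(table):
    """Pick the decision whose worst-case payoff is largest."""
    return max(table, key=lambda d: min(table[d]))

print(maximax(payoffs))  # optimistic choice
print(maximin(payoffs))  # pessimistic choice
```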
The owner of the Columbia Construction Company must decide between building a housing development, constructing a shopping center, and leasing all the company's equipment to another company. The profit that will result from each alternative will be determined by whether material costs remain stable or increase. The profit from each alternative, given the two possibilities for material costs, is shown in the following payoff table: Determine the best decision, using the following decision criteria. a. Maximax b. Maximin c. Minimax regret d. Hurwicz (α = .2) e. Equal likelihood

Answer:

a. Maximax criterion: select the maximum payoff for each decision, then the maximum of those maximums. In Excel, enter the profit for each decision under stable and increased material costs, put =MAX(C5,D5) in E5 and copy it to E6 and E7 to find each decision's maximum payoff, then put =MAX(E5:E7) in C9. The result is $105,000, so the maximax decision is to build a shopping center.

b. Maximin criterion: select the minimum payoff for each decision, then the maximum of those minimums. Enter =MIN(C5,D5) in F5, copy it to F6 and F7, then put =MAX(F5:F7) in C10. The result is $40,000, so the maximin decision is to lease the company's equipment.

c. Minimax regret criterion: the goal is to select the decision with the least regret. For each state of nature, take the maximum payoff in that column and subtract each decision's payoff from it; this builds a regret table. Enter =MAX(C$5:C$7)-C5 in C15 and copy it to all cells of the regret table. Then put =MAX(C15,D15) in E15 (copied to E16 and E17) to find each decision's maximum regret, and =MIN(E15:E17) in D19 to find the smallest of these. The result is $20,000, so the minimax regret decision is to construct a shopping center.

d. Hurwicz criterion: for each decision, multiply the coefficient of optimism α by the maximum payoff and the coefficient of pessimism 1 - α by the minimum payoff, then add the products; choose the decision with the largest weighted payoff. Here α = 0.20, so 1 - α = 0.80. In Excel this gives =C5*C22+D5*C23 in C24 for building houses, =C6*C22+D6*C23 in C25, and =C7*C22+D7*C23 in C26. The largest weighted payoff is $40,000, so the Hurwicz criterion selects leasing the company's equipment.

e. Equal likelihood criterion: weight each state of nature equally, multiply each decision's payoffs by these weights, and add the products. With two states of nature (stable or increased material costs), each weight is 0.50. Enter =C5*C29+D5*C29 in C30, =C6*C29+D6*C29 in C31, and =C7*C29+D7*C29 in C32. The largest weighted payoff is $62,500, so the equal likelihood criterion selects constructing a shopping center.
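The remaining three criteria translate just as directly into code. A sketch extending the earlier functions, with stand-in numbers chosen to be consistent with the results quoted above (the original table is not reproduced here):

```python
payoffs = {
    "build houses":    [70000,  30000],   # hypothetical stand-in values
    "shopping center": [105000, 20000],
    "lease equipment": [40000,  40000],
}

def minimax_regret(table):
    """Pick the decision whose worst-case regret is smallest."""
    states = range(len(next(iter(table.values()))))
    best = [max(table[d][s] for d in table) for s in states]
    regret = {d: max(best[s] - table[d][s] for s in states) for d in table}
    return min(regret, key=regret.get)

def hurwicz(table, alpha):
    """Weight the best case by alpha and the worst case by 1 - alpha."""
    score = {d: alpha * max(v) + (1 - alpha) * min(v) for d, v in table.items()}
    return max(score, key=score.get)

def equal_likelihood(table):
    """Average the payoffs over all states of nature."""
    return max(table, key=lambda d: sum(table[d]) / len(table[d]))

print(minimax_regret(payoffs), hurwicz(payoffs, 0.2), equal_likelihood(payoffs))
```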
The Tech football coaching staff has six basic offensive plays it runs every game. Tech has an upcoming game against State on Saturday, and the Tech coaches know that State employs five different defenses. The coaches have estimated the number of yards Tech will gain with each play against each defense, as shown in the following payoff table: a. If the coaches employ an offensive game plan, they will use the maximax criterion. What will be their best play? b. If the coaches employ a defensive plan, they will use the maximin criterion. What will be their best play? c. What will be their best offensive play if State is equally likely to use any of its five defenses?

Answer:

a. In Excel, enter the yards gained for each play against each defense, put =MAX(C5:G5) in H5 and copy it down through H10 to find the maximum gain for each play, then put =MAX(H5:H10) in C12 to find the maximum of those maximums. The result is 20 yards, so the maximax decision is to run a Pass.

b. Put =MIN(C5:G5) in I5, copy it down through I10, then put =MAX(I5:I10) in C13 to find the maximum of the minimum gains. The result is a loss of 2 yards, so the maximin decision is to run either Off Tackle or Option.

c. Equal likelihood criterion: with five defenses, each state of nature gets weight 1/5 = 0.20. Multiply each play's yardage against each defense by 0.20 and sum (e.g., =C5*C16+D5*C16+E5*C16+F5*C16+G5*C16 in C17, and similarly for the other plays). The largest weighted gain is 6.8 yards, so the equal likelihood criterion selects the Toss Sweep.

Brooke Bentley, a student in business administration, is trying to decide which management science course to take next quarter: I, II, or III. "Steamboat" Fulton, "Death" Ray, and "Sadistic" Scott are the three management science professors who teach the courses. Brooke does not know who will teach what course. Brooke can expect a different grade in each of the courses, depending on who teaches it next quarter, as shown in the following payoff table: Determine the best course to take next quarter, using the following criteria. a. Maximax b. Maximin

A company must decide now which of three products to make next year to plan and order proper materials. The cost per unit of producing each product will be determined by whether a new union labor contract passes or fails. The cost per unit for each product, given each contract result, is shown in the following payoff table: Determine which product should be produced, using the following decision criteria. a. Minimin b. Minimax

A machine shop owner is attempting to decide whether to purchase a new drill press, a lathe, or a grinder. The return from each will be determined by whether the company succeeds in getting a government military contract. The profit or loss from each purchase and the probabilities associated with each contract outcome are shown in the following payoff table: Compute the expected value for each purchase and select the best one.
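Expected value is the probability-weighted average of each decision's payoffs. A sketch for the machine shop problem, with placeholder profits and probabilities since the quiz's table is not shown in this extract:

```python
# Expected value: sum of payoff * probability over the states of nature.
p_contract = {"awarded": 0.40, "not awarded": 0.60}   # hypothetical probabilities
returns = {                                            # hypothetical payoffs
    "drill press": {"awarded": 40000, "not awarded": -8000},
    "lathe":       {"awarded": 20000, "not awarded":  4000},
    "grinder":     {"awarded": 12000, "not awarded": 10000},
}

ev = {d: sum(p_contract[s] * v for s, v in outcomes.items())
      for d, outcomes in returns.items()}
best = max(ev, key=ev.get)
print(ev, "->", best)
```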
A television network is attempting to decide during the summer which of the following three football games to televise on the Saturday following Thanksgiving Day: Alabama versus Auburn, Georgia versus Georgia Tech, or Army versus Navy. The estimated viewer ratings (millions of homes) for the games depend on the win-loss records of the six teams, as shown in the following payoff table: Determine the best game to televise, using the following decision criteria. a. Maximax b. Maximin c. Equal likelihood

Place-Plus, a real estate development firm, is considering several alternative development projects. These include building and leasing an office park, purchasing a parcel of land and building an office building to rent, buying and leasing a warehouse, building a strip mall, and building and selling condominiums. The financial success of these projects depends on interest rate movement in the next 5 years. The various development projects and their 5-year financial return (in $1,000,000s), given that interest rates will decline, remain stable, or increase, are shown in the following payoff table: Determine the best investment, using the following decision criteria. a. Maximax b. Maximin c. Equal likelihood d. Hurwicz (α = .3)

Steeley Associates, Inc., a property development firm, purchased an old house near the town square in Concord Falls, where State University is located. The old house was built in the mid-1800s, and Steeley Associates restored it. For almost a decade, Steeley has leased it to the university for academic office space. The house is located on a wide lawn and has become a town landmark. However, in 2008, the lease with the university expired, and Steeley Associates decided to build high-density student apartments on the site, using all the open space. The community was outraged and objected to the town council. The legal counsel for the town spoke with a representative from Steeley and hinted that if Steeley requested a permit, the town would probably reject it. Steeley had reviewed the town building code and felt confident that its plan was within the guidelines, but that did not necessarily mean that it could win a lawsuit against the town to force the town to grant a permit. The principals at Steeley Associates held a series of meetings to review their alternatives. They decided that they had three options: they could request the permit, they could sell the property, or they could request a permit for a low-density office building, which the town had indicated it would not fight. Regarding the last two options, if Steeley sells the house and property, it thinks it can get $900,000. If it builds a new office building, its return will depend on town business growth in the future. It feels that there is a 70% chance of future growth, in which case Steeley will see a return of $1.3 million (over a 10-year planning horizon); if no growth (or erosion) occurs, it will make only $200,000. If Steeley requests a permit for the apartments, a host of good and bad outcomes are possible. The immediate good outcome is approval of its permit, which it estimates will result in a return of $3 million. However, Steeley gives that result only a 10% chance of occurring. Alternatively, Steeley thinks there is a 90% chance that the town will reject its application, which will result in another set of decisions. Steeley can sell the property at that point. However, the rejection of the permit will undoubtedly decrease the value to potential buyers, and Steeley estimates that it will get only $700,000. Alternatively, it can construct the office building and face the same potential outcomes it did earlier, namely, a 30% chance of no town growth and a $200,000 return or a 70% chance of growth with a return of $1.3 million. A third option is to sue the town. On the surface, Steeley's case looks good, but the town building code is vague, and a sympathetic judge could throw out its suit.
Whether or not it wins, Steeley estimates its possible legal fees to be $300,000, and it feels it has only a 40% chance of winning. However, if Steeley does win, it estimates that the award will be approximately $1 million, and it will also get its $3 million return for building the apartments. Steeley also estimates that there is a 10% chance that the suit could linger on in the courts for such a long time that any future return would be negated during its planning horizon, and it would incur an additional $200,000 in legal fees. If Steeley loses the suit, it will then be faced with the same options of selling the property or constructing an office building. However, if the suit is carried this far into the future, it feels that the selling price it can ask will be somewhat dependent on the town's growth prospects at that time, which it feels it can estimate at only 50-50. If the town is in a growth mode that far in the future, Steeley thinks that $900,000 is a conservative estimate of the potential sale price, whereas if the town is not growing, it thinks $500,000 is a more likely estimate. Finally, if Steeley constructs the office building, it feels that the chance of town growth is 50%, in which case the return will be only $1.2 million. If no growth occurs, it conservatively estimates only a $100,000 return.

A. Perform a decision tree analysis of Steeley Associates's decision situation, using expected value, and indicate the appropriate decision with these criteria.

B. Indicate the decision you would make and explain your reasons.
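Decision tree analysis folds the tree back from the leaves: chance nodes take probability-weighted averages, and decision nodes take the best branch. As a sanity check, one sub-decision that is fully specified above, "sell now for $900,000 versus build the office building", can be computed directly:

```python
# Fold back one fully specified sub-decision from the Steeley case:
# sell the property now, or build the office building and face a
# 70% chance of growth ($1.3M return) vs. 30% chance of none ($0.2M).
sell_now = 0.9                         # $ millions
build_office = 0.7 * 1.3 + 0.3 * 0.2   # expected value = 0.97

best = max(("sell", sell_now), ("build office", build_office),
           key=lambda t: t[1])
print(best)  # ('build office', 0.97): building edges out selling here
```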
Stevie Stone, a bellhop at the Royal Sundown Hotel in Atlanta, has been offered a management position. Although accepting the offer would assure him a job if there was a recession, if good economic conditions prevailed, he would actually make less money as a manager than as a bellhop (because of the large tips he gets as a bellhop). His salary during the next 5 years for each job, given each future economic condition, is shown in the following payoff table: Select the best decision, using the following decision criteria. a. Minimax regret b. Hurwicz (α = .4) c. Equal likelihood

Microcomp is a U.S.-based manufacturer of personal computers. It is planning to build a new manufacturing and distribution facility in either South Korea, China, Taiwan, the Philippines, or Mexico. It will take approximately 5 years to build the necessary infrastructure (e.g., roads), construct the new facility, and put it into operation. The eventual cost of the facility will differ between countries and will even vary within countries depending on the financial, labor, and political climate, including monetary exchange rates. The company has estimated the facility cost (in $1,000,000s) in each country under three different future economic and political climates, as follows: Determine the best decision, using the following decision criteria. a. Minimin b. Minimax c. Hurwicz (α = .4) d. Equal likelihood

WestCom Systems Products Company develops computer systems and software products for commercial sale. Each year it considers and evaluates a number of different R&D projects to undertake. It develops a road map for each project, in the form of a standardized decision tree that identifies the different decision points in the R&D process, from the initial decision to invest in a project's development through the actual commercialization of the final product. The first decision point in the R&D process is whether to fund a proposed project for 1 year. If the decision is no, then there is no resulting cost; if the decision is yes, then the project proceeds at an incremental cost to the company. The company establishes specific short-term, early technical milestones for its projects after 1 year. If the early milestones are achieved, the project proceeds to the next phase of project development; if the milestones are not achieved, the project is abandoned. In its planning process, the company develops probability estimates of achieving and not achieving the early milestones. If the early milestones are achieved, the project is funded for further development during an extended time frame specific to a project. At the end of this time frame, a project is evaluated according to a second set of (later) technical milestones. Again, the company attaches probability estimates for achieving and not achieving these later milestones. If the later milestones are not achieved, the project is abandoned. If the later milestones are achieved, technical uncertainties and problems have been overcome, and the company next assesses the project's ability to meet its strategic business objectives. At this stage, the company wants to know if the eventual product coincides with the company's competencies and whether there appears to be an eventual, clear market for the product. It invests in a product "prelaunch" to ascertain the answers to these questions. The outcomes of the prelaunch are that either there is a strategic fit or there is not, and the company assigns probability estimates to each of these two possible outcomes. If there is not a strategic fit at this point, the project is abandoned and the company loses its investment in the prelaunch process. If it is determined that there is a strategic fit, then three possible decisions result: (1) the company can invest in the product's launch, and a successful or unsuccessful outcome will result, each with an estimated probability of occurrence; (2) the company can delay the product's launch and at a later date decide whether to launch or abandon; and (3) if it launches later, the outcomes are success or failure, each with an estimated probability of occurrence. Also, if the product launch is delayed, there is always a likelihood that the technology will become obsolete or dated in the near future, which tends to reduce the expected return. The following table provides the various costs, event probabilities, and investment outcomes for five projects the company is considering: Determine the expected value for each project and then rank the projects accordingly for the company to consider.

Ann Tyler has come into an inheritance from her grandparents. She is attempting to decide among several investment alternatives. The return after 1 year is primarily dependent on the interest rate during the next year. The rate is currently 7%, and Ann anticipates that it will stay the same or go up or down by at most two points. The various investment alternatives plus their returns ($10,000s), given the interest rate changes, are shown in the following table: Determine the best investment, using the following decision criteria. a. Maximax b. Maximin c. Equal likelihood

A farmer in Georgia must decide which crop to plant next year on his land: corn, peanuts, or soybeans.
The return from each crop will be determined by whether a new trade bill with Russia passes the Senate. The profit the farmer will realize from each crop, given the two possible results on the trade bill, is shown in the following payoff table: Determine the best crop to plant, using the following decision criteria. a. Maximax b. Maximin c. Minimax regret d. Hurwicz (α = .3)

The Oakland Bombers professional basketball team just missed making the playoffs last season and believes it needs to sign only one very good free agent to make the playoffs next season. The team is considering four players: Barry Byrd, Rayneal O'Neil, Marvin Johnson, and Michael Gordan. Each player differs according to position, ability, and attractiveness to fans. The payoffs (in $1,000,000s) to the team for each player, based on the contract, profits from attendance, and team product sales for several different season outcomes, are provided in the following table: Determine the best decision, using the following decision criteria. a. Maximax b. Maximin c. Hurwicz (α = .60) d. Equal likelihood

A local real estate investor in Orlando is considering three alternative investments: a motel, a restaurant, or a theater. Profits from the motel or restaurant will be affected by the availability of gasoline and the number of tourists; profits from the theater will be relatively stable under any conditions. The following payoff table shows the profit or loss that could result from each investment: Determine the best investment, using the following decision criteria. a. Maximax b. Maximin c. Minimax regret d. Hurwicz (α = .4) e. Equal likelihood

The Carolina Cougars is a major league baseball expansion team beginning its third year of operation. The team had losing records in each of its first 2 years and finished near the bottom of its division. However, the team was young and generally competitive. The team's general manager, Frank Lane, and manager, Biff Diamond, believe that with a few additional good players, the Cougars can become a contender for the division title and perhaps even for the pennant. They have prepared several proposals for free-agent acquisitions to present to the team's owner, Bruce Wayne. Under one proposal the team would sign several good available free agents, including two pitchers, a good fielding shortstop, and two power-hitting outfielders, for $52 million in bonuses and annual salary. The second proposal is less ambitious, costing $20 million to sign a relief pitcher, a solid, good-hitting infielder, and one power-hitting outfielder. The final proposal would be to stand pat with the current team and continue to develop. General Manager Lane wants to lay out a possible season scenario for the owner so he can assess the long-run ramifications of each decision strategy. Because the only thing the owner understands is money, Frank wants this analysis to be quantitative, indicating the money to be made or lost from each strategy. To help develop this analysis, Frank has hired his kids, Penny and Nathan, both management science graduates from Tech. Penny and Nathan analyzed league data for the previous five seasons for attendance trends, logo sales (i.e., clothing, souvenirs, hats, etc.), player sales and trades, and revenues. In addition, they interviewed several other owners, general managers, and league officials. They also analyzed the free agents that the team was considering signing.
Based on their analysis, Penny and Nathan feel that if the Cougars do not invest in any free agents, the team will have a 25% chance of contending for the division title and a 75% chance of being out of contention most of the season. If the team is a contender, there is a .70 probability that attendance will increase as the season progresses and the team will have high attendance levels (between 1.5 million and 2.0 million) with profits of $170 million from ticket sales, concessions, advertising sales, TV and radio sales, and logo sales. They estimate a .25 probability that the team's attendance will be mediocre (between 1.0 million and 1.5 million) with profits of $115 million and a .05 probability that the team will suffer low attendance (less than 1.0 million) with profit of $90 million. If the team is not a contender, Penny and Nathan estimate that there is a .05 probability of high attendance with profits of $95 million, a .20 probability of medium attendance with profits of $55 million, and a .75 probability of low attendance with profits of $30 million. If the team marginally invests in free agents at a cost of $20 million, there is a 50-50 chance it will be a contender. If it is a contender, then later in the season it can either stand pat with its existing roster or buy or trade for players that could improve the team's chances of winning the division. If the team stands pat, there is a .75 probability that attendance will be high and profits will be $195 million. There is a .20 probability that attendance will be mediocre with profits of $160 million and a .05 probability of low attendance and profits of $120 million. Alternatively, if the team decides to buy or trade for players, it will cost $8 million, and the probability of high attendance with profits of $200 million will be .80. The probability of mediocre attendance with $170 million in profits will be .15, and there will be a .05 probability of low attendance, with profits of $125 million. If the team is not in contention, then it will either stand pat or sell some of its players, earning approximately $8 million in profit. If the team stands pat, there is a .12 probability of high attendance, with profits of $110 million; a .28 probability of mediocre attendance, with profits of $65 million; and a .60 probability of low attendance, with profits of $40 million. If the team sells players, the fans will likely lose interest at an even faster rate, and the probability of high attendance with profits of $100 million will drop to .08, the probability of mediocre attendance with profits of $60 million will be .22, and the probability of low attendance with profits of $35 million will be .70. The most ambitious free-agent strategy will increase the team's chances of being a contender to 65%. This strategy will also excite the fans most during the off-season and boost ticket sales and advertising and logo sales early in the year. If the team does contend for the division title, then later in the season it will have to decide whether to invest in more players. If the Cougars stand pat, the probability of high attendance with profits of $210 million will be .80, the probability of mediocre attendance with profits of $170 million will be .15, and the probability of low attendance with profits of $125 million will be .05.
If the team buys players at a cost of $10 million, then the probability of having high attendance with profits of $220 million will increase to .83, the probability of mediocre attendance with profits of $175 million will be .12, and the probability of low attendance with profits of $130 million will be .05. If the team is not in contention, it will either sell some players' contracts later in the season for profits of around $12 million or stand pat. If it stays with its roster, the probability of high attendance with profits of $110 million will be .15, the probability of mediocre attendance with profits of $70 million will be .30, and the probability of low attendance with profits of $50 million will be .55. If the team sells players late in the season, there will be a .10 probability of high attendance with profits of $105 million, a .30 probability of mediocre attendance with profits of $65 million, and a .60 probability of low attendance with profits of $45 million.

Assist Penny and Nathan in determining the best strategy to follow and its expected value.
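The same fold-back logic applies here, just with more layers of chance and decision nodes. A compact sketch that rolls back the fully specified "no free agents" branch from the numbers above (the other two strategies extend the same pattern):

```python
def chance(branches):
    """Expected value of a chance node: sum of probability * value."""
    return sum(p * v for p, v in branches)

# 'No free agents' strategy, straight from the numbers in the case:
contender     = chance([(0.70, 170), (0.25, 115), (0.05, 90)])  # 152.25
not_contender = chance([(0.05,  95), (0.20,  55), (0.75, 30)])  #  38.25
no_free_agents = chance([(0.25, contender), (0.75, not_contender)])
print(no_free_agents)  # 66.75 ($ millions)
```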
For the case in which there is a low likelihood that an incident will occur, there is a .001 probability that a fire will occur during the life of the existing transformer and a .999 probability that a fire will not occur. If a fire does occur, the same probabilities exist for the incidence of high and low cleanup costs, as well as the same cleanup costs, as indicated for the previous case. Similarly, if no fire occurs, there is no cleanup cost. Perform a decision tree analysis of this problem for Mountain States Electric Service and indicate the recommended solution. Is this the decision you believe the company should make? Explain your reasons.
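A decision tree like this is solved by folding back: compute the expected cost at each chance node and take the cheaper branch at each decision node; the same fold-back technique handles the longer Cougars problem above. Below is a minimal Python sketch of the computation for the transformer problem, using only the probabilities and costs stated in it; the variable names are my own.

    # Expected-cost comparison for the Mountain States transformer decision.

    def expected_cleanup_cost(p_fire):
        # Given a fire, cleanup is bad (.20, $90M) or minor (.80, $8M);
        # no fire means no cleanup cost.
        cleanup_given_fire = 0.20 * 90e6 + 0.80 * 8e6
        return p_fire * cleanup_given_fire

    # Keep the PCB transformer: 50-50 chance of high vs. low incident likelihood.
    keep = 0.5 * expected_cleanup_cost(0.004) + 0.5 * expected_cleanup_cost(0.001)

    # Replace the transformer: fixed cost, no incident risk.
    replace = 85_000

    print(f"expected cost of keeping: ${keep:,.0f}")      # $61,000
    print(f"cost of replacing:        ${replace:,.0f}")   # $85,000

On these numbers the expected cost of keeping the transformer (about $61,000) is below the $85,000 replacement cost, but whether expected value alone should drive a decision with a possible $90 million outcome is exactly what the question asks the reader to weigh.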
2022-08-18 22:45:05
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.2632092237472534, "perplexity": 2120.852354300074}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-33/segments/1659882573533.87/warc/CC-MAIN-20220818215509-20220819005509-00331.warc.gz"}
https://www.aimsciences.org/article/doi/10.3934/cpaa.2020288?viewType=html
# American Institute of Mathematical Sciences

doi: 10.3934/cpaa.2020288

## Single species population dynamics in seasonal environment with short reproduction period

Bolyai Institute, University of Szeged, H-6720 Szeged, Hungary

* Corresponding author

Received: July 2020. Revised: September 2020. Published: December 2020.

Fund Project: A. Dénes was supported by the Hungarian National Research, Development and Innovation Office grant NKFIH PD_128363 and by the János Bolyai Research Scholarship of the Hungarian Academy of Sciences. G. Röst was supported by EFOP-3.6.1-16-2016-00008 and by the Hungarian National Research, Development and Innovation Office grants NKFIH KKP_129877 and TUDFO/47138-1/2019-ITM.

We present a periodic nonlinear scalar delay differential equation model for a population with short reproduction period. By transforming the equation to a discrete dynamical system, we reduce the infinite dimensional problem to one dimension. We determine the basic reproduction number not merely as the spectral radius of an operator, but as an explicit formula, and show that it serves as a threshold parameter for the stability of the trivial equilibrium and for permanence.

Citation: Attila Dénes, Gergely Röst. Single species population dynamics in seasonal environment with short reproduction period. Communications on Pure & Applied Analysis, doi: 10.3934/cpaa.2020288

Figure captions: the function $f(t,x)$ for $x\in\{5,10,100\}$ and $\hat\alpha = 1000$; solutions of (1.1) with a periodic Ricker-type birth function for different values of the parameter $\hat\alpha$; solutions of (1.1) with a periodic Beverton–Holt-type birth function for different values of the parameter $\hat\alpha$.
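The reduction to a one-dimensional discrete system can be illustrated with a toy example. The sketch below iterates a Ricker-type map of the kind the paper's figures use as a birth function; the specific map, parameter values, and threshold behavior shown here are illustrative only and are not taken from the paper.

    import numpy as np

    # Toy one-dimensional map x_{n+1} = R0 * x_n * exp(-x_n): a Ricker-type
    # recursion in which R0 plays the role of a basic reproduction number.
    # For R0 < 1 the origin attracts (extinction); for R0 > 1 the population persists.
    def iterate(R0, x0=0.1, n=200):
        x = x0
        for _ in range(n):
            x = R0 * x * np.exp(-x)
        return x

    for R0 in (0.5, 0.9, 1.5, 3.0):
        x_final = iterate(R0)
        print(f"R0 = {R0}: x_200 = {x_final:.4f}",
              "(extinction)" if x_final < 1e-3 else "(persistence)")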
2021-01-16 03:46:28
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6749086380004883, "perplexity": 7757.950173521418}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-04/segments/1610703499999.6/warc/CC-MAIN-20210116014637-20210116044637-00046.warc.gz"}
https://docs.wradlib.org/en/stable/generated/wradlib.adjust.AdjustMFB.html
class wradlib.adjust.AdjustMFB(obs_coords, raw_coords, nnear_raws=9, stat='median', mingages=5, minval=0.0, mfb_args=None, ipclass=<class 'wradlib.ipol.Idw'>, **ipargs)

Multiplicative gage adjustment using one correction factor for the entire domain. This method is also known as the Mean Field Bias correction.

Note: Inherits from wradlib.adjust.AdjustBase. For a complete overview of parameters for the initialisation of adjustment objects, as well as an extensive example, please see wradlib.adjust.AdjustBase.

Methods:

__call__(obs, raw[, targets, rawatobs, ix]): Returns an array of raw values that are adjusted by obs.

xvalidate(obs, raw): Leave-One-Out Cross Validation, applicable to all gage adjustment classes.
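A minimal usage sketch, assuming synthetic gauge and radar data; it relies only on the constructor and __call__ signatures shown above, with coordinate arrays of shape (n, 2) and value arrays of shape (n,), and all numbers here are made up for illustration.

    import numpy as np
    from wradlib.adjust import AdjustMFB

    # Synthetic example: 6 rain gauges ("obs") and 100 radar pixels ("raw").
    np.random.seed(0)
    obs_coords = np.random.uniform(0, 10, size=(6, 2))     # gauge locations (x, y)
    raw_coords = np.random.uniform(0, 10, size=(100, 2))   # radar pixel locations
    raw = np.random.gamma(shape=2.0, scale=1.0, size=100)  # radar rainfall estimates
    obs = 2.0 * np.random.gamma(shape=2.0, scale=1.0, size=6)  # gauge observations

    adjuster = AdjustMFB(obs_coords, raw_coords, mingages=3)
    adjusted = adjuster(obs, raw)  # raw field scaled by one mean-field-bias factor
    print(adjusted.shape)          # (100,)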
2021-03-05 06:55:44
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.3326402008533478, "perplexity": 4137.904581705203}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-10/segments/1614178370239.72/warc/CC-MAIN-20210305060756-20210305090756-00532.warc.gz"}
http://www.aimsciences.org/article/doi/10.3934/dcdss.2012.5.127
# American Institute of Mathematical Sciences

February 2012, 5(1): 127-146. doi: 10.3934/dcdss.2012.5.127

## Global solvability of a model for grain boundary motion with constraint

1 Department of Electronic Engineering and Computer Science, School of Engineering, Kinki University, Takayaumenobe, Higashihiroshimashi, Hiroshima, 739-2116
2 Department of Education, School of Education, Bukkyo University, 96 Kitahananobo-cho, Murasakino, Kita-ku, Kyoto, 603-8301, Japan
3 Department of Mathematics, Faculty of Engineering, Kanagawa University, 3-27-1 Rokkakubashi, Kanagawa-ku, 221-8686, Japan

Received: June 2009. Revised: December 2009. Published: February 2011.

We consider a model for grain boundary motion with constraint. In composite material science it is very important to investigate grain boundary formation and its dynamics. In this paper we study a phase-field model of grain boundaries, which is a modified version of the one proposed by R. Kobayashi, J. A. Warren and W. C. Carter [18]. The model is described as a system of a nonlinear parabolic partial differential equation and a nonlinear parabolic variational inequality. The main objective of this paper is to show the global existence of a solution for our model, employing some subdifferential techniques from convex analysis.

Citation: Akio Ito, Nobuyuki Kenmochi, Noriaki Yamazaki. Global solvability of a model for grain boundary motion with constraint. Discrete & Continuous Dynamical Systems - S, 2012, 5 (1) : 127-146. doi: 10.3934/dcdss.2012.5.127

##### References:

[1] F. Andreu, C. Ballester, V. Caselles and J. M. Mazón, The Dirichlet problem for the total variation flow, J. Funct. Anal., 180 (2001), 347. doi: 10.1006/jfan.2000.3698.
[2] F. Andreu, V. Caselles and J. M. Mazón, A strongly degenerate quasilinear equation: The parabolic case, Arch. Ration. Mech. Anal., 176 (2005), 415. doi: 10.1007/s00205-005-0358-5.
[3] H. Attouch, "Variational Convergence for Functions and Operators," Pitman Advanced Publishing Program, (1984).
[4] V. Barbu, "Nonlinear Semigroups and Differential Equations in Banach Spaces," Editura Academiei Republicii Socialiste Romania, (1976).
[5] G. Bellettini, V. Caselles and M. Novaga, The total variation flow in $\mathbb{R}^N$, J. Differential Equations, 184 (2002), 475. doi: 10.1006/jdeq.2001.4150.
[6] H. Brézis, "Opérateurs Maximaux Monotones et Semi-Groupes de Contractions dans les Espaces de Hilbert," North-Holland, (1973).
[7] J. W. Cahn, P. Fife and O. Penrose, A phase-field model for diffusion-induced grain-boundary motion, Acta Mater., 45 (1997), 4397. doi: 10.1016/S1359-6454(97)00074-8.
[8] L. Q. Chen, Phase-field models for microstructure evolution, Annu. Rev. Mater. Res., 32 (2002), 113. doi: 10.1146/annurev.matsci.32.112001.132041.
[9] K. Deckelnick and C. M. Elliott, An existence and uniqueness result for a phase-field model of diffusion-induced grain-boundary motion, Proc. Roy. Soc. Edinburgh Sect. A, 131 (2001), 1323. doi: 10.1017/S0308210500001414.
[10] M.-H. Giga, Y. Giga and R. Kobayashi, Very singular diffusion equations, Proc. Taniguchi Conf. on Math., 31 (2001), 93.
[11] M. E. Gurtin and M. T. Lusk, Sharp interface and phase-field theories of recrystallization in the plane, Phys. D, 130 (1999), 133. doi: 10.1016/S0167-2789(98)00323-6.
[12] A. Ito, M. Gokieli, M. Niezgódka and M. Szpindler, Mathematical analysis of approximate system for one-dimensional grain boundary motion of Kobayashi-Warren-Carter type, submitted.
[13] A. Ito, N. Kenmochi and N. Yamazaki, A phase-field model of grain boundary motion, Appl. Math., 53 (2008), 433. doi: 10.1007/s10492-008-0035-8.
[14] A. Ito, N. Kenmochi and N. Yamazaki, Weak solutions of grain boundary motion model with singularity, Rend. Mat. Appl. (7), 29 (2009), 51.
[15] N. Kenmochi, Solvability of nonlinear evolution equations with time-dependent constraints and applications, Bull. Fac. Education, 30 (1981), 1.
[16] N. Kenmochi, Monotonicity and compactness methods for nonlinear variational inequalities, in, 4 (2007), 203.
[17] R. Kobayashi and Y. Giga, Equations with singular diffusivity, J. Statist. Phys., 95 (1999), 1187. doi: 10.1023/A:1004570921372.
[18] R. Kobayashi, J. A. Warren and W. C. Carter, A continuum model of grain boundaries, Phys. D, 140 (2000), 141. doi: 10.1016/S0167-2789(00)00023-3.
[19] R. Kobayashi, J. A. Warren and W. C. Carter, Grain boundary model and singular diffusivity, in, 14 (1999), 283.
[20] A. E. Lobkovsky and J. A. Warren, Phase field model of premelting of grain boundaries, Phys. D, 164 (2002), 202.
[21] M. T. Lusk, A phase field paradigm for grain growth and recrystallization, Proc. R. Soc. London A, 455 (1999), 677.
[22] M. Ôtani, Nonmonotone perturbations for nonlinear parabolic equations associated with subdifferential operators, Cauchy problems, J. Differential Equations, 46 (1982), 268.
[23] A. Visintin, "Models of Phase Transitions," Progress in Nonlinear Differential Equations and their Applications, 28 (1996).
2019-06-24 19:56:10
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5078340172767639, "perplexity": 8264.098823948998}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-26/segments/1560627999709.4/warc/CC-MAIN-20190624191239-20190624213239-00135.warc.gz"}
https://learn.careers360.com/ncert/question-show-that-the-relation-r-in-the-set-1-2-3-given-by-r-is-equal-to-1-2-2-1-is-symmetric-but-neither-reflexive-nor-transitive/
# Q. 6 Show that the relation R in the set $\{1, 2, 3\}$ given by $R = \{(1, 2), (2, 1)\}$ is symmetric but neither reflexive nor transitive.

Let $A = \{1, 2, 3\}$ and $R = \{(1, 2), (2, 1)\}$.

Since $(1,1), (2,2), (3,3) \notin R$, the relation is not reflexive.

Since $(1,2) \in R$ and $(2,1) \in R$, the relation is symmetric.

$(1, 2) \in R$ and $(2, 1) \in R$, but $(1, 1) \notin R$, so the relation is not transitive.

Hence, R is symmetric but neither reflexive nor transitive.
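The three properties can also be checked mechanically. The short Python sketch below (not part of the original solution) brute-forces the definitions over the set:

    A = {1, 2, 3}
    R = {(1, 2), (2, 1)}

    # (a, a) in R for every a in A
    reflexive = all((a, a) in R for a in A)
    # (b, a) in R whenever (a, b) in R
    symmetric = all((b, a) in R for (a, b) in R)
    # (a, c) in R whenever (a, b) in R and (b, c) in R
    transitive = all((a, c) in R
                     for (a, b) in R for (b2, c) in R if b == b2)

    print(reflexive, symmetric, transitive)  # False True False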
2020-09-27 10:59:53
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 8, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8494798541069031, "perplexity": 7484.576112648867}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-40/segments/1600400274441.60/warc/CC-MAIN-20200927085848-20200927115848-00205.warc.gz"}
http://rescalm.com/topic/daea63-histogram-questions-and-answers-pdf
Histogram questions and answers, collected from several sources and grouped by topic:

Drawing and reading histograms:
- Explain what a histogram is and why histograms are used. A histogram is a type of bar chart for showing numerical data; however, unlike a standard bar chart, the histogram groups the numbers into ranges and doesn't leave any spaces between the bars.
- The table shows information about the age of 80 teachers. On the grid, draw a histogram for the information in the table.
- Display the data in a histogram. Step 1: Draw and label the axes. Step 2: Draw a bar to … (356 Chapter 8, Data Analysis and Samples, 8.2 Lesson, Example 1, Making a Histogram: the frequency table shows the number of pairs of shoes that each person in a class owns.)
- Draw a histogram and frequency polygon for the following distribution. Following is the frequency distribution of total marks obtained by the students of different sections of class IX.
- Using the data, complete the cumulative frequency table and construct a cumulative frequency histogram on the grid below. The cumulative histogram below gives the scores of a group of people who passed the exam.
- For the table below: a) List, separating with commas, the relative frequency of each one of the classes. (2 pts) b) List, separating with commas, the cumulative frequency of each one of the classes. Use decimals with two digits after the decimal point.
- (b) Work out an estimate for the number of cars with a speed of more than 85 km/h. (3 marks)
- Estimate the proportion of cars that travel between 100 mph and 120 mph. (Figure: histogram of car speeds, frequency density against speed of cars in mph, axis from 70 to 150.)
- Mode can also be obtained from a histogram. Step 1: Identify the modal class and the bar representing it. Step 2: Draw two cross lines as shown in the diagram. Step 3: Drop a perpendicular from the intersection of the two lines until it touches the horizontal axis. Step 4: Read the mode from the horizontal axis.

Interpreting histograms:
- The histogram below shows the ages of people attending the showing of a new movie. a) How many people attended the movie? b) What would you estimate the median age of moviegoers to be? c) What kind of movie might this be?
- For questions 6 to 10, refer to the following 2 histograms. These histograms were made in an attempt to determine if William Shakespeare was really just a pen name for Sir Francis Bacon. (A pen name is a fake name used by another person when writing.)
- Questions 35 through 37 refer to the following: in order to pass a driver's safety course, a person must answer at least 45 out of 50 questions correctly. 35) According to the table shown, how many total people passed the driver's safety exam?
- Given the histogram below, answer the questions: a) How would you describe the shape of this distribution?
- According to Lee et al. (2016), the histogram and frequency polygon graphs can be used for interval- and ratio-level data with which of the following exceptions? It will not necessarily "fit" the data.
- Multiple-choice options on the shape of a sampling distribution: b) the histogram will look approximately like a normal distribution because the number of samples is large and the Central Limit Theorem applies; c) the histogram will appear to be right skewed; d) the histogram will appear to be left skewed; e) the histogram will look like a uniform distribution.
- However, even with 1000 rolls of the dice, the histogram bars only approximate the PMF, for which exact values are shown by the red dots; the PMF is more closely approximated by 100,000 rolls of the dice.
- Histograms cannot be used to display the values of two or more variables.
- Measures of central tendency strive to present the centre of the data. It can become difficult to choose which measure is the best to interpret the data, because they all represent different aspects of the data set while simultaneously striving to make a statement about the centre value. (Interpreting mean, median and mode.)
- Refer back to the survey question about the amount of cans of soda consumed by their classmates. Then, decide whether each question can be answered using the dot plot, the histogram, or both. Think of two more statistical questions that can be answered using the data about populations of states.
- (Figure: (a) histogram of scores on the Total Self-esteem scale (tslfest), Mean = 33.53.)

Fitting a pdf to a histogram:
- Possible duplicate: "Fitting a density curve to a histogram in R". I'd like to plot on the same graph the histogram and various pdfs.
- If your histogram looks like a normal distribution, you could assume the distribution is normal and do a fit to find the parameters, then claim that is the PDF. Added: if you want, you can then try to find a distribution that "looks like" the histogram.
- I try to plot a normalized histogram using the example from the numpy.random.normal documentation. For this purpose I generate a normally distributed random sample: mu_true = 0; sigma_true = 0.1; s = np.random.normal(mu_true, sigma_true, 2000). Then I fit a normal distribution to the data and calculate the pdf. I have generated the Gaussian pdf below. (A completed version of this snippet appears after this list.)
- (Mathematica) Here I show them both scaled as PDFs: Show[Plot[pdf[x], {x, 0, 35}, PlotStyle -> Directive[Thickness[0.01], Red]], Histogram[sums, {0.5, 31.5, 1}, "PDF… (truncated in the source)
- **MATLAB** Generate Rayleigh histogram & pdf using two Gaussian functions.

Image processing:
- Question # 11: Histogram equalisation is mainly used for _____. The histogram is the basis for numerous spatial domain processing techniques. How to implement plateau limit histogram equalization? (Digital Image Processing (IT 603) and EC2029 Digital Image Processing, two-marks questions and answers.)
- Color image histograms: both types of histograms provide useful information about lighting, contrast, dynamic range and saturation effects, but no information about the actual color distribution! Images with totally different RGB colors can have the same R, G and B histograms; the solution to this ambiguity is the combined color histogram.

Teaching notes and sources:
- Lesson plan (55 minutes): guide students through the example on student notes; have students work you-try problems (independently); classwork: study guide; extra practice: BD-165 (Freq. Tbl, Histogram) with partner; assessment: question the students for understanding; monitor students as they work on classwork. (histogram notes, Michelle Schade.pdf)
- Pearson Edexcel Level 1/Level 2 GCSE (9-1) style questions arranged by topic: Histograms. Below is a collection of materials I have produced for my students. Very few have answers! Topics in Red(ish) are Higher only new 9-1 spec; topics in White are on Higher & Foundation. The new 9-1 Help Book has a full range of videos for the new spec.
- Skill coverage for one exam paper: calculate the frequency density (questions 1, 4a; 2 marks), calculate the frequency from a histogram (question 4b; 2 marks), accurately draw a histogram (questions 2, 4c; 3 marks), interpret a histogram (questions 3, 5, 6, 7; 10 marks); 17 marks in total.
- Exam guidance: ensure you have a pencil, pen, ruler, protractor, pair of compasses and eraser; you may use tracing paper if needed; read each question carefully before you begin answering it; don't spend too long on one question; attempt every question; check your answers seem right; check your answers if you have time at the end.
- The Guide (Harris, Rice, & Quinsey, 1993) is a list of questions that the psychologist answers after reviewing someone's behavioral history and conducting an interview. The answers to the Guide questions are mathematically combined to yield a value that predicts the likelihood of future violence.
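One of the fragments above generates a sample with numpy.random.normal but stops before the fit. A completed version could look like the sketch below; scipy.stats.norm.fit and matplotlib are my choices for the fit and the plot, not named in the source.

    import numpy as np
    import matplotlib.pyplot as plt
    from scipy.stats import norm

    # generate the normally distributed sample from the question
    mu_true, sigma_true = 0, 0.1
    s = np.random.normal(mu_true, sigma_true, 2000)

    # fit a normal distribution to the data (maximum likelihood estimates)
    mu_fit, sigma_fit = norm.fit(s)

    # normalized histogram (density=True makes the bar areas sum to 1)
    plt.hist(s, bins=50, density=True, alpha=0.5, label="sample")

    # overlay the fitted pdf
    x = np.linspace(s.min(), s.max(), 200)
    plt.plot(x, norm.pdf(x, mu_fit, sigma_fit), "r-", label="fitted pdf")
    plt.legend()
    plt.show()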
2022-12-05 12:29:41
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.39836180210113525, "perplexity": 1769.8429066020026}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-49/segments/1669446711016.32/warc/CC-MAIN-20221205100449-20221205130449-00034.warc.gz"}
https://ixora.io/itp/pcomp/experiments-with-sound/
# Experiments With Sound ## Upcycling a Speaker A year ago someone gave me a birthday card that played a song when the card was opened. As I was interested in learning more about circuits, I took apart the card and saved the electrical components for a time when I could dissect them and learn more about how they work. Last week we learned about sound in our physical computing class, so it seemed like a good time to put the inexpensive speaker to good use. To upcycle the speaker I rewired it to give it red and black wires for the speaker's positive and negative terminals and a header pin to go into a breadboard. I also built a 3D printed case as an assignment for my 3D printing class. It looks nice but I need to think about how to make this work with an Arduino. This isn't an Arduino provided part so it isn't immediately clear how to use this. If I put too much current through it I could potentially blow out the speaker. If the Arduino can't output enough current the speaker's sound might be inaudible. Will this work at all? The rear of the speaker has 8 Ω 0.25 W printed on it. This tells me the speaker's resistance and maximum power output. I can use this information to figure out how to use the speaker with an Arduino. Presumably I need to wire the speaker in series with a resistor of some unknown size. That circuit will look like this: I need to figure out the resistance of resistor R1 that ensures the electrical power going through the speaker does not exceed 0.25 W. Using the circuit diagram and the below equations, I can figure out if I can safely use this speaker with my Arduino. \begin{align*} V &= I \cdot R \\ W &= V \cdot I \end{align*} The total resistance through the circuit is $x + 8$ and the maximum voltage of a digital output pin is 5V. Using $V = I \cdot R$ I can calculate the current as: \begin{equation*} I = \frac{5}{x + 8} \end{equation*} Next I need to calculate the voltage difference across the speaker, $V_s$. The speaker's resistance is 8 Ω (which I confirmed with a multimeter), so using the same equation and my calculation for $I$, I can solve for $V_s$: \begin{align*} V_s &= \frac{5}{x + 8} \cdot 8 \\ &= \frac{40}{x + 8} \end{align*} Using $V_s$ and $I$ I can calculate the electrical power going through the speaker with $W = V \cdot I$. \begin{align*} W_s &= V_s \cdot I \\ &= \frac{40}{x + 8} \cdot \frac{5}{x + 8} \\ &= \frac{200}{x^2 + 16x + 64} \end{align*} The speaker is rated for 0.25 Watts, which is an upper limit on the amount of electrical power that should go through the speaker. \begin{equation*} \frac{200}{x^2 + 16x + 64} < 0.25 \end{equation*} What is the minimum amount of resistance $x$ necessary to keep $W_s < 0.25$? Using basic algebra and the quadratic formula I can calculate $x \cong 21$. Therefore, the resistor I add to the circuit should be at least 21 Ω. That seems kind of low to me. With a resistor of that size, what is the total amount of current going through the circuit? Using our equations we can calculate that as: \begin{equation*} I = \frac{5}{x + 8} = 0.172 \end{equation*} The current is 0.172 Amps, or 172 mA. That's more than the maximum amount of current that the Arduino's Atmel ATmega328P can safely output on a pin. The limit is 40 mA, and ideally my circuit is not actually near the limit. Therefore, a 21 Ω resistor is not large enough for this circuit. The necessary resistor size $x_2$ is: \begin{align*} \frac{5}{x_2 + 8} &= 0.040 \\ x_2 &= 117 \end{align*} I obtained a 150 Ω resistor from the shop. 
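As a quick sanity check of the arithmetic above (this check is mine, not part of the original post), the same numbers can be reproduced in a few lines of Python:

    import math

    V = 5.0        # Arduino digital pin voltage
    R_SPK = 8.0    # speaker resistance, ohms
    W_MAX = 0.25   # speaker power rating, watts
    I_MAX = 0.040  # safe per-pin current for the ATmega328P, amps

    # Smallest series resistance keeping speaker power under 0.25 W:
    # solve 200 / (x + 8)^2 = 0.25, i.e. x = sqrt(800) - 8, rounded up
    x_power = math.ceil(math.sqrt(V**2 * R_SPK / W_MAX) - R_SPK)
    print(x_power)                          # 21

    # Smallest series resistance keeping pin current under 40 mA
    x_current = math.ceil(V / I_MAX - R_SPK)
    print(x_current)                        # 117

    # Actual values with the 150 ohm resistor from the shop
    I = V / (150 + R_SPK)
    print(round(I * 1000), "mA")            # 32 mA
    print(round((I * R_SPK) * I, 4), "W")   # ~0.008 W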
If I use that resistor, how much current will go through the circuit? Substituting that into my equation for $I$, I get 32 mA. That is a reasonable amount that will not damage the board or the speaker. This current means the wattage used by the speaker is: \begin{align*} W_s &= V_s \cdot I \\ &= (0.032 \cdot 8) \cdot 0.032 \\ &= 0.008 \end{align*} That's pretty small, about 3% of what the speaker is capable of. Nevertheless, when I build the circuit and use it, I can hear a tone from the speaker. Therefore, I was able to successfully upcycle a speaker from a greeting card with an Arduino. ## Questions Why is it that the small circuit in a Hallmark card can play an actual song with its speaker but an Arduino can't play more than one pitch at a time? The card's circuit must be specially designed to modulate voltage in a particular way. How does it work? ## Simultaneous Pitches In class Tom told us that an Arduino can only generate a single tone at a time. He said that it wasn't possible to generate two simultaneous pitches at the same time and that attempts to switch back and forth between them resulted in very bad sound quality. In class he also talked about servos and how the Arduino's servo code worked. His explanation suggested to me that there should be a way to generate two simultaneous pitches. All of my initial ideas for doing this that I thought of during class were failures, but I learned a lot about Arduinos in the process of trying things out. I was intrigued by sound generation and stuck with it. Eventually I came up with a viable idea. I now claim that I can create a circuit that generates two proper simultaneous pitches. There are some limitations, but it definitely does what I say it does. I quickly realized the only way this could possibly work is with true analog output. The Arduino's analogWrite function uses Pulse Width Modulation (PWM). This feature will oscillate a digital pin from HIGH to LOW on a set frequency with the analogWrite value used to determine the portion of the time the pin is at HIGH or LOW. The end result is the average voltage over time matches the analogWrite parameter, but at any instant of time the voltage can only be HIGH or LOW. There are a few ways to get a true analog output from an Arduino. The way that I used that worked was to build a R-2R Resistor Ladder. Specifically, I built a 2-bit digital-to-analog converter using a bunch of resistors that all have the same resistance. The end result is I can use two digital pins to manufacture a voltage that can be at one of four voltage levels between LOW and HIGH. I can use this to achieve my desired result. The completed circuit is below. There are two buttons that control each of the two tones. Pressing both buttons at the same time generates both tones. The circuit by itself isn't enough. The Arduino code needs to be carefully written to allow it to flip the bits at precise intervals. I had to do some performance testing to measure how fast the digitalWrite function is (4 microseconds), which matters a great deal for this application. Nothing is instant with computers, and sometimes that matters. The relevant code is below.

    const int FREQ0 = 523;
    const int FREQ1 = 1046;

    // (the two declarations below are reconstructed; the extracted post omits them)
    unsigned long pause0, pause1;            // half-period lengths in microseconds
    bool button0 = false, button1 = false;   // last-read button states

    void setup() {
      // configure input/output pins...

      // calculate the number of microseconds between HIGH/LOW flips in the waveform
      pause0 = 1000000 / (2 * FREQ0);
      pause1 = 1000000 / (2 * FREQ1);
    }

Please forgive the following terse code. In this case making my code more readable might also make it slower, ruining the end result.
I added comments to attempt to explain it.

// (SPEAKER_PIN_0/1 and BUTTON_PIN_0/1 are assumed to be defined alongside the
// elided pin-configuration code in setup() above.)
bool button0 = false, button1 = false; // whether each button was pressed on the previous pass

void loop() {
  unsigned long t = micros();

  // advance to the next time we have to flip a bit
  int pause = min(pause0 - t % pause0, pause1 - t % pause1);
  delayMicroseconds(pause);
  t += pause;

  // if a button is being pressed, determine if its wave is HIGH or LOW based on an
  // odd or even number of elapsed pause delays
  bool wave0 = button0 && (t / pause0 % 2);
  bool wave1 = button1 && (t / pause1 % 2);

  // set digital pin 0 to HIGH if exactly one of wave0 or wave1 is HIGH
  // set digital pin 1 to HIGH if both wave0 and wave1 are HIGH
  // Note: HIGH == true
  // Note: digital pin 0 and 1 are never set to HIGH at the same time, but they could be.
  digitalWrite(SPEAKER_PIN_0, wave0 ^ wave1);
  digitalWrite(SPEAKER_PIN_1, wave0 & wave1);

  // read both button pins and compare values to HIGH to see if they are being pressed
  button0 = digitalRead(BUTTON_PIN_0) == HIGH;
  button1 = digitalRead(BUTTON_PIN_1) == HIGH;
}
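For intuition about what the two pins produce, here is a small Python sketch (my illustration, not from the post) of the ideal, unloaded output levels of a 2-bit R-2R ladder, assuming pin 1 drives the most significant bit; the actual levels will shift once the speaker loads the ladder.

```python
V_HIGH = 5.0  # volts

# Ideal unloaded 2-bit R-2R DAC: Vout = V_HIGH * (2*b1 + b0) / 4
for b1 in (0, 1):       # pin 1 (MSB): HIGH when both waves are HIGH
    for b0 in (0, 1):   # pin 0 (LSB): HIGH when exactly one wave is HIGH
        vout = V_HIGH * (2 * b1 + b0) / 4
        print(f"pin1={b1} pin0={b0} -> {vout:.2f} V")
# Four levels: 0.00, 1.25, 2.50, 3.75 V, so two simultaneous waves
# produce twice the amplitude of one wave, as the code intends.
```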
2019-01-22 23:39:19
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9723830819129944, "perplexity": 927.8833093484251}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-04/segments/1547583875448.71/warc/CC-MAIN-20190122223011-20190123005011-00028.warc.gz"}
https://mathoverflow.net/questions/304186/how-many-simple-closed-geodesics-in-a-given-primitive-homology-class
# How many simple closed geodesics in a given primitive homology class?

It is well-known that an essential closed curve on a hyperbolic surface (possibly with boundary) is homotopic to a unique closed geodesic. Moreover, if the curve under consideration is simple, then so is the geodesic homotopic to it. A reference is "A primer on mapping class groups" of Farb and Margalit (propositions 1.3 and 1.6).

It is proved here that for a torus with 1 puncture $\Sigma_{1, 1}$ (endowed with a complete hyperbolic metric) every primitive homology class $h \in H_1(\Sigma_{1, 1}, \mathbb{Z})\approx \mathbb{Z}^2$ contains a unique simple closed geodesic. This can be surprising for a beginner like me, since the preimage of $h$ under the abelianization map $\mathrm{ab}:\pi_1(\Sigma_{1, 1})\approx F_2\rightarrow H_1(\Sigma_{1, 1}, \mathbb{Z})$ is infinite. Every homotopy class in this preimage contains a closed geodesic, yet only one contains a simple closed geodesic.

My question is: are there examples of hyperbolic surfaces of different topology such that every primitive homology class contains exactly one simple closed geodesic? What are the results/references in this general direction?

The thrice-punctured sphere has no simple closed geodesics. The four-times punctured sphere has a unique simple geodesic in each homology class. In general, it is a result of I. Rivin that the number of simple closed geodesics of length bounded above by $L$ grows like $L^{6g - 6 + 2c}$, where $c$ is the number of punctures. We can restrict to closed surfaces for simplicity, so $c = 0$. Then the number of homology classes in which there is a simple closed geodesic of length $\leq L$ grows no faster than $L^{2g}$; if each primitive class contained exactly one simple closed geodesic, the two counts would have to be comparable, forcing $6g - 6 \leq 2g$, i.e. $g < 2$, which rules out every closed hyperbolic surface.

• Also, could you please explain why "the number of homology classes where there is a simple closed geodesic..." grows no faster than $L^{2g}$ for a closed surface? – user74900 Jul 3 '18 at 20:25

• @AknazarKazhymurat No, it does not matter if you have boundary (with the minor difference of whether or not you consider the boundary geodesic). As for the homology classes, it's because the minimal length of a homology class defines a norm on homology (and homology is a $2g$-dimensional vector space if considered over $\mathbb{R}$), but the minimal length is sometimes represented by a multi-curve (for a punctured torus, it's always a connected curve). – Igor Rivin Jul 3 '18 at 21:33
2020-11-26 07:40:49
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8601458668708801, "perplexity": 188.73744727063266}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-50/segments/1606141186761.30/warc/CC-MAIN-20201126055652-20201126085652-00694.warc.gz"}
http://arxitics.com/articles/1905.05604
## arXiv Analytics

### arXiv:1905.05604 [cs.LG]

#### Embeddings of Persistence Diagrams into Hilbert Spaces

Published 2019-05-11 (Version 1)

Since persistence diagrams do not admit an inner product structure, a map into a Hilbert space is needed in order to use kernel methods. It is natural to ask if such maps necessarily distort the metric on persistence diagrams. We show that persistence diagrams with the bottleneck distance do not admit a coarse embedding into a Hilbert space. As part of our proof, we show that any separable, bounded metric space isometrically embeds into the space of persistence diagrams with the bottleneck distance. As corollaries, we also calculate the generalized roundness, negative type, and asymptotic dimension of this space.

Categories: cs.LG, math.AT, math.MG, stat.ML
Subjects: 55N99, 46C05

Related articles:
arXiv:2002.05715 [cs.LG] (Published 2020-02-13) Self-Distillation Amplifies Regularization in Hilbert Space
arXiv:1902.09722 [cs.LG] (Published 2019-02-26) Topological Bayesian Optimization with Persistence Diagrams
arXiv:1910.06741 [cs.LG] (Published 2019-10-13) Adaptive template systems: Data-driven feature selection for learning with persistence diagrams
2020-04-04 21:44:55
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8105649352073669, "perplexity": 1841.4141618866138}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-16/segments/1585370525223.55/warc/CC-MAIN-20200404200523-20200404230523-00546.warc.gz"}
http://blog.mobileink.com/2015/05/constructors-and-co-constructors-in-hott.html
## Tuesday, May 19, 2015

### Constructors and Co-constructors in HoTT

The HoTT book's treatment of constructors, eliminators, etc. deviates somewhat from the treatment commonly found in texts on constructive logic. Of eliminators, it says that they specify how to use elements of a type. A computation rule "expresses how an eliminator acts on a constructor." In all cases, the book usually presents these things as functions.

The contrast is with the way intro and elim rules are usually presented in logic, as inference rules. For example, &-intro licenses an inference from A, B to A&B. This corresponds cleanly to the HoTT pair constructor. But &-intro is usually accompanied by two &-elim rules, call them &-elim1 and &-elim2, which are also inference rules.

&-elim1:  A&B |- A
&-elim2:  A&B |- B

This could also be expressed more HoTTly:

elim1:  (a,b):AxB |- a:A
elim2:  (a,b):AxB |- b:B

The HoTT book does not propose eliminators like this. Instead it defines projection functions pr1 and pr2, which are like elim rules but are not basic to the type definition. It offers an "elimination rule for products" that answers the question "How can we use pairs, i.e. how can we define functions out of a product type?" The technique is to define a function on pairs f:AxB->C in terms of an iterated function g:A->B->C.

So the contrast in this case is fairly clear: the logical tradition treats an eliminator as an inference that "undoes" a constructor; HoTT treats an eliminator as a function that "uses" a construction. In the case of &/x (i.e. product types), that means HoTT must introduce derived functions (projectors) in order to "undo" the constructor. Note that this makes HoTT's terminology a bit abusive; after all, a function on products does not necessarily eliminate anything: just look at the identity function.

Also note that in both cases the relation between constructor and eliminator is asymmetric; constructors are "more primitive" than eliminators. This follows Gentzen's original treatment of intro and elim rules: intro rules are viewed as defining (in a sense) the logical operators, and elim rules are derived (in a sense) from intro rules.

The HoTT book's strategy seems to follow from the decision to treat these things as functions. The projection functions cannot be primitive (to the product type) since we do not yet (in the middle of defining product types, so to speak) know how to apply a function to a product type. So you have to start by defining how any such function f:AxB->C works, and then use that result to define the specific projection functions.

The proposal here is to reconceptualize constructors and eliminators. This involves several basic moves: first, foreground the notion of inference rather than functions; second, do not treat constructors and eliminators as functions; and third, treat eliminators as co-constructors rather than undoers or users of constructors.

Foregrounding inference: read "a:A" as "from token a infer type A" (there's a lot to be said about this approach but I'll leave that for another post) and treat the rules defining types like AxB as inference rules. Not traditional inference rules that go from true premises to true conclusion, but inferency inference rules, that go from good inferences to good inferences. For example, read the pair constructor (intro rule) as something like "if the inference from a to A is good and the inference from b to B is good, then the inference from (a,b) to AxB is good".
Constructors and co-constructors: the idea is pretty simple and easily illustrated for the case of product types:

()-ctor:   $$a:A,\ b:B \vdash (a,b):A\times B$$

pr1-co-ctor:   $$x:A\times B \vdash pr1(x):A$$

pr2-co-ctor:   $$x:A\times B \vdash pr2(x):B$$

In the case of ctors, the rule is not to be construed in functional terms: it is not a function that produces a "result" of type AxB, given inputs of types A and B. It does not "produce" anything at all; it just licenses the inference from (a,b) to AxB, given inferences from a to A and from b to B. Strictly speaking we should not even think of this as constructing anything either, except perhaps the syntactic form (a,b).

In the case of co-constructors, the idea is that they, like constructors, license inferences. As in the case of so-called constructors, we're not really constructing anything; $$pr1(x)$$ has type $$A$$ if $$x$$ has type $$A\times B$$. That is all. But the construction metaphor is convenient nonetheless: co-constructors may be viewed as "devices" that construct something, just like constructors.

Notice that we could write these using a function type notation, but here (as in HoTT) there is no function definition for constructors and co-constructors.

A critical contrast with the HoTT (and the traditional logical) approach is that co-constructors need not operate on constructors. Constructors and co-constructors are perfectly symmetrical; nothing in the concept of types and tokens warrants (a priori) the primacy of constructors over co-constructors. In other words it is not the case that one must first construct an element in order to have something that an eliminator can operate on.

Also note that with this approach a HoTT-style definition of how functions on product types may be defined still works, but it is not to be construed as an elimination rule on products.

It follows from the definitions that $$x:A\times B \vdash (pr1(x), pr2(x)) : A\times B$$. This does not mean that every $$x:A\times B$$ is a pair, but it does mean that a pair can be inferred from every such $$x$$. Which dovetails with the HoTT book's strategy: to define a function on a product type it suffices to define it for all pairs.

What about the uniqueness principle uppt? In the HoTT book, it depends on the induction principle for pairs. But as just mentioned, that follows directly from the constructor and co-constructors.
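The positive/negative contrast the post gestures at can be made concrete in a proof assistant. Here is a small Lean 4 sketch (my illustration, not from the post; all names are hypothetical): the first presentation takes the pair constructor as primitive and derives the projections via the eliminator, as in the HoTT book; the second takes the projections as primitive, in the spirit of co-constructors, and derives pairing.

```lean
variable {A B : Type}

-- Positive presentation: the pair constructor is primitive;
-- projections are derived via the eliminator (pattern matching).
inductive PosProd (A B : Type) where
  | pair : A → B → PosProd A B

def PosProd.fst : PosProd A B → A
  | .pair a _ => a

def PosProd.snd : PosProd A B → B
  | .pair _ b => b

-- Negative presentation: the projections are primitive
-- ("co-constructors"); pairing is derived from them.
structure NegProd (A B : Type) where
  fst : A
  snd : B

def NegProd.pair (a : A) (b : B) : NegProd A B := ⟨a, b⟩

-- For the negative presentation the uniqueness principle (eta)
-- holds definitionally, so `rfl` proves it:
example (x : NegProd A B) : x = NegProd.pair x.fst x.snd := rfl
```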
2023-02-07 05:50:55
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7960772514343262, "perplexity": 2084.178019008756}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-06/segments/1674764500384.17/warc/CC-MAIN-20230207035749-20230207065749-00535.warc.gz"}
https://www.aimsciences.org/article/doi/10.3934/nhm.2021004?viewType=html
# American Institute of Mathematical Sciences

June 2021, 16(2): 187-219. doi: 10.3934/nhm.2021004

## A multiclass Lighthill-Whitham-Richards traffic model with a discontinuous velocity function

1 CI2MA and Departamento de Ingeniería Matemática, Universidad de Concepción, Casilla 160-C, Concepción, Chile
2 Laboratoire de Mathématiques de Versailles, UVSQ, CNRS, Université Paris-Saclay, 78035 Versailles, France
3 GIMNAP-Departamento de Matemáticas, Universidad del Bío-Bío, Concepción, Chile, CI2MA, Universidad de Concepción, Casilla 160-C, Concepción, Chile

* Corresponding author: R. Ordoñez

Date: January 11, 2021. Received: October 2020. Revised: December 2020. Published: January 2021.

The well-known Lighthill-Whitham-Richards (LWR) kinematic model of traffic flow models the evolution of the local density of cars by a nonlinear scalar conservation law. The transition between free and congested flow regimes can be described by a flux or velocity function that has a discontinuity at a determined density. A numerical scheme to handle the resulting LWR model with discontinuous velocity was proposed in [J.D. Towers, A splitting algorithm for LWR traffic models with flux discontinuities in the unknown, J. Comput. Phys., 421 (2020), article 109722]. A similar scheme is constructed by decomposing the discontinuous velocity function into a Lipschitz continuous function plus a Heaviside function and designing a corresponding splitting scheme. The part of the scheme related to the discontinuous flux is handled by a semi-implicit step that does, however, not involve the solution of systems of linear or nonlinear equations. It is proved that the whole scheme converges to a weak solution in the scalar case. The scheme can in a straightforward manner be extended to the multiclass LWR (MCLWR) model, which is defined by a hyperbolic system of $N$ conservation laws for $N$ driver classes that are distinguished by their preferential velocities. It is shown that the multiclass scheme satisfies an invariant region principle, that is, all densities are nonnegative and their sum does not exceed a maximum value. In the scalar and multiclass cases no flux regularization or Riemann solver is involved, and the CFL condition is not more restrictive than for an explicit scheme for the continuous part of the flux. Numerical tests for the scalar and multiclass cases are presented.

Citation: Raimund Bürger, Christophe Chalons, Rafael Ordoñez, Luis Miguel Villada. A multiclass Lighthill-Whitham-Richards traffic model with a discontinuous velocity function. Networks & Heterogeneous Media, 2021, 16 (2) : 187-219. doi: 10.3934/nhm.2021004

##### References:

[1] S. Benzoni-Gavage and R. M. Colombo, An $n$-populations model for traffic flow, Eur. J. Appl. Math., 14 (2003), 587-612.  doi: 10.1017/S0956792503005266.  Google Scholar
[2] S. Benzoni-Gavage, R. M. Colombo and P. Gwiazda, Measure valued solutions to conservation laws motivated by traffic modelling, Proc. Royal Soc. A, 462 (2006), 1791-1803.  doi: 10.1098/rspa.2005.1649.  Google Scholar
[3] M. Bulíček, P. Gwiazda, J. Málek and A. Świerczewska-Gwiazda, On scalar hyperbolic conservation laws with a discontinuous flux, Math.
Models Methods Appl. Sci., 21 (2011), 89-113.  doi: 10.1142/S021820251100499X.  Google Scholar [4] M. Bulíček, P. Gwiazda and A. Świerczewska-Gwiazda, Multi-dimensional scalar conservation laws with fluxes discontinuous in the unknown and the spatial variable, Math. Models Methods Appl. Sci., 23 (2013), 407-439.  doi: 10.1142/S0218202512500510.  Google Scholar [5] R. Bürger, C. Chalons and L. M. Villada, Anti-diffusive and random-sampling Lagrangian-remap schemes for the multi-class Lighthill-Whitham-Richards traffic model, SIAM J. Sci. Comput., 35 (2013), B1341–B1368. doi: 10.1137/130923877.  Google Scholar [6] R. Bürger, A. García, K. H. Karlsen and J. D. Towers, A family of numerical schemes for kinematic flows with discontinuous flux, J. Eng. Math., 60 (2008), 387-425.  doi: 10.1007/s10665-007-9148-4.  Google Scholar [7] R. Bürger, A. García, K. H. Karlsen and J. D. Towers, Difference schemes, entropy solutions, and speedup impulse for an inhomogeneous kinematic traffic flow model, Netw. Heterog. Media, 3 (2008), 1-41.  doi: 10.3934/nhm.2008.3.1.  Google Scholar [8] R. Bürger, K. H. Karlsen, H. Torres and J. D. Towers, Second-order schemes for conservation laws with discontinuous flux modelling clarifier-thickener units, Numer. Math., 116 (2010), 579-617.  doi: 10.1007/s00211-010-0325-4.  Google Scholar [9] R. Bürger, K. H. Karlsen and J. D. Towers, On some difference schemes and entropy conditions for a class of multi-species kinematic flow models with discontinuous flux, Netw. Heterog. Media, 5 (2010), 461-485.  doi: 10.3934/nhm.2010.5.461.  Google Scholar [10] R. Bürger, H. Torres and C. A. Vega, An entropy stable scheme for the multiclass Lighthill-Whitham-Richards traffic model, Adv. Appl. Math. Mech., 11 (2019), 1022-1047.  doi: 10.4208/aamm.OA-2018-0189.  Google Scholar [11] R. Bürger, P. Mulet and L. M. Villada, A diffusively corrected multiclass Lighthill-Whitham-Richards traffic model with anticipation lengths and reaction times, Adv. Appl. Math. Mech., 5 (2013), 728-758.  doi: 10.4208/aamm.2013.m135.  Google Scholar [12] J. Carrillo, Conservation law with discontinuous flux function and boundary condition, J. Evol. Equ., 3 (2003), 283-301.  doi: 10.1007/s00028-003-0095-x.  Google Scholar [13] C. Chalons and P. Goatin, Godunov scheme and sampling technique for computing phase transitions in traffic flow modeling, Interf. Free Bound., 10 (2008), 197-221.  doi: 10.4171/IFB/186.  Google Scholar [14] R. M. Colombo, Hyperbolic phase transitions in traffic flow, SIAM J. Appl. Math., 63 (2002), 708-721.  doi: 10.1137/S0036139901393184.  Google Scholar [15] J. P. Dias and M. Figueira, On the Riemann problem for some discontinuous systems of conservation laws describing phase transitions, Commun. Pure Appl. Anal., 3 (2004), 53-58.  doi: 10.3934/cpaa.2004.3.53.  Google Scholar [16] J. P. Dias and M. Figueira, On the approximation of the solutions of the Riemann problem for a discontinuous conservation law, Bull. Braz. Math. Soc. New Ser., 36 (2005), 115-125.  doi: 10.1007/s00574-005-0031-5.  Google Scholar [17] J. P. Dias, M. Figueira and J. F. Rodrigues, Solutions to a scalar discontinuous conservation law in a limit case of phase transitions, J. Math. Fluid Mech., 7 (2005), 153-163.  doi: 10.1007/s00021-004-0113-y.  Google Scholar [18] S. Diehl, A conservation law with point source and discontinuous flux function, SIAM J. Math. Anal., 56 (1996), 388-419.  doi: 10.1137/S0036139994242425.  Google Scholar [19] R. Donat and P. 
Mulet, Characteristic-based schemes for multi-class Lighthill-Whitham-Richards traffic models, J. Sci. Comput., 37 (2008), 233-250.  doi: 10.1007/s10915-008-9209-5.  Google Scholar [20] R. Donat and P. Mulet, A secular equation for the Jacobian matrix of certain multi-species kinematic flow models, Numer. Methods Partial Differential Equations, 26 (2010), 159-175.  doi: 10.1002/num.20423.  Google Scholar [21] T. Gimse, Conservation laws with discontinuous flux functions, SIAM J. Numer. Anal., 24 (1993), 279-289.  doi: 10.1137/0524018.  Google Scholar [22] T. Gimse and N. H. Risebro, Solution to the Cauchy problem for a conservation law with a discontinuous flux function, SIAM J. Math. Anal., 23 (1992), 635-648.  doi: 10.1137/0523032.  Google Scholar [23] M. Hilliges and W. Weidlich, A phenomenological model for dynamic traffic flow in networks, Transp. Res. B, 29 (1995), 407-431.  doi: 10.1016/0191-2615(95)00018-9.  Google Scholar [24] H. Holden and N. H. Risebro, Front Tracking for Hyperbolic Conservation Laws, 2$^{nd}$ edition, Springer-Verlag, Berlin, 2015. doi: 10.1007/978-3-662-47507-2.  Google Scholar [25] M. J. Lighthill and G. B. Whitham, On kinematic waves: II. A theory of traffic flow on long crowded roads, Proc. Royal Soc. A, 229 (1955), 317-345.  doi: 10.1098/rspa.1955.0089.  Google Scholar [26] Y. Lu, S. Wong, M. Zhang and C.-W. Shu, The entropy solutions for the Lighthill-Whitham-Richards traffic flow model with a discontinuous flow-density relationship, Transp. Sci., 43 (2009), 511-530.   Google Scholar [27] S. Martin and J. Vovelle, Convergence of the finite volume method for scalar conservation law with discontinuous flux function, ESAIM Math. Model. Numer. Anal., 42 (2008), 699-727.  doi: 10.1051/m2an:2008023.  Google Scholar [28] P. I. Richards, Shock waves on the highway, Oper. Res., 4 (1956), 42-51.  doi: 10.1287/opre.4.1.42.  Google Scholar [29] J. D. Towers, A splitting algorithm for LWR traffic models with flux discontinuities in the unknown, J. Comput. Phys., 421 (2020), 109722, 30 pp. doi: 10.1016/j.jcp.2020.109722.  Google Scholar [30] J. K. Wiens, J. M. Stockie and J. F. Williams, Riemann solver for a kinematic wave traffic model with discontinuous flux, J. Comput. Phys., 242 (2013), 1-23.  doi: 10.1016/j.jcp.2013.02.024.  Google Scholar [31] G. C. K. Wong and S. C. Wong, A multi-class traffic flow model–-an extension of LWR model with heterogeneous drivers, Transp. Res. A, 36 (2002), 827-841.  doi: 10.1016/S0965-8564(01)00042-8.  Google Scholar [32] P. Zhang, R. X. Liu, S. C. Wong and D. Q. Dai, Hyperbolicity and kinematic waves of a class of multi-population partial differential equations, Eur. J. Appl. Math., 17 (2006), 171-200.  doi: 10.1017/S095679250500642X.  Google Scholar [33] M. Zhang, C.-W. Shu, G. C. K. Wong and S. C. Wong, A weighted essentially non-oscillatory numerical scheme for a multi-class Lighthill-Whitham-Richards traffic flow model, J. Comput. Phys., 191 (2003), 639-659.  doi: 10.1016/j.jcp.2005.07.019.  Google Scholar [34] P. Zhang, S. C. Wong and C.-W. Shu, A weighted essentially non-oscillatory numerical scheme for a multi-class traffic flow model on an inhomogeneous highway, J. Comput. Phys., 212 (2006), 739-756.  doi: 10.1016/j.jcp.2005.07.019.  Google Scholar [35] P. Zhang, R.-X. Liu, S. C. Wong and S. Q. Dai, Hyperbolicity and kinematic waves of a class of multi-population partial differential equations, Eur. J. Appl. Math., 17 (2006), 171-200.  doi: 10.1017/S095679250500642X.  Google Scholar [36] P. Zhang, S. C. Wong and S. Q. 
Dai, A note on the weighted essentially non-oscillatory numerical scheme for a multi-class Lighthill-Whitham-Richards traffic flow model, Commun. Numer. Meth. Eng., 25 (2009), 1120-1126.  doi: 10.1002/cnm.1277.  Google Scholar
[37] P. Zhang, S. C. Wong and Z. Xu, A hybrid scheme for solving a multi-class traffic flow model with complex wave breaking, Comput. Methods Appl. Mech. Engrg., 197 (2008), 3816-3827.  doi: 10.1016/j.cma.2008.03.003.  Google Scholar

Figures (captions only; the images are not included here):

Figure 1. (a) Piecewise continuous velocity function $V(\phi)$ with discontinuity at $\phi = \phi^*$, (b) continuous and discontinuous portions $p_V(\phi)$ (solid line) and $g_V(\phi)$ (dashed line)

Figure 2. (a) Function $z \mapsto \tilde{G}_V(z;\phi)$ given by (2.9a) with $\lambda v^{\max} = 1/2$, $\alpha_V = 0.3$, and $\phi = 0.8$, (b) its inverse $z \mapsto \tilde{G}_V^{-1}(z;\phi)$ given by (2.9b)

Figure 3. Example 1: numerical solution with $M = 800$ and comparison with the exact solution of the Riemann problem (a) with $\phi_{\mathrm{L}} = 0.3$ and $\phi_{\mathrm{R}} = 0.9$ at simulated time $T = 1.8$, (b) with $\phi_{\mathrm{L}} = 0.9$ and $\phi_{\mathrm{R}} = 0.3$ at simulated time $T = 1.5$. Here and in Figures 4 and 5 we label with 'Towers scheme' the scheme (1.7) proposed in [29] and by 'BCOV scheme' the scheme of Algorithm 2.1 advanced in the present work

Figure 4. Example 2: numerical solutions for $M = 100$ at simulated times (a) $T = 0.1$, (b) $T = 0.3$

Figure 5. Example 3: numerical solutions depending on the boundary conditions $\mathcal{F}(t)\in\tilde{f}(\phi^*)$ with $M = 1600$ at simulated time $T = 0.5$, with (a) $\mathcal{F}(t)\in\tilde{f}(\phi^*-)$ (free flow), (b) $\mathcal{F}(t)\in\tilde{f}(\phi^*+)$ (congested flow)

Figure 6. Example 4: density profiles simulated with $M = 1600$ at (a) $T = 0.2$, (b) $T = 0.4$, (c) $T = 0.6$

Figure 7. Example 5: numerical solution for a free-flow regime ($\mathcal{G}(t) = \alpha_V$): (a) initial condition, (b, c) density profiles with $M = 1600$ at simulated times (b) $T = 0.1$, (c) $T = 0.2$

Figure 8. Example 5: simulated total density computed with BCOV scheme with $N = 3$ and $M = 1600$: (a) free flow ($\mathcal{G}(t) = \alpha_V$), (b) congested flow ($\mathcal{G}(t) = 0$)

Figure 9. Example 5: numerical solution for a congested-flow regime ($\mathcal{G}(t) = 0$): density profiles with $M = 1600$ at simulated times (a) $T = 0.1$, (b) $T = 0.2$. The initial condition is the same as in Figure 7(a)

Figure 10. Example 6: numerical solutions obtained with BCOV scheme with $N = 5$ and $M = 1600$ at simulated times (a) $T = 0.02$, (b) $T = 0.12$

Figure 11. Example 6: simulated total density obtained with BCOV scheme with $N = 5$ and $M = 1600$: (a) discontinuous problem, (b) continuous problem

Figure 12. Example 6: comparison of reference solution ($M_{\text{ref}} = 12800$) with approximate solutions computed by BCOV scheme with $M = 100$ at simulated time $T = 0.02$

Figure 13. Example 6: comparison of reference solution ($M_{\text{ref}} = 12800$) with approximate solutions computed by BCOV scheme with $M = 100$ at simulated time $T = 0.02$

Figure 14. Example 7: numerical solution computed with BCOV scheme with $N = 5$ and $M = 12800$ at simulated times (a) $T = 0.1$, (b) $T = 0.2$ and (c) $T = 0.3$

Figure 15. Example 7: simulated total density computed with BCOV scheme with $N = 5$ and $M = 1600$

Table 1. Example 2: approximate $L^1$ errors $e_{M}(\phi^{\Delta})$ with $\Delta x = 2/M$

| $M$ | Towers, $T=0.1$ | BCOV, $T=0.1$ | Towers, $T=0.3$ | BCOV, $T=0.3$ |
|------|---------|---------|---------|---------|
| 100  | 1.32e-2 | 1.76e-2 | 1.63e-2 | 2.39e-2 |
| 200  | 6.55e-3 | 9.22e-3 | 8.59e-3 | 1.31e-2 |
| 400  | 3.29e-3 | 4.46e-3 | 4.25e-3 | 6.46e-3 |
| 800  | 1.72e-3 | 2.40e-3 | 2.12e-3 | 3.31e-3 |
| 1600 | 8.00e-4 | 1.18e-3 | 9.29e-4 | 1.56e-3 |

Table 2. Example 6: approximate $L^1$ errors $e_{M}(\phi^{\Delta})$ with $\Delta x = 2/M$

| $M$ | $T=0.02$ | $T=0.12$ |
|------|----------|----------|
| 100  | 1.39e-2 | 3.87e-2 |
| 200  | 7.90e-3 | 2.47e-2 |
| 400  | 4.20e-3 | 1.55e-2 |
| 800  | 2.00e-3 | 9.20e-3 |
| 1600 | 1.00e-3 | 5.10e-3 |

Table 3. Example 7: approximate $L^1$ errors $e_{M}(\phi^{\Delta})$ with $\Delta x = 5/M$

| $M$ | $T=0.1$ | $T=0.2$ | $T=0.3$ |
|------|---------|---------|---------|
| 100  | 7.42e-2 | 9.50e-2 | 1.06e-1 |
| 200  | 4.12e-2 | 5.50e-2 | 6.49e-2 |
| 400  | 2.27e-2 | 3.34e-2 | 3.88e-2 |
| 800  | 1.24e-2 | 1.97e-2 | 2.35e-2 |
| 1600 | 6.50e-3 | 1.10e-2 | 1.35e-2 |

Related articles:

[1] Helge Holden, Nils Henrik Risebro. Follow-the-Leader models can be viewed as a numerical approximation to the Lighthill-Whitham-Richards model for traffic flow. Networks & Heterogeneous Media, 2018, 13 (3) : 409-421. doi: 10.3934/nhm.2018018
[2] Mauro Garavello, Roberto Natalini, Benedetto Piccoli, Andrea Terracina. Conservation laws with discontinuous flux. Networks & Heterogeneous Media, 2007, 2 (1) : 159-179. doi: 10.3934/nhm.2007.2.159
[3] Wen Shen. Traveling waves for conservation laws with nonlocal flux for traffic flow on rough roads. Networks & Heterogeneous Media, 2019, 14 (4) : 709-732. doi: 10.3934/nhm.2019028
[4] Boris Andreianov, Kenneth H. Karlsen, Nils H. Risebro. On vanishing viscosity approximation of conservation laws with discontinuous flux. Networks & Heterogeneous Media, 2010, 5 (3) : 617-633. doi: 10.3934/nhm.2010.5.617
[5] Darko Mitrovic. New entropy conditions for scalar conservation laws with discontinuous flux. Discrete & Continuous Dynamical Systems, 2011, 30 (4) : 1191-1210. doi: 10.3934/dcds.2011.30.1191
[6] Mauro Garavello. The LWR traffic model at a junction with multibuffers. Discrete & Continuous Dynamical Systems - S, 2014, 7 (3) : 463-482. doi: 10.3934/dcdss.2014.7.463
[7] Oliver Kolb, Simone Göttlich, Paola Goatin. Capacity drop and traffic control for a second order traffic model. Networks & Heterogeneous Media, 2017, 12 (4) : 663-681. doi: 10.3934/nhm.2017027
[8] Guillaume Costeseque, Jean-Patrick Lebacque. Discussion about traffic junction modelling: Conservation laws VS Hamilton-Jacobi equations. Discrete & Continuous Dynamical Systems - S, 2014, 7 (3) : 411-433. doi: 10.3934/dcdss.2014.7.411
[9] Gabriella Bretti, Roberto Natalini, Benedetto Piccoli. Numerical approximations of a traffic flow model on networks. Networks & Heterogeneous Media, 2006, 1 (1) : 57-84. doi: 10.3934/nhm.2006.1.57
[10] Gabriella Bretti, Roberto Natalini, Benedetto Piccoli. Fast algorithms for the approximation of a traffic flow model on networks. Discrete & Continuous Dynamical Systems - B, 2006, 6 (3) : 427-448. doi: 10.3934/dcdsb.2006.6.427
[11] Florent Berthelin, Damien Broizat. A model for the evolution of traffic jams in multi-lane. Kinetic & Related Models, 2012, 5 (4) : 697-728. doi: 10.3934/krm.2012.5.697
[12] Fabio Della Rossa, Carlo D'Angelo, Alfio Quarteroni. A distributed model of traffic flows on extended regions. Networks & Heterogeneous Media, 2010, 5 (3) : 525-544. doi: 10.3934/nhm.2010.5.525
[13] Wen Shen, Karim Shikh-Khalil. Traveling waves for a microscopic model of traffic flow. Discrete & Continuous Dynamical Systems, 2018, 38 (5) : 2571-2589. doi: 10.3934/dcds.2018108
[14] Michael Herty, J.-P. Lebacque, S. Moutari. A novel model for intersections of vehicular traffic flow. Networks & Heterogeneous Media, 2009, 4 (4) : 813-826. doi: 10.3934/nhm.2009.4.813
[15] Mauro Garavello, Francesca Marcellini. The Riemann Problem at a Junction for a Phase Transition Traffic Model. Discrete & Continuous Dynamical Systems, 2017, 37 (10) : 5191-5209. doi: 10.3934/dcds.2017225
[16] Ángela Jiménez-Casas, Aníbal Rodríguez-Bernal. Linear model of traffic flow in an isolated network. Conference Publications, 2015, 2015 (special) : 670-677. doi: 10.3934/proc.2015.0670
[17] Julien Jimenez. Scalar conservation law with discontinuous flux in a bounded domain. Conference Publications, 2007, 2007 (Special) : 520-530. doi: 10.3934/proc.2007.2007.520
[18] Anupam Sen, T. Raja Sekhar. Structural stability of the Riemann solution for a strictly hyperbolic system of conservation laws with flux approximation. Communications on Pure & Applied Analysis, 2019, 18 (2) : 931-942. doi: 10.3934/cpaa.2019045
[19] Alberto Bressan, Khai T. Nguyen. Conservation law models for traffic flow on a network of roads. Networks & Heterogeneous Media, 2015, 10 (2) : 255-293. doi: 10.3934/nhm.2015.10.255
[20] C. M. Khalique, G. S. Pai. Conservation laws and invariant solutions for soil water equations. Conference Publications, 2003, 2003 (Special) : 477-481. doi: 10.3934/proc.2003.2003.477
2021-06-15 12:47:58
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7207558155059814, "perplexity": 4569.681468169611}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-25/segments/1623487621273.31/warc/CC-MAIN-20210615114909-20210615144909-00279.warc.gz"}
https://laurentjacques.gitlab.io/publication/comment-fast-directional-correlation-on-the-sphere-with-steerable-filters/
# Fast directional correlation on the sphere with steerable filters

Type: Publication
Published in: Astrophys. J. 652, 820

Abstract: A fast algorithm is developed for the directional correlation of scalar band-limited signals and band-limited steerable filters on the sphere. The asymptotic complexity associated to it through simple quadrature is of order $$O(L^5)$$, where $$2L$$ stands for the square-root of the number of sampling points on the sphere, also setting a band limit L for the signals and filters considered. The filter steerability allows to compute the directional correlation uniquely in terms of direct and inverse scalar spherical harmonics transforms, which drive the overall asymptotic complexity. The separation of variables technique for the scalar spherical harmonics transform produces an $$O(L^3)$$ algorithm independently of the pixelization. On equi-angular pixelizations, a sampling theorem introduced by Driscoll and Healy implies the exactness of the algorithm. The equi-angular and HEALPix implementations are compared in terms of memory requirements, computation times, and numerical stability. The computation times for the scalar transform, and hence for the directional correlation, of maps of several megapixels on the sphere ($$L\sim 10^3$$) are reduced from years to tens of seconds in both implementations on a single standard computer. These generic results for the scale-space signal processing on the sphere are specifically developed in the perspective of the wavelet analysis of the cosmic microwave background (CMB) temperature (T) and polarization (E and B) maps of the WMAP and Planck experiments. As an illustration, we consider the computation of the wavelet coefficients of a simulated temperature map of several megapixels with the second Gaussian derivative wavelet.
2022-10-04 00:07:19
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6409119367599487, "perplexity": 629.3403594985232}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337446.8/warc/CC-MAIN-20221003231906-20221004021906-00104.warc.gz"}
https://blog.myrank.co.in/triangle-law-of-vector-addition-of-two-vectors/
# Triangle Law of Vector Addition of Two Vectors

## Triangle Law of Vector Addition of Two Vectors

Physical quantities may be broadly classified into vectors and scalars. A vector is a physical quantity that has both magnitude and direction; a scalar has magnitude only.

If two non-zero vectors are represented by two sides of a triangle taken in the same order, then the resultant is given by the third (closing) side taken in the opposite order, i.e.,

$$\overrightarrow{R}=\overrightarrow{A}+\overrightarrow{B}$$

$$\because \overrightarrow{OB}=\overrightarrow{OA}+\overrightarrow{AB}$$

1) Magnitude of the resultant vector: In ΔABN,

$$\cos \theta =\frac{AN}{B}$$ ⇒ AN = B cos θ

$$\sin \theta =\frac{BN}{B}$$ ⇒ BN = B sin θ

In ΔOBN, we have OB² = ON² + BN², so

R² = (A + B cos θ)² + (B sin θ)²

R² = A² + B² cos² θ + 2AB cos θ + B² sin² θ

R² = A² + B² (cos² θ + sin² θ) + 2AB cos θ = A² + B² + 2AB cos θ

$$R=\sqrt{{{A}^{2}}+{{B}^{2}}+2AB\cos \theta }$$

2) Direction of the resultant vector: If θ is the angle between $$\overrightarrow{A}$$ and $$\overrightarrow{B}$$, then

$$|\overrightarrow{A}+\overrightarrow{B}|\,=\,\sqrt{{{A}^{2}}+{{B}^{2}}+2AB\cos \theta }$$

If $$\overrightarrow{R}$$ makes an angle α with $$\overrightarrow{A}$$, then in ΔOBN,

$$\tan \alpha =\frac{BN}{ON}=\frac{BN}{OA+AN}=\frac{B\sin \theta }{A+B\cos \theta }$$
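As a quick numerical check of these formulas, here is a short Python sketch (my addition; the values A = 3, B = 4, θ = 60° are arbitrary example inputs):

```python
import math

A, B = 3.0, 4.0
theta = math.radians(60)  # angle between the two vectors

# Magnitude of the resultant: R = sqrt(A^2 + B^2 + 2AB cos(theta))
R = math.sqrt(A**2 + B**2 + 2 * A * B * math.cos(theta))

# Direction: tan(alpha) = B sin(theta) / (A + B cos(theta))
alpha = math.degrees(math.atan2(B * math.sin(theta), A + B * math.cos(theta)))

print(f"R = {R:.3f}")           # 6.083
print(f"alpha = {alpha:.1f} deg")  # 34.7
```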
2019-11-17 17:28:14
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.916531503200531, "perplexity": 1321.4405172892314}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-47/segments/1573496669225.56/warc/CC-MAIN-20191117165616-20191117193616-00247.warc.gz"}
https://lavelle.chem.ucla.edu/forum/viewtopic.php?f=141&t=28090&p=86029
## 14.27 $\Delta G^{\circ} = -nFE_{cell}^{\circ}$ Posts: 58 Joined: Fri Sep 29, 2017 7:06 am Been upvoted: 1 time ### 14.27 Using data in Appendix 2B, calculate the standard potential for the half-reaction U4+ + 4e ---> U I understand that we need to use the standard potential from appendix 2B, but why do we first need to calculate delta G? Is it because the standard potential is an intensive property? Ramya Lakkaraju 1B Posts: 67 Joined: Fri Sep 29, 2017 7:03 am ### Re: 14.27 I think it might have to do with the fact that delta G is a state function so you can add each step's delta G to get the total. Posts: 88 Joined: Fri Sep 29, 2017 7:03 am ### Re: 14.27 I also do not understand why we would need deltaG values to calculate the potential of a half-reaction Phillip Winters 2F Posts: 50 Joined: Fri Sep 29, 2017 7:05 am ### Re: 14.27 If you add the deltaG values together, then you can use the formula deltaG=-nFE to calculate the cell potential Jenny Cheng 2K Posts: 30 Joined: Fri Sep 29, 2017 7:05 am Been upvoted: 1 time ### Re: 14.27 Gibbs Free Energy is a state function, so you can add intermediate components to reach your desired final value. Cell potential is NOT a state function, so you can't use the same addition method. However, you can relate Gibbs Free Energy and cell potential through ΔG=-nFE. Alexandria Weinberger Posts: 51 Joined: Fri Sep 29, 2017 7:03 am ### Re: 14.27 Gibb's Free Energy is a state function, so the values of individual reactions can be added together to find the free energy of the overall reaction. Standard potential is not a state function, so you cannot add potentials together or take away to find the potentials of intermediate reactions. Clarisse Wikstrom 1H Posts: 63 Joined: Fri Sep 29, 2017 7:05 am ### Re: 14.27 It would seem easier to just add up the E values for each half reaction, correct? However, you can only do this when solving for the potential of an entire reaction, not a half reaction. To solve for the potential of a HALF-REACTION, you must find the delta G values of each one, add them together, then convert to E. This is because deltaG is a state function, as well as the fact that E values correspond to a specific half reaction based on experimentation and cannot just be added together unless solving for the E value for the total rxn.
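To make the addition concrete, here is a short Python sketch of the calculation described in this thread (my illustration; the two half-reaction potentials are typical Appendix 2B values — U⁴⁺ + e⁻ → U³⁺ with E° ≈ −0.61 V and U³⁺ + 3e⁻ → U with E° ≈ −1.79 V — and should be checked against your own table):

```python
F = 96485.0  # Faraday constant, C/mol

# Half-reactions as (n electrons, standard potential in volts) -- values assumed
steps = [(1, -0.61),   # U4+ + e-  -> U3+
         (3, -1.79)]   # U3+ + 3e- -> U

# deltaG is a state function, so the step values add: dG = -n * F * E
dG_total = sum(-n * F * E for n, E in steps)

# Convert back to a potential for the overall 4-electron half-reaction
n_total = 4
E_overall = -dG_total / (n_total * F)
print(f"E (U4+ + 4e- -> U) = {E_overall:.2f} V")  # about -1.5 V
```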
2019-10-14 18:14:44
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 1, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6446018218994141, "perplexity": 1397.7630330403736}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-43/segments/1570986654086.1/warc/CC-MAIN-20191014173924-20191014201424-00278.warc.gz"}
https://www.gradesaver.com/textbooks/math/geometry/geometry-common-core-15th-edition/chapter-6-polygons-and-quadrilaterals-6-2-properties-of-parallelograms-practice-and-problem-solving-exercises-page-366/52
# Chapter 6 - Polygons and Quadrilaterals - 6-2 Properties of Parallelograms - Practice and Problem-Solving Exercises - Page 366: 52

The sum of the measures of the interior angles in a $40$-gon = $6840^{\circ}$

#### Work Step by Step

According to the Polygon Angle-Sum Theorem, the sum of the measures of the interior angles of a polygon is $(n - 2)180$, where $n$ is the number of sides of the polygon. Our polygon has $40$ sides.

Sum of the interior angles in a $40$-gon = $(40 - 2)180$

Evaluate what is in parentheses first, according to the order of operations:

Sum of the interior angles in a $40$-gon = $(38)180$

Multiply to solve:

Sum of the interior angles in a $40$-gon = $6840^{\circ}$
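The same formula generalizes to any polygon; a tiny Python sketch (my addition) for checking such exercises:

```python
def interior_angle_sum(n: int) -> int:
    """Sum of the interior angle measures of an n-sided polygon, in degrees."""
    return (n - 2) * 180

print(interior_angle_sum(40))  # 6840
```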
2020-09-30 06:54:55
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.24607819318771362, "perplexity": 350.63381527720975}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-40/segments/1600402118004.92/warc/CC-MAIN-20200930044533-20200930074533-00331.warc.gz"}
https://www.datacamp.com/community/tutorials/recommender-systems-python
# Recommender Systems in Python: Beginner Tutorial

Learn how to build your own recommendation engine with the help of Python, from basic models to content-based and collaborative filtering recommender systems.

Recommender systems are among the most popular applications of data science today. They are used to predict the "rating" or "preference" that a user would give to an item. Almost every major tech company has applied them in some form or the other: Amazon uses them to suggest products to customers, YouTube uses them to decide which video to play next on autoplay, and Facebook uses them to recommend pages to like and people to follow. What's more, for some companies -think Netflix and Spotify-, the business model and its success revolves around the potency of their recommendations. In fact, Netflix even offered a million dollars in 2009 to anyone who could improve its system by 10%.

In this tutorial, you will see how to build a basic model of simple as well as content-based recommender systems. While these models will be nowhere close to the industry standard in terms of complexity, quality or accuracy, they will help you get started with building more complex models that produce even better results.

But what are these recommender systems? Broadly, recommender systems can be classified into 3 types:

• Simple recommenders: offer generalized recommendations to every user, based on movie popularity and/or genre. The basic idea behind this system is that movies that are more popular and critically acclaimed will have a higher probability of being liked by the average audience. IMDB Top 250 is an example of this system.

• Content-based recommenders: suggest similar items based on a particular item. This system uses item metadata, such as genre, director, description, actors, etc. for movies, to make these recommendations. The general idea behind these recommender systems is that if a person liked a particular item, he or she will also like an item that is similar to it.

• Collaborative filtering engines: these systems try to predict the rating or preference that a user would give an item based on past ratings and preferences of other users. Collaborative filters do not require item metadata like their content-based counterparts.

## Simple Recommenders

As described in the previous section, simple recommenders are basic systems that recommend the top items based on a certain metric or score. In this section, you will build a simplified clone of IMDB Top 250 Movies using metadata collected from IMDB. The following are the steps involved:

• Decide on the metric or score to rate movies on.
• Calculate the score for every movie.
• Sort the movies based on the score and output the top results.

Before you perform any of the above steps, load your movies metadata dataset into a pandas DataFrame:

# Import Pandas
import pandas as pd

# Load the movies metadata (file path assumed; adjust to your dataset)
metadata = pd.read_csv('movies_metadata.csv', low_memory=False)

# Print the first three rows
metadata.head(3)

adult belongs_to_collection budget genres homepage id imdb_id original_language original_title overview ... release_date revenue runtime spoken_languages status tagline title video vote_average vote_count 0 False {'id': 10194, 'name': 'Toy Story Collection', ... 30000000 [{'id': 16, 'name': 'Animation'}, {'id': 35, '... http://toystory.disney.com/toy-story 862 tt0114709 en Toy Story Led by Woody, Andy's toys live happily in his ... ... 1995-10-30 373554033.0 81.0 [{'iso_639_1': 'en', 'name': 'English'}] Released NaN Toy Story False 7.7 5415.0 1 False NaN 65000000 [{'id': 12, 'name': 'Adventure'}, {'id': 14, '...
NaN 8844 tt0113497 en Jumanji When siblings Judy and Peter discover an encha... ... 1995-12-15 262797249.0 104.0 [{'iso_639_1': 'en', 'name': 'English'}, {'iso... Released Roll the dice and unleash the excitement! Jumanji False 6.9 2413.0 2 False {'id': 119050, 'name': 'Grumpy Old Men Collect... 0 [{'id': 10749, 'name': 'Romance'}, {'id': 35, ... NaN 15602 tt0113228 en Grumpier Old Men A family wedding reignites the ancient feud be... ... 1995-12-22 0.0 101.0 [{'iso_639_1': 'en', 'name': 'English'}] Released Still Yelling. Still Fighting. Still Ready for... Grumpier Old Men False 6.5 92.0 3 rows × 24 columns One of the most basic metrics you can think of is the rating. However, using this metric has a few caveats. For one, it does not take into consideration the popularity of a movie. Therefore, a movie with a rating of 9 from 10 voters will be considered 'better' than a movie with a rating of 8.9 from 10,000 voters. On a related note, this metric will also tend to favor movies with smaller number of voters with skewed and/or extremely high ratings. As the number of voters increase, the rating of a movie regularizes and approaches towards a value that is reflective of the movie's quality. It is more difficult to discern the quality of a movie with extremely few voters. Taking these shortcomings into consideration, it is necessary that you come up with a weighted rating that takes into account the average rating and the number of votes it has garnered. Such a system will make sure that a movie with a 9 rating from 100,000 voters gets a (far) higher score than a YouTube Web Series with the same rating but a few hundred voters. Since you are trying to build a clone of IMDB's Top 250, you will use its weighted rating formula as your metric/score. Mathematically, it is represented as follows: Weighted Rating (WR) = $(\frac{v}{v + m} . R) + (\frac{m}{v + m} . C)$ where, • v is the number of votes for the movie; • m is the minimum votes required to be listed in the chart; • R is the average rating of the movie; And • C is the mean vote across the whole report You already have the values to v (vote_count) and R (vote_average) for each movie in the dataset. It is also possible to directly calculate C from this data. What you need to determine is an appropriate value for m, the minimum votes required to be listed in the chart. There is no right value for m. You can view it as a preliminary negative filter that ignores movies which have less than a certain number of votes. The selectivity of your filter is up to your discretion. In this case, you will use the 90th percentile as your cutoff. In other words, for a movie to feature in the charts, it must have more votes than at least 90% of the movies in the list. (On the other hand, if you had chosen the 75th percentile, you would have considered the top 25% of the movies in terms of the number of votes garnered. As the percentile decreases, the number of movies considered increases. Feel free to play with this value and observe the changes in your final chart). As a first step, let's calculate the value of C, the mean rating across all movies: # Calculate C print(C) 5.61820721513 The average rating of a movie on IMDB is around 5.6, on a scale of 10. Next, let's calculate the number of votes, m, received by a movie in the 90th percentile. 
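It can help to see the formula in action on concrete numbers before applying it to the whole dataset. The snippet below is an illustrative aside, not part of the tutorial's pipeline; the vote counts are made up for the example, C is the mean computed above, and m anticipates the value derived in the next step:

def wr(v, R, m=160.0, C=5.618):
    # IMDB weighted rating: shrinks R toward the global mean C when v is small
    return (v / (v + m)) * R + (m / (v + m)) * C

print(wr(v=100000, R=9.0))   # ~8.99: many votes, the score stays near R
print(wr(v=300, R=9.0))      # ~7.82: few votes, the score is pulled toward C

A movie with a hundred thousand votes keeps essentially its raw rating, while the same rating backed by only a few hundred votes is pulled noticeably toward the global mean, which is exactly the behavior described above.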
Next, let's calculate the number of votes, m, received by a movie in the 90th percentile. The pandas library makes this task extremely trivial using the .quantile() method of a pandas Series:

# Calculate the minimum number of votes required to be in the chart, m
m = metadata['vote_count'].quantile(0.90)
print(m)

160.0

Next, you can filter the movies that qualify for the chart, based on their vote counts:

# Filter out all qualified movies into a new DataFrame
q_movies = metadata.copy().loc[metadata['vote_count'] >= m]
q_movies.shape

(4555, 24)

You use the .copy() method to ensure that the new q_movies DataFrame created is independent of your original metadata DataFrame. In other words, any changes made to the q_movies DataFrame do not affect the metadata. You see that there are 4555 movies which qualify to be in this list.

Now, you need to calculate your metric for each qualified movie. To do this, you will define a function, weighted_rating(), and define a new feature score, whose value you'll calculate by applying this function to your DataFrame of qualified movies:

# Function that computes the weighted rating of each movie
def weighted_rating(x, m=m, C=C):
    v = x['vote_count']
    R = x['vote_average']
    # Calculation based on the IMDB formula
    return (v/(v+m) * R) + (m/(m+v) * C)

# Define a new feature 'score' and calculate its value with weighted_rating()
q_movies['score'] = q_movies.apply(weighted_rating, axis=1)

Finally, let's sort the DataFrame based on the score feature and output the title, vote count, vote average and weighted rating or score of the top 15 movies.

# Sort movies based on score calculated above
q_movies = q_movies.sort_values('score', ascending=False)

# Print the top 15 movies
q_movies[['title', 'vote_count', 'vote_average', 'score']].head(15)

title | vote_count | vote_average | score
314 The Shawshank Redemption | 8358.0 | 8.5 | 8.445869
834 The Godfather | 6024.0 | 8.5 | 8.425439
10309 Dilwale Dulhania Le Jayenge | 661.0 | 9.1 | 8.421453
12481 The Dark Knight | 12269.0 | 8.3 | 8.265477
2843 Fight Club | 9678.0 | 8.3 | 8.256385
292 Pulp Fiction | 8670.0 | 8.3 | 8.251406
522 Schindler's List | 4436.0 | 8.3 | 8.206639
23673 Whiplash | 4376.0 | 8.3 | 8.205404
5481 Spirited Away | 3968.0 | 8.3 | 8.196055
2211 Life Is Beautiful | 3643.0 | 8.3 | 8.187171
1178 The Godfather: Part II | 3418.0 | 8.3 | 8.180076
1152 One Flew Over the Cuckoo's Nest | 3001.0 | 8.3 | 8.164256
351 Forrest Gump | 8147.0 | 8.2 | 8.150272
1154 The Empire Strikes Back | 5998.0 | 8.2 | 8.132919
1176 Psycho | 2405.0 | 8.3 | 8.132715

You see that the chart has a lot of movies in common with the IMDB Top 250 chart: for example, your top two movies, "Shawshank Redemption" and "The Godfather", are the same as on IMDB.
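If you want to reuse this pipeline, for example to compare different percentile cutoffs as suggested earlier, the whole simple recommender can be folded into one helper. This is a sketch rather than part of the original tutorial; it assumes the same metadata DataFrame and column names used above:

def build_chart(df, percentile=0.90, top_n=15):
    # Global mean rating and the vote-count cutoff for this percentile
    C = df['vote_average'].mean()
    m = df['vote_count'].quantile(percentile)
    qualified = df.copy().loc[df['vote_count'] >= m]
    v = qualified['vote_count']
    R = qualified['vote_average']
    # IMDB weighted rating, vectorized over the qualified movies
    qualified['score'] = (v / (v + m)) * R + (m / (v + m)) * C
    return qualified.sort_values('score', ascending=False)[
        ['title', 'vote_count', 'vote_average', 'score']].head(top_n)

# e.g. build_chart(metadata, percentile=0.75) for a more permissive chart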
This will give you a matrix where each column represents a word in the overview vocabulary (all the words that appear in at least one document) and each row represents a movie, as before.

In its essence, the TF-IDF score is the frequency of a word occurring in a document, down-weighted by the number of documents in which it occurs. This is done to reduce the importance of words that occur frequently in plot overviews and, therefore, their significance in computing the final similarity score.

Fortunately, scikit-learn gives you a built-in TfidfVectorizer class that produces the TF-IDF matrix in a couple of lines.

#Import TfidfVectorizer from scikit-learn
from sklearn.feature_extraction.text import TfidfVectorizer

#Define a TF-IDF Vectorizer Object. Remove all english stop words such as 'the', 'a'
tfidf = TfidfVectorizer(stop_words='english')

#Replace NaN with an empty string
metadata['overview'] = metadata['overview'].fillna('')

#Construct the required TF-IDF matrix by fitting and transforming the data
tfidf_matrix = tfidf.fit_transform(metadata['overview'])

#Output the shape of tfidf_matrix
tfidf_matrix.shape

(45466, 75827)

You see that over 75,000 different words were used to describe the 45,000 movies in your dataset.

With this matrix in hand, you can now compute a similarity score. There are several candidates for this, such as the Euclidean, the Pearson and the cosine similarity scores. Again, there is no right answer to which score is the best. Different scores work well in different scenarios, and it is often a good idea to experiment with different metrics.

You will be using the cosine similarity to calculate a numeric quantity that denotes the similarity between two movies. You use the cosine similarity score since it is independent of magnitude and is relatively easy and fast to calculate, especially when used in conjunction with TF-IDF scores, as explained in a moment. Mathematically, it is defined as follows:

$\text{cosine}(x,y) = \frac{x \cdot y^\intercal}{\|x\| \cdot \|y\|}$

Since you have used the TF-IDF vectorizer, calculating the dot product will directly give you the cosine similarity score. Therefore, you will use sklearn's linear_kernel() instead of cosine_similarity() since it is faster.

# Import linear_kernel
from sklearn.metrics.pairwise import linear_kernel

# Compute the cosine similarity matrix
cosine_sim = linear_kernel(tfidf_matrix, tfidf_matrix)

You're going to define a function that takes in a movie title as an input and outputs a list of the 10 most similar movies. Firstly, for this, you need a reverse mapping of movie titles and DataFrame indices. In other words, you need a mechanism to identify the index of a movie in your metadata DataFrame, given its title.

#Construct a reverse map of indices and movie titles
indices = pd.Series(metadata.index, index=metadata['title']).drop_duplicates()

You are now in a good position to define your recommendation function. These are the steps you'll follow (a small sanity check on the similarity matrix follows right after this list, before the full function):

• Get the index of the movie given its title.
• Get the list of cosine similarity scores for that particular movie with all movies. Convert it into a list of tuples where the first element is its position and the second is the similarity score.
• Sort the aforementioned list of tuples based on the similarity scores; that is, the second element.
• Get the top 10 elements of this list. Ignore the first element as it refers to self (the movie most similar to a particular movie is the movie itself).
• Return the titles corresponding to the indices of the top elements.
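Before assembling these steps into a function, here is a quick sanity check, an aside not in the original walkthrough. Because TfidfVectorizer L2-normalizes its rows by default, linear_kernel really does return cosine similarities, so any movie with a non-empty overview should match itself perfectly ('Toy Story' is used only as an example title):

import numpy as np

idx = indices['Toy Story']
print(cosine_sim[idx, idx])              # ~1.0: a movie is maximally similar to itself

# Peek at the best non-self match for that movie
best = np.argsort(cosine_sim[idx])[-2]   # [-1] is the movie itself
print(metadata['title'].iloc[best])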
# Function that takes in movie title as input and outputs most similar movies
def get_recommendations(title, cosine_sim=cosine_sim):
    # Get the index of the movie that matches the title
    idx = indices[title]

    # Get the pairwise similarity scores of all movies with that movie
    sim_scores = list(enumerate(cosine_sim[idx]))

    # Sort the movies based on the similarity scores
    sim_scores = sorted(sim_scores, key=lambda x: x[1], reverse=True)

    # Get the scores of the 10 most similar movies
    sim_scores = sim_scores[1:11]

    # Get the movie indices
    movie_indices = [i[0] for i in sim_scores]

    # Return the top 10 most similar movies
    return metadata['title'].iloc[movie_indices]

get_recommendations('The Dark Knight Rises')

12481 The Dark Knight
150 Batman Forever
1328 Batman Returns
15511 Batman: Under the Red Hood
585 Batman
21194 Batman Unmasked: The Psychology of the Dark Kn...
9230 Batman Beyond: Return of the Joker
18035 Batman: Year One
19792 Batman: The Dark Knight Returns, Part 1
3095 Batman: Mask of the Phantasm
Name: title, dtype: object

get_recommendations('The Godfather')

1178 The Godfather: Part II
44030 The Godfather Trilogy: 1972-1990
1914 The Godfather: Part III
23126 Blood Ties
11297 Household Saints
34717 Start Liquidation
10821 Election
38030 A Mother Should Be Loved
17729 Short Sharp Shock
26293 Beck 28 - Familjen
Name: title, dtype: object

You see that, while your system has done a decent job of finding movies with similar plot descriptions, the quality of the recommendations is not that great. "The Dark Knight Rises" returns all Batman movies, while it is more likely that the people who liked that movie are more inclined to enjoy other Christopher Nolan movies. This is something that cannot be captured by your present system.

### Credits, Genres and Keywords Based Recommender

It goes without saying that the quality of your recommender would be increased with the usage of better metadata. That is exactly what you are going to do in this section. You are going to build a recommender based on the following metadata: the 3 top actors, the director, related genres and the movie plot keywords.

The keywords, cast and crew data is not available in your current dataset, so the first step would be to load and merge them into your main DataFrame (the file names below follow the same dataset; adjust the paths as needed):

# Load keywords and credits
credits = pd.read_csv('credits.csv')
keywords = pd.read_csv('keywords.csv')

# Remove rows with bad IDs: keep only rows whose 'id' parses as an integer
metadata = metadata[metadata['id'].astype(str).str.isdigit()]

# Convert IDs to int. Required for merging
keywords['id'] = keywords['id'].astype('int')
credits['id'] = credits['id'].astype('int')
metadata['id'] = metadata['id'].astype('int')

# Merge keywords and credits into your main metadata DataFrame
metadata = metadata.merge(credits, on='id')
metadata = metadata.merge(keywords, on='id')

# Print the first two movies of your newly merged metadata
metadata.head(2)

(Output: the first two rows, Toy Story and Jumanji, of the now 27-column DataFrame; the three new columns cast, crew and keywords contain stringified lists of dictionaries such as [{'cast_id': 14, 'character': 'Woody (voice)', ...}] and [{'id': 931, 'name': 'jealousy'}, ...].)

From your new features, cast, crew and keywords, you need to extract the three most important actors, the director and the keywords associated with each movie. Right now, your data is present in the form of "stringified" lists. You need to convert them into a form that is usable for you.

# Parse the stringified features into their corresponding python objects
from ast import literal_eval

features = ['cast', 'crew', 'keywords', 'genres']
for feature in features:
    metadata[feature] = metadata[feature].apply(literal_eval)

Next, you write functions that will help you to extract the required information from each feature. First, you'll import the NumPy package to get access to its NaN constant. Next, you can use it to write the get_director() function:

# Import Numpy
import numpy as np

# Get the director's name from the crew feature. If director is not listed, return NaN
def get_director(x):
    for i in x:
        if i['job'] == 'Director':
            return i['name']
    return np.nan

# Returns the list of the top 3 elements, or the entire list, whichever is shorter.
def get_list(x):
    if isinstance(x, list):
        names = [i['name'] for i in x]
        # Check if more than 3 elements exist. If yes, return only the first three.
        if len(names) > 3:
            names = names[:3]
        return names

    # Return empty list in case of missing/malformed data
    return []

# Define new director, cast, genres and keywords features that are in a suitable form.
metadata['director'] = metadata['crew'].apply(get_director)

features = ['cast', 'keywords', 'genres']
for feature in features:
    metadata[feature] = metadata[feature].apply(get_list)

# Print the new features of the first 3 films
metadata[['title', 'cast', 'director', 'keywords', 'genres']].head(3)

title | cast | director | keywords | genres
0 Toy Story | [Tom Hanks, Tim Allen, Don Rickles] | John Lasseter | [jealousy, toy, boy] | [Animation, Comedy, Family]
1 Jumanji | [Robin Williams, Jonathan Hyde, Kirsten Dunst] | Joe Johnston | [board game, disappearance, based on children'...] | [Adventure, Fantasy, Family]
2 Grumpier Old Men | [Walter Matthau, Jack Lemmon, Ann-Margret] | Howard Deutch | [fishing, best friend, duringcreditsstinger] | [Romance, Comedy]

The next step would be to convert the names and keyword instances into lowercase and strip all the spaces between them. This is done so that your vectorizer doesn't count the Johnny of "Johnny Depp" and "Johnny Galecki" as the same. After this processing step, the aforementioned actors will be represented as "johnnydepp" and "johnnygalecki" and will be distinct to your vectorizer.

# Function to convert all strings to lower case and strip names of spaces
def clean_data(x):
    if isinstance(x, list):
        return [str.lower(i.replace(" ", "")) for i in x]
    else:
        # Check if director exists. If not, return empty string
        if isinstance(x, str):
            return str.lower(x.replace(" ", ""))
        else:
            return ''

# Apply clean_data function to your features.
features = ['cast', 'keywords', 'director', 'genres']
for feature in features:
    metadata[feature] = metadata[feature].apply(clean_data)

You are now in a position to create your "metadata soup", which is a string that contains all the metadata that you want to feed to your vectorizer (namely actors, director and keywords).

def create_soup(x):
    return ' '.join(x['keywords']) + ' ' + ' '.join(x['cast']) + ' ' + x['director'] + ' ' + ' '.join(x['genres'])

# Create a new soup feature
metadata['soup'] = metadata.apply(create_soup, axis=1)

The next steps are the same as what you did with your plot description based recommender. One important difference is that you use CountVectorizer() instead of TF-IDF. This is because you do not want to down-weight the presence of an actor/director if he or she has acted in or directed relatively more movies; doing so wouldn't make much intuitive sense here.
# Import CountVectorizer and create the count matrix
from sklearn.feature_extraction.text import CountVectorizer

count = CountVectorizer(stop_words='english')
count_matrix = count.fit_transform(metadata['soup'])

# Compute the Cosine Similarity matrix based on the count_matrix
from sklearn.metrics.pairwise import cosine_similarity

cosine_sim2 = cosine_similarity(count_matrix, count_matrix)

# Reset index of your main DataFrame and construct reverse mapping as before
metadata = metadata.reset_index()
indices = pd.Series(metadata.index, index=metadata['title'])

You can now reuse your get_recommendations() function by passing in the new cosine_sim2 matrix as your second argument.

get_recommendations('The Dark Knight Rises', cosine_sim2)

12589 The Dark Knight
10210 Batman Begins
9311 Shiner
9874 Amongst Friends
7772 Mitchell
516 Romeo Is Bleeding
11463 The Prestige
24090 Quicksand
41063 Sara
Name: title, dtype: object

get_recommendations('The Godfather', cosine_sim2)

1934 The Godfather: Part III
1199 The Godfather: Part II
15609 The Rain People
18940 Last Exit
34488 Rege
35802 Manuscripts Don't Burn
35803 Manuscripts Don't Burn
8001 The Night of the Following Day
18261 The Son of No One
28683 In the Name of the Law
Name: title, dtype: object

You see that your recommender has been successful in capturing more information due to more metadata and has given you (arguably) better recommendations. There are, of course, numerous ways of playing with this system in order to improve recommendations. Some suggestions:

• Introduce a popularity filter: this recommender would take the list of the 30 most similar movies, calculate the weighted ratings (using the IMDB formula from above), sort movies based on this rating and return the top 10 movies.
• Other crew members: other crew member names, such as screenwriters and producers, could also be included.
• Increasing the weight of the director: to give more weight to the director, he or she could be mentioned multiple times in the soup to increase the similarity scores of movies with the same director.

You can find these ideas implemented in this notebook.

## Collaborative Filtering with Python

In this tutorial, you have learnt how to build your very own Simple and Content Based Movie Recommender Systems. There is also another extremely popular type of recommender known as collaborative filters.

Collaborative filters can further be classified into two types:

• User-based Filtering: these systems recommend products to a user that similar users have liked. For example, let's say Alice and Bob have a similar interest in books (that is, they largely like and dislike the same books). Now, let's say a new book has been launched into the market and Alice has read and loved it. It is therefore highly likely that Bob will like it too, and therefore the system recommends this book to Bob.
• Item-based Filtering: these systems are extremely similar to the content recommendation engine that you built. These systems identify similar items based on how people have rated them in the past. For example, if Alice, Bob and Eve have given 5 stars to The Lord of the Rings and The Hobbit, the system identifies the items as similar. Therefore, if someone buys The Lord of the Rings, the system also recommends The Hobbit to him or her.

You will not be building these systems in this tutorial, but you are already familiar with most of the ideas required to do so. A good place to start with collaborative filters is by examining the MovieLens dataset, which can be found here.
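As a taste of what that would look like, here is a minimal collaborative-filtering sketch using the scikit-surprise library. It is not part of this tutorial's code; the file name and column names assume the standard MovieLens ratings.csv layout:

# A minimal collaborative-filtering sketch (assumes: pip install scikit-surprise,
# and a MovieLens-style ratings.csv with userId, movieId and rating columns)
import pandas as pd
from surprise import SVD, Dataset, Reader
from surprise.model_selection import cross_validate

ratings = pd.read_csv('ratings.csv')
reader = Reader(rating_scale=(0.5, 5.0))
data = Dataset.load_from_df(ratings[['userId', 'movieId', 'rating']], reader)

# Matrix-factorization model; report RMSE/MAE over 5 folds
algo = SVD()
cross_validate(algo, data, measures=['RMSE', 'MAE'], cv=5, verbose=True)

# Fit on everything and predict the rating user 1 would give movie 302
trainset = data.build_full_trainset()
algo.fit(trainset)
print(algo.predict(uid=1, iid=302).est)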
## Conclusion

In this tutorial, you have covered how to build simple as well as content-based recommenders. Hopefully, you are now in a good position to make improvements on the basic systems you built and to experiment with other kinds of recommenders (such as collaborative filters). I hope you had as much fun reading this as I had writing it. Happy recommending!
2018-11-18 09:57:11
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.21745245158672333, "perplexity": 4117.91956575853}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-47/segments/1542039744348.50/warc/CC-MAIN-20181118093845-20181118115845-00499.warc.gz"}
http://www.dfg-spp1324.de/nuhagtools/talks/talks_details.php?id=1456&nl=Y
# T:A:L:K:S

title: On a $p(t,\omega,x)$-Laplace evolution equation with a stochastic force
name: Zimmermann
first name: Aleksandra
location/conference: RDSN14

abstract: We are interested in the existence and uniqueness of solutions to the following nonlinear parabolic problem (P) of $p(t,\omega,x)$-Laplace type:

$$du - \operatorname{div}\left(|Du|^{p(t,\omega,x)-2}\,Du\right)dt = h(\cdot,u)\,dW \quad \text{in } \Omega\times(0,T)\times D$$
$$u = 0 \quad \text{on } \Omega\times(0,T)\times\partial D$$
$$u(0,\cdot) = u_0 \quad \text{in } L^2(D)$$

When $p$ is a fixed exponent, the problem is a classical one and can be solved by monotonicity arguments; Minty's trick is available thanks to the possibility of writing an Itô formula in this setting. Since the Lebesgue and Sobolev spaces with variable exponent in the variables $t$, $\omega$ and $x$ are Orlicz-type spaces and do not fit into the classical framework of Bochner spaces, the arguments have to be adapted to the new setting. In particular, a proof of the Itô formula by time discretization is out of range, and since the variable exponent Lebesgue spaces are not stable under partial convolution in only the variable $x$, a proof of the Itô formula using convolution fails as well.

Our steps are the following ones: First, we consider a singular perturbation of our problem (P) with a 'nice' function $h$ independent of $u$, and we obtain a stability result for the solution with respect to $h$. Passing to the limit with respect to the singular perturbation, we prove that problem (P) is well posed for an additive, 'nice' noise function $h$. Then we prove the result for any $h \in N^2_W(0,T;L^2(D))$ by a density argument. In the last step, we solve problem (P) for a multiplicative noise by a fixed-point argument.
2018-01-23 08:21:32
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9302541613578796, "perplexity": 422.0181256198319}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-05/segments/1516084891791.95/warc/CC-MAIN-20180123072105-20180123092105-00161.warc.gz"}
http://www.ck12.org/book/CK-12-Concept-Middle-School-Math-Grade-6/r4/section/6.1/
# 6.1: Fraction Rounding to the Nearest Half

Difficulty Level: At Grade | Created by: CK-12

Have you ever measured something? Was the measurement perfect or was it too small? Well, if you were working on a construction site, you would be doing a lot of measuring, so rounding up or down when things weren't perfect would be very important.

These part measurements are also fractions, just like you have learned about in other Concepts. Think about how you would round three-fourths, one-sixth or five-tenths as you work through this Concept. In this Concept, you will learn how to round a fraction to the nearest half.

### Guidance

We use fractions in everyday life all the time. Remember that when we talk about a fraction, we are talking about a part of a whole. Often times, we need to use an exact fraction, but sometimes, we can use an estimate. If you think back to our earlier work on estimation, you will remember that an estimate is an approximate value that makes sense or is reasonable given the problem.

What fraction does this picture represent? If we wanted to be exact about this fraction, we could say that there are $\frac{12}{20}$ shaded boxes. However, it might be simpler to say that about half of the boxes are shaded. We call this rounding to the nearest half.

How do we round to the nearest half?

To round a fraction to the nearest half, we need to think in terms of halves. We often think in terms of wholes, so this is definitely a change in our thinking. There are three main values to round to when we round a fraction to the nearest half.

The first is zero. We can think of 0 as $\frac{0}{2}$, or zero halves. The second value is $\frac{1}{2}$, or one half. The third value is 1, which can be thought of as $\frac{2}{2}$, or two halves.

When rounding to the nearest half, we round the fraction to whichever half the fraction is closest to on the number line: 0, $\frac{1}{2}$, or 1. If a fraction is equally close to two different halves, we round the fraction up.

$\frac{5}{6}$

To figure out which value five-sixths is closest to, we must first think in terms of sixths. Since the denominator is six, that means that the whole is divided into six parts. The fraction $\frac{0}{6}$ would be the value of zero, $\frac{3}{6}$ would be the value of $\frac{1}{2}$, and $\frac{6}{6}$ is the same as 1. The fraction $\frac{5}{6}$ is closest to $\frac{6}{6}$, so rounding to the nearest half would be rounding to 1.

Try a few of these on your own. Round each fraction to the nearest half.
#### Example A

$\frac{1}{5}$

Solution: 0

#### Example B

$\frac{3}{8}$

Solution: $\frac{1}{2}$

#### Example C

$\frac{7}{9}$

Solution: 1

Now let's go back and think about those measurements from the very beginning of the Concept: three-fourths, one-sixth and five-tenths.

We can round three-fourths to 1. We can round one-sixth down to 0. We can round five-tenths to one-half.

### Vocabulary

Here are the vocabulary words in this Concept.

Fraction: a part of a whole, written with a fraction bar, a numerator and a denominator.
Estimate: to find an approximate answer that is reasonable and makes sense given the problem.

### Guided Practice

Here is one for you to try on your own.

Jessica discovered that $\frac{4}{9}$ of a pan of brownies had been eaten. Is the amount of brownies left closer to one-half or one whole?

If $\frac{4}{9}$ of the pan had been eaten, then that means that $\frac{5}{9}$ of the pan had not been eaten. This is closer to one-half of the pan of brownies.

### Video Review

Estimating with fractions - This video is a secondary skill to rounding fractions. It involves estimating with fractions.

### Practice

Directions: Round each fraction to the nearest half.

1. $\frac{2}{15}$
2. $\frac{1}{7}$
3. $\frac{8}{9}$
4. $\frac{7}{15}$
5. $\frac{6}{13}$
6. $\frac{10}{11}$
7. $\frac{7}{8}$
8. $\frac{4}{7}$
9. $\frac{3}{7}$
10. $\frac{1}{19}$
11. $\frac{2}{10}$
12. $\frac{4}{5}$
13. $\frac{2}{3}$
14. $\frac{2}{11}$
15. $\frac{1}{9}$
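For readers who like to check answers programmatically, here is a small sketch (not part of the lesson itself) that applies the rounding rule above, including the tie-goes-up convention:

from fractions import Fraction

def round_to_nearest_half(frac):
    # Candidate halves: 0, 1/2, 1
    halves = [Fraction(0), Fraction(1, 2), Fraction(1)]
    # Smallest distance wins; on a tie, the larger half wins (round up)
    return max(halves, key=lambda h: (-abs(frac - h), h))

print(round_to_nearest_half(Fraction(5, 6)))   # 1
print(round_to_nearest_half(Fraction(3, 8)))   # 1/2
print(round_to_nearest_half(Fraction(1, 4)))   # tie between 0 and 1/2 -> 1/2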
2016-10-25 01:54:56
{"extraction_info": {"found_math": true, "script_math_tex": 35, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6455397009849548, "perplexity": 1528.6183932580332}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-44/segments/1476988719843.44/warc/CC-MAIN-20161020183839-00172-ip-10-171-6-4.ec2.internal.warc.gz"}
https://www.speedsolving.com/threads/nano-timer-android-speedcubing-timer.49164/page-3
# Nano Timer, Android speedcubing timer

#### Chenkar ##### Member
Maybe you can add in the option to make a 'solve type' for multibld/relays, so that you can apply penalties to the individual cubes (which would be the steps).

#### NanoTimer ##### Member
I have a multibld point in my todo list but it's low priority for now compared to others. I don't know exactly how I would implement it yet, but it would generate a number of scrambles based on the selected number of cubes, adjust the total time and make a sound when the time has passed. But I still have to think about how it would fit in the app as it's quite different from the rest, with penalty handling among a "set" of cubes etc.

#### Chenkar ##### Member
Ok, I'll look forward to the multibld timer (but I first have to relearn bld lol)

#### NanoTimer ##### Member
Version 1.0.2 of Nano Timer is now online! The following functionalities/changes have been added:
- Scrambles for Square-1 and Clock
- Average of 50
- Shortcut to solve types editing from solve types list
- Improved scrambles format based on type and orientation
- Displayed averages in history details
- Option to choose big cubes notation system
- Set Skewb and Pyraminx scramble sizes to 15
- Minor bug fixes and improvements

#### Tacito ##### Member
When I'm using the hold-to-inspect feature, if I turn on auto-rotation on my phone, the inspection screen glitches if the phone rotates while inspecting. Just a bug I've found, but it's not a major issue.
Edit: It only happens if you change the rotation from horizontal to vertical. Vertical to horizontal is fine.

#### NanoTimer ##### Member
Yes, it is indeed the case when switching from horizontal to vertical during inspection. There is a problem with the way Android handles its views on orientation changes when the screen is pressed. That is the reason why I prevented switching orientation during inspection, but it doesn't seem to have any effect when starting from landscape mode. I'll look into it for the next release. Thanks for commenting!

#### UnsolvedCypher ##### Member
I really love the features on this timer, but the interface seems kind of ... dated. Could you update this to meet Holo or Material design guidelines? If this timer looked a little nicer, it would be one of the best!

#### NanoTimer ##### Member
I have indeed received multiple pieces of feedback about the oldish interface.
I will work on it to try to make it nicer and more in line with the current Android look-and-feel. This should be up either in the next version or in the one after, depending on the time it takes to finish implementing the other changes.

#### NanoTimer ##### Member
Version 1.1.0 is now online with the following changes:
- Improved graphical interface: Updated to the recent Android GUI standards to make it prettier and more convenient to use.
- New "Session details" timer window: Accessible from the timer screen menu, this new dialog window allows you to see the details of the ongoing session (requires having started a session for that solve type). You'll be able to see the session average, mean, solves count and also the detailed list of solve times with best/worst times.
- Total solves count now shown in history: I received multiple requests to add the solve type total solves count. This is now directly displayed in the history screen.
Tell me what you guys think about the new GUI, and if you have any comments/suggestions.

#### Chenkar ##### Member
I really like your timer. I got a new pb yesterday, 13.06, and I noticed that in the menu you can barely see it. Maybe you could add a graph feature to see stats differently.

#### NanoTimer ##### Member
I am not sure I understand what you mean. Are you talking about the screen with the history, or the timer screen? When you get a new record, the "Lifetime best" text will become yellow and you'll also see "New record!" displayed in the top bar. But it will become normal once you leave the screen or if you start a new solve.

#### Chenkar ##### Member
I mean with the history, so that you can visualize your progress.

#### NanoTimer ##### Member
It is in my plans to have graphs to be able to see the progression. It will be implemented, but not yet in the next release because some changes are more urgent. I might also add something to the history items to make it clear when there is a personal best (either coloring or an icon next to the time).
The priorities for the next version are random-state scrambles for 3x3x3 (maybe also for 2x2x2) with the possibility to pre-generate them, but also timer screen adaptation to display last/best averages instead of means, and more details in the session details window.
I added a "graph" point in my todo list to make sure to not forget about it.

#### Chenkar ##### Member
Okay.
I can't wait for all these updates!

#### NanoTimer ##### Member
Looking for beta testers

I am working on a major update of Nano Timer and I am looking for beta testers who are interested in trying it out. This new version will include:
• Random-state scrambles for 3x3x3 and 2x2x2
• Option to choose different scramble qualities (= max number of moves) to suit both new and old devices
• Pre-generation of scrambles to allow you to generate a quantity of scrambles when you decide to, to preserve your battery (by default, a cache of 50 scrambles is kept in memory, and new ones are generated when that cache gets smaller than 25)
• Timer screen redesign: the bottom table will now display last/best averages instead of means. The two current "Avg of" fields will now be "Solves count" and "Mean of 3".
• A new solve type mode made specially for blind. You will be able to create a special blind solve type to have a timer screen adapted for it. That screen displays last/best means of 3, but also the global accuracy (success rate percentage), the averages of your last 12/50/100 successes, and the accuracy for the last 12/50/100 solves.

Everyone is welcome to beta test; it would also be nice if some blind solvers would apply to test the new blind mode.

There is currently a beta version with random-state scrambles for 3x3x3. The other changes are implemented for the most part, but I am still working on the last changes. The version with the above features should be up within a week or so.

If you are interested in beta testing, you can send me the mail address you are using in the Play Store (send me a PM). I'll add you to the beta testers list and you'll receive the update through the Play Store like any normal update.

I will be happy to hear what you guys think of it and to hear your suggestions!

#### NanoTimer ##### Member
Version 1.1.1 is now released! This update contains some major changes:
• Random-state scrambles for 3x3x3 and 2x2x2
• Scrambles pre-generation
• Timer screen fields changes (means to averages, mean of 3, solves count)
• New blind solve mode with additional blind-related statistics (accuracy, last/best mean of 3, average of successes)
• Session details window improvements
• Various GUI-related fixes and improvements

#### NanoTimer ##### Member
Nano Timer Pro is now available, with a startup discount from $1.40 to $0.99, valid for one week (until the 16th of January 2015).

Nano Timer Pro unlocks the following features from the free app:
• Export times to CSV
• To come up next: progression graphs

In addition to that, version 1.1.2 of the free app is also released with the following features/changes:
• Added random-state scrambles customization options
• Option to automatically generate scrambles when the device is charging
• Added support for Pro version
• Various small fixes and improvements

#### Chenkar ##### Member
Sweet! Already bought, and realised that I don't know what CSV is lol.
Suggestion: maybe add a way to see the individual sessions after you've started a new one.

#### NanoTimer ##### Member
Hi Chenkar, thank you! CSV is just a simple file format that you can read by opening it with a file reader, or with Excel. The first line is the column headers, while the others contain the data itself.
Next step is to add graphics, but I'll take your suggestion into account!

#### BboyArchon ##### Member
Loved it. Bought the pro version a few days ago and I just gave you 5 stars on the store. Keep improving it!
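On the CSV point above: for anyone who wants to do more with an exported times file than open it in Excel, a few lines of Python are enough. This is only a sketch; the file name and column names are placeholders, so check the header line of your own Nano Timer export:

import pandas as pd

# Load an exported times CSV (first line = column headers, as described above)
times = pd.read_csv('nanotimer_export.csv')

# 'time' is a hypothetical column name standing in for the solve-time column
print(times['time'].mean())                   # session mean
print(times['time'].rolling(5).mean().min())  # best rolling mean of 5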
2019-09-15 14:31:52
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.2576298713684082, "perplexity": 2467.1899376076394}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-39/segments/1568514571506.61/warc/CC-MAIN-20190915134729-20190915160729-00016.warc.gz"}
http://crypto.stackexchange.com/questions?page=88&sort=votes
# All Questions

124 views ### What kind of cryptography should I use?
I have a trusted third party A that issues an access token (xml file) to an untrusted client C that uses this token to log into an untrusted server S and access the authorized files. I want only ...

353 views ### Smart Card Basics
I want to implement some of the basic encryption algorithms on a smart card. Could anybody guide me on how to program a smart card, which tools (hardware and software) I should have, and whether these tools ...

378 views ### When making public key fingerprints - is a sha1 hash still a good idea?
I'm thinking about trying to save some space (and readability) when referencing 2k and 4k public keys (millions of them) by storing the fingerprint in some places instead of the full public key. ...

65 views ### Why does the server in S/KEY authentication only store a single password?
I've been reading about the S/KEY One-Time Password system on wikipedia here and was wondering why the server only stores a single password and not the list of one-time passwords like the client does. ...

253 views ### Key sizes for discrete logarithm based methods
I have a question regarding the key generation process of methods that are based on the discrete logarithm problem. This site gives some good insights, but I don't fully grasp it I think: ...

158 views ### Is Elliptic Curve a DH function or PKI?
Can we reuse the same ECC key in TLS long-term, or must it be used just once? (I mean, can we use ECC like RSA?) Is there a patent-free ECC implementation?

78 views
I am reading this paragraph and I have a doubt. "An adversary to PKC $\Pi$ is given by two probabilistic polynomial time algorithms, $A = (A1; A2)$. In the first stage, the "find" stage, the ...

72 views ### What does "securely realize" mean?
I was wondering what "securely realizes" means. I see this in some cryptographic papers but I don't know what it means for a protocol to "securely realize" a function $F$. Is it just a fancy way of ...

1k views
Looking at the first step of AES encryption, I see that we XOR the key with the plaintext block. Why is the actual key involved at all; why not just use the round keys derived from the key schedule?

171 views ### Is there an efficient way to hide the encrypted plaintext length with a block cipher?
In block cipher modes of operation for encryption, on input of a plaintext of $N$ blocks (we assume that the input size is always a multiple of the block size: $N·16$ bytes), the size of the ...

116 views ### Security relevance of random factor in Paillier
In the Paillier cryptosystem [1] the encryption of $m \in \mathbb{Z}_N$ with randomness $r \in \mathbb{Z}_n^*$ is $c = g^m r^n \bmod{n^2}$. The additive-homomorphic property of the system shows that ...

623 views ### What goals is homomorphic encryption aiming to solve?
As I understand from this article about homomorphic encryption, it mainly aims to enhance the security of cloud computing. We should be able to encrypt data and send it to the cloud. After it is sent, ...

821 views ### Fastest multiplication algorithm for efficient exponentiation in C++?
First, I apologize if this question is better tailored for SO, but since I'm using the method for crypto stuff, I thought I'd ask anybody that might know here. The Karatsuba algorithm does a pretty good ...

177 views ### Is the following key stretching algorithm as memory hard as I think it is?
I'm having some fun designing a key stretching algorithm that can be implemented in pure Python.
It's built entirely out of the standard library's hash functions in an attempt to at least wrest some ...

289 views ### Measuring Shannon's diffusion
Shannon's idea of diffusion is fundamental to cryptography. Besides being a descriptive idea, is there any work on measuring or expressing it? Saying something like "System A has more diffusion than ...

183 views ### Request for 1024-bit primes $p$, subgroup $q$ and subgroup generator $g$
I need to find a prime $p$ of $1024$ bits with a $160$-bit subgroup of size $q$, such that $q|p-1$, and $g$ is the generator of the subgroup of size $q$. I'm looking for the numeric values of $p$, $q$ ...

288 views ### What are some different cryptography methods?
Some of the most effective cryptography methods and algorithms are based on factoring large prime numbers (e.g. RSA). I'm curious whether there are some other cryptography methods. Something that is ...

581 views ### Kryptos : K2. What is the origin of the "abscissa" keyword?
I'm studying the Kryptos sculpture with its cryptographic puzzles K1 to K4. Similar to the "palimsest" keyword for K1, the keyword "abscissa" for K2 was determined by brute-force. To better ...

247 views ### Understanding padding oracles - is an attack plausible in my scenario?
I have a scheme that, long story short, uses AES in CBC mode to encrypt third-party credentials for user accounts with a password-derived key. It's been mentioned that the use of CBC mode is a ...

113 views ### Polynomials and efficient computability
In public key crypto, the popular definitions of security (CPA, CCA1,2) depend on PPT adversaries. I'm trying to understand why adversaries should be PPT. It's clear that adversaries should be at ...

144 views ### Inverse element in Paillier cryptosystem
As I know, in the Paillier cryptosystem, the encryption $c$ of a message $m$ is calculated as $c=g^m r^n \bmod n^2$. Now, I am wondering if I can derive $g^m \bmod n^2$ given that I know $c$, $r$, and ...

295 views ### Non-Interactive Zero-Knowledge-Proof for discrete logarithm?
In a non-interactive zero-knowledge proof, the challenge is chosen by the prover. I am trying to find a non-interactive zero-knowledge proof based on the following problem: DISCRETE LOGARITHM ...

661 views ### Weakness in using only one RSA key pair for two-way communication?
In Alice/Bob/Cindy terms (EDIT: and with a little more detail): Alice and Bob have each securely obtained one key of an RSA keypair from a trusted third party. Alice has one key ($e$ and $n$), Bob ...

1k views ### How does a client verify a server certificate?
As far as I know, when I request a certificate from Verisign (for example), and after they have verified that I am who I claim to be, they create a certificate (for me) which contains the digital signature and public ...

1k views ### Blowfish: hex digits of pi used for s-boxes?
Preface: here is the official site for the Blowfish algorithm: http://www.schneier.com/blowfish.html The Blowfish algorithm uses an s-box, which consists of hex digits of pi (found here: ...

284 views ### How can I prove in zero knowledge that an ElGamal shuffle is correct for a special setting? [closed]
In a special ElGamal encryption scheme, every user has an ElGamal encryption key-pair using the same cyclic group $G$ and generator $g$. The system has a special function: ...

251 views ### Real world use cases of Multi Party Computation
Most of the research papers give imaginary applications of multi-party computation.
Either they talk about the millionaire's problem or two or more corporations willing to compute some intrusion detection ...

103 views ### Is secure remote snap possible?
Scenario: We have a central server $S$. We have a number of peripheral servers $P_i$. We have some individuals $U_j$. A given individual may be "known" to one or more peripheral servers. Each ...

102 views ### Conditional entropy
Suppose I have a file with random binary strings of the same length in each line. If I'm computing the conditional entropy $H(Y|X)$, where $Y$ is the variable string of fixed length $l$ which is ...

101 views ### How to hide bit frequency of the inter-packet delays covert channel?
I encode a hidden message as different delays between packets of another data stream. My implementation now is that for each bit, if the bit is 0, an inter-packet ...

366 views ### Can we construct a fully homomorphic encryption scheme based on a non-circuit approach?
At present, all FHE schemes are constructed based on a circuit approach. Can we construct a fully homomorphic encryption scheme based on a non-circuit approach? Is Polly Cracker a non-circuit approach?

332 views ### Is there a method to break an EC curve for all key-pairs (Q,d) such that (Q=d*G) faster than breaking every single key-pair?
Related to this question: Is there any memory trade-off that helps such an attack? Obviously, if the field size is very small (say 40 bits) it's possible, but what if the field size is 160 bits long? Or ...

295 views ### Can iterated key expansion in Blowfish slow down brute-force attacks on small key sizes?
Suppose I have to use 64-bit keys for encryption (e.g. to comply with export restrictions). For this question, assume this key is truly random, and the encryption algorithm is Blowfish. Blowfish key ...

535 views ### Cryptanalysing Affine cipher
I am trying to cryptanalyse a ciphertext encrypted by the Affine cipher. The encryption formula is: $c = f(x) = (ax+b)\bmod m$, where $a$ and $b$ are unknown constants; $x$ is a plaintext symbol, and ...

123 views ### Random decomposition of symmetric key
Considering a symmetric cipher (i.e. AES in counter mode), is it possible for any given key to be randomly decomposed into other keys without knowing the message or the ciphertext, such as: ...

530 views ### What would be the most effective way to brute force a 16 char AES key?
I have a file that is encrypted in AES using a 16 char string. The string is (a-zA-Z0-9) and .,?!. Also, it only contains words ...

363 views ### Other than brute force, are there any attacks on Threefish-512 using only a single known plaintext block?
As per the title, other than brute force, are there any attacks on Threefish-512 using only a single plaintext block? Are there any attacks like this on any other cipher?

346 views ### RC4 S-Box and Keystream
I'm studying the RC4 algorithm and I have the following questions: On all questions, assume that an expanded (2048-bit) key is used, and that the first 4096 bytes of the keystream are discarded. ...

508 views ### OpenPGP Public-Key Encrypted Session Key Packet Key ID generation
I'm probably just not reading something again, but: RFC 4880 says that an OpenPGP Public-Key Encrypted Session Key Packet (tag 1) is made up of ...

445 views ### Does the MixColumns step come before or after AddRoundKey in AES decryption?
I found these images depicting the AES decryption process: In the first image, the MixColumns step comes before the AddRoundKey step, while in the second image, the AddRoundKey will come before ...
101 views ### Determine complexity of a SAT problem
Is there a standard way to determine the complexity of a specified SAT problem? I'm researching algebraic cryptanalysis and came to solving multivariate quadratic equation systems using CryptoMiniSat. ...

149 views ### Is it possible to take a piece of data in secret?
I want something like this, but in a digital sense: You and others walk into a room. Everyone knows who each of you are, and everyone is doing their best to figure out what piece of paper each ...

286 views ### Which encodings have |encoding key| >> |decoding key|?
I'm looking for an encoding scheme that requires a very large encoding key E (>10MB) and suffices with a relatively small decoding key ...

201 views ### Block ordering and security in a MAC?
To authenticate a message $m = m_1 \,\|\, \dots \,\|\, m_n$, the tag $t := F_k(r) \oplus F_k(m_1) \oplus \dotsb \oplus F_k(m_n)$ is used, where $r$ is a uniform random value in $\{0,1\}^n$ and each $m_i \in \{0,1\}^n$. Even ...

33 views ### Secure multiparty computation of conjunction
Suppose Alice and Bob each have bits a and b, respectively. How can Alice and Bob compute the function ...

15 views ### How does SafeNet MobilePASS generate passwords? (TOTP Variant)
This question is similar to How does SafeNet MobilePASS generate passwords? The question and answer there are related to the HMAC variant of SafeNet MobilePASS. I'm looking for details on the TOTP ...

34 views ### How do we find the subkey out of a differential cryptanalysis?
I understand the differential cryptanalysis up to the "finding the last subkey" part. If XORing with the key doesn't change the differentials, how can testing different keys affect the equations we ...

36 views ### Polynomial division hardware implementation
I am beginning the implementation of the polynomial binary division algorithm. As I understood it, I will be checking the MSB: if it is 1, XOR and shift the sum; if it is 0, I will only shift. What I am not ...

I'm trying to get a grip on how the Schnorr signature works. Suppose Alice sends Trent a tuple $(P, M)$, which contains a payload and a message to be signed by him. She then passes the certificate to Bob ...
2014-12-18 09:41:22
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7066997289657593, "perplexity": 2134.199837902548}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-52/segments/1418802765722.114/warc/CC-MAIN-20141217075245-00049-ip-10-231-17-201.ec2.internal.warc.gz"}
http://physicshelpforum.com/thermodynamics-fluid-mechanics/13383-density-driven-flow.html
Physics Help Forum: Density driven flow (Thermodynamics and Fluid Mechanics)

Jul 18th 2017, 02:00 AM, #1 (Junior Member, Join Date: Jul 2017, Posts: 1)

I'm doing some work on density-induced flow in porous media. My problem contains a single-phase fluid with two components (water and a solute). I'm solving the continuity equation along with the advection-diffusion/dispersion equation, Darcy's law, the equation of state (which links concentration and density), and a viscosity function. Currently I'm trying to figure out whether the Boussinesq approximation is valid in my case. I've scaled the equations and, while doing so, revealed a gap in my understanding of the physics involved.

The continuity (mass conservation) equation for the fluid phase:
$$φ\frac{∂ρ}{∂t}+∇\cdot(ρq)=0$$
I've decided to scale my system of equations using a diffusive time scale $x_0^2/D$. The other relevant scaling factor is the flux $q_0$, and the density is scaled as
$$ρ^*=(ρ-ρ_0)/(ρ_{max}-ρ_0)$$
The scaling produced:
$$\frac{ε}{Pe}\frac{∂ρ}{∂t}+ε\,ρ\,∇\cdot q+∇\cdot q+ε\,q\cdot∇ρ=0$$
where $ε\ll1$ and $Pe\ll ε$. In that case I can see that the accumulation term has the largest magnitude, and therefore $∂ρ/∂t=0$. That does not make sense to me, since density does change with respect to time due to diffusive processes. Clearly I'm missing something basic here. Any ideas? Thanks!
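For readers puzzling over where the scaled form comes from, here is one reconstruction of the nondimensionalization (our addition, not from the original poster; it assumes $ρ = ρ_0(1 + ε\,ρ^*)$ with $ε = (ρ_{max}-ρ_0)/ρ_0$, $t^* = t\,D/x_0^2$, $q^* = q/q_0$, $Pe = q_0 x_0/D$, and the porosity $φ$ absorbed into the time scale):
$$φ\frac{∂ρ}{∂t}+∇\cdot(ρq)=0 \quad\Longrightarrow\quad \frac{ε}{Pe}\frac{∂ρ^*}{∂t^*}+ε\,ρ^*∇^*\cdot q^*+∇^*\cdot q^*+ε\,q^*\cdot∇^*ρ^*=0$$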
2017-10-16 23:58:20
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6002771854400635, "perplexity": 1205.0737856472242}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-43/segments/1508187820487.5/warc/CC-MAIN-20171016233304-20171017013304-00015.warc.gz"}
https://www.physicsforums.com/threads/sequence-of-discontinuous-functions.311973/
# Sequence of discontinuous functions

## Homework Statement
Need an example of a sequence of functions that is discontinuous at every point on [0,1] but converges uniformly to a function that is continuous at every point.

## The Attempt at a Solution
I used Dirichlet's function as the template: f_n(x) = 1/n if x is rational, and 0 if x is irrational. Each f_n(x) is discontinuous at every x in [0,1], and the sequence converges to f(x) = 0. But this seems to be an erroneous analysis, because 1/n eventually goes to 0, so f_n(x) will be continuous as n -> infinity. Can I get help in constructing this?

## Answers and Replies
Dick: You already have a good example. What do you mean, "because 1/n eventually goes to 0 so f_n(x) will be continuous as n -> infinity"? Can you give me an example of a value of n where f_n is continuous?

Reply (quoting Dick: "Can you give me an example of a value of n where f_n is continuous?"): Since lim n->inf (1/n) = 0, as n -> infinity, f_n(x) will be 0 for rationals as well, which means that for any epsilon > 0, if n is large enough, |f_n(x) - 0| < epsilon for rationals as well?

Dick: Less than epsilon, yes. Equal to zero, no. No f_n is equal to zero. Just because the limit is 0, that doesn't mean f_n becomes zero for any finite n.
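A footnote on the mathematics (our addition, not part of the original thread): the uniform convergence is a one-line check,
$$\sup_{x\in[0,1]} |f_n(x) - 0| = \frac{1}{n} \longrightarrow 0 \quad (n\to\infty),$$
while each individual $f_n$ remains discontinuous at every point of $[0,1]$, because every neighbourhood contains rationals (where $f_n = 1/n \neq 0$) and irrationals (where $f_n = 0$).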
2020-02-24 15:22:55
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.903755784034729, "perplexity": 918.9721179219827}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-10/segments/1581875145960.92/warc/CC-MAIN-20200224132646-20200224162646-00016.warc.gz"}
https://en.wikipedia.org/wiki/Active_learning_(machine_learning)
# Active learning (machine learning)

Active learning is a special case of semi-supervised machine learning in which a learning algorithm is able to interactively query the user (or some other information source) to obtain the desired outputs at new data points.[1][2] In the statistics literature it is sometimes also called optimal experimental design.[3] There are situations in which unlabeled data is abundant but manual labeling is expensive. In such a scenario, learning algorithms can actively query the user/teacher for labels. This type of iterative supervised learning is called active learning. Since the learner chooses the examples, the number of examples needed to learn a concept can often be much lower than the number required in normal supervised learning. With this approach, there is a risk that the algorithm is overwhelmed by uninformative examples. Recent developments are dedicated to hybrid active learning[4] and active learning in a single-pass (on-line) context,[5] combining concepts from the field of machine learning (e.g., conflict and ignorance) with adaptive, incremental learning policies in the field of online machine learning.

## Definitions

Let ${\displaystyle T}$ be the total set of all data under consideration. For example, in a protein engineering problem, ${\displaystyle T}$ would include all proteins that are known to have a certain interesting activity and all additional proteins that one might want to test for that activity. During each iteration ${\displaystyle i}$, ${\displaystyle T}$ is broken up into three subsets:

1. ${\displaystyle \mathbf {T} _{K,i}}$: Data points where the label is known.
2. ${\displaystyle \mathbf {T} _{U,i}}$: Data points where the label is unknown.
3. ${\displaystyle \mathbf {T} _{C,i}}$: A subset of ${\displaystyle T_{U,i}}$ that is chosen to be labeled.

Most of the current research in active learning involves the best method to choose the data points for ${\displaystyle T_{C,i}}$.

## Query strategies

Algorithms for determining which data points should be labeled can be organized into a number of different categories:[1]

• Uncertainty sampling: label those points for which the current model is least certain as to what the correct output should be (see the sketch after this list)
• Query by committee: a variety of models are trained on the current labeled data, and vote on the output for unlabeled data; label those points for which the "committee" disagrees the most
• Expected model change: label those points that would most change the current model
• Expected error reduction: label those points that would most reduce the model's generalization error
• Variance reduction: label those points that would minimize output variance, which is one of the components of error
• Balance exploration and exploitation: the choice of examples to label is seen as a dilemma between exploration and exploitation over the data space representation. This strategy manages the compromise by modelling the active learning problem as a contextual bandit problem. For example, Bouneffouf et al.[6] propose a sequential algorithm named Active Thompson Sampling (ATS), which, in each round, assigns a sampling distribution on the pool, samples one point from this distribution, and queries the oracle for this sample point's label.
• Exponentiated Gradient Exploration for Active Learning:[7] in this paper, the authors propose a sequential algorithm named exponentiated gradient (EG)-active that can improve any active learning algorithm by an optimal random exploration.
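As a concrete illustration of uncertainty sampling, here is a minimal sketch (not from the article; it assumes a scikit-learn-style classifier exposing predict_proba, and the function name is ours):

import numpy as np

def uncertainty_sampling(model, X_unlabeled, n_queries=10):
    # confidence of the model's top class for each unlabeled point
    proba = model.predict_proba(X_unlabeled)
    confidence = proba.max(axis=1)
    # the least-confident points are chosen for labeling (they form T_C)
    return np.argsort(confidence)[:n_queries]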
A wide variety of algorithms have been studied that fall into these categories.[1][3]

## Minimum Marginal Hyperplane

Some active learning algorithms are built upon support vector machines (SVMs) and exploit the structure of the SVM to determine which data points to label. Such methods usually calculate the margin, ${\displaystyle W}$, of each unlabeled datum in ${\displaystyle T_{U,i}}$ and treat ${\displaystyle W}$ as an ${\displaystyle n}$-dimensional distance from that datum to the separating hyperplane. Minimum Marginal Hyperplane methods assume that the data with the smallest ${\displaystyle W}$ are those that the SVM is most uncertain about and therefore should be placed in ${\displaystyle T_{C,i}}$ to be labeled. Other similar methods, such as Maximum Marginal Hyperplane, choose data with the largest ${\displaystyle W}$. Tradeoff methods choose a mix of the smallest and largest ${\displaystyle W}$s.

## Meetings

• 2016 "Workshop Active Learning: Applications, Foundations and Emerging Trends"[8]

## Notes

1. ^ a b c Settles, Burr (2010), "Active Learning Literature Survey" (PDF), Computer Sciences Technical Report 1648, University of Wisconsin–Madison, retrieved 2014-11-18.
2. ^ Rubens, Neil; Elahi, Mehdi; Sugiyama, Masashi; Kaplan, Dain (2016). "Active Learning in Recommender Systems". In Ricci, Francesco; Rokach, Lior; Shapira, Bracha. Recommender Systems Handbook (2nd ed.). Springer US. ISBN 978-1-4899-7637-6.
3. ^ a b
4. ^ Lughofer, E. (2012), "Hybrid Active Learning (HAL) for Reducing the Annotation Efforts of Operators in Classification Systems", Pattern Recognition, Vol. 45 (2), pp. 884-896, 2012.
5. ^ Lughofer, E. (2012), "Single-Pass Active Learning with Conflict and Ignorance", Evolving Systems, Vol. 3 (4), pp. 251-271, 2012.
6. ^ Bouneffouf et al. (2014), "Contextual Bandit for Active Learning: Active Thompson Sampling", Neural Information Processing, 21st International Conference, ICONIP 2014.
7. ^ Bouneffouf et al. (2016), "Exponentiated Gradient Exploration for Active Learning", Computers, Vol. 5 (1), 2016, pp. 1-12.
8. ^ http://vincentlemaire-labs.fr/iknow2016/

## Other references

• Rubens, N., Elahi, M., Sugiyama, M., Kaplan, D. "Active Learning in Recommender Systems", in Recommender Systems Handbook (eds. F. Ricci, P. B. Kantor, L. Rokach, B. Shapira). Springer, 2015.
• Active Learning Tutorial, S. Dasgupta and J. Langford.
2016-09-26 23:22:09
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 17, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6013762950897217, "perplexity": 2278.148532044745}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-40/segments/1474738660898.27/warc/CC-MAIN-20160924173740-00279-ip-10-143-35-109.ec2.internal.warc.gz"}
http://crapworks.de/post/influxdb-clustering/
InfluxDB is a quite new timeseries database. I had a look at it during my search for alternatives to Graphite's carbon/whisper backend. It looks pretty promising, but right now it needs some effort to get it up and running, especially if you want to build a cluster (one of the reasons I was searching for an alternative to carbon). Using version 0.6.5, I'm going to describe what you have to do to set up a 3-node cluster with a replication level of three. Primarily for me as a reminder, but maybe someone will find this useful. I assume you use the Debian package provided on the InfluxDB website.

Install influxdb:

$ aptitude install influxdb

At least in this version, the package starts InfluxDB immediately after installing it. This is bad, because you have to configure clustering BEFORE the first start of your shiny new timeseries database. So stop it again, and delete the data directory that has just been created:

$ service influxdb stop
$ rm -rf $datadir

Now you can configure the database for clustering. On the first node (we will use this as our seeder host) DO NOT configure any seeders! Just set the replication level to the desired amount of replicas:

[sharding]
...
replication-factor = 3

If hostname on your system does not return a name that is resolvable from the other nodes (which can happen a lot, since it returns just the hostname portion, not a FQDN), you need to set the hostname parameter to a value that is resolvable from the other nodes (you should do that on all three nodes!):

hostname = "my.fqdn.net"

Start the first node!

$ service influxdb start

On the second and third node, configure the same replication level as on the first node. Additionally, set the first node as the seed server:

hostname = "my.fqdn.net"

[cluster]
...
seed-servers = ["first.node.fqdn.net:8090"]

[sharding]
...
replication-factor = 3

Start the second and third node!

$ service influxdb start

At this point, your cluster should be up and running, and you should see all three nodes in the cluster section of the web GUI.
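To summarize, here is a minimal sketch of the two config variants side by side (our consolidation of the fragments above, using only the keys discussed in this post; hostnames are placeholders and all other options keep their defaults):

# first (seed) node: no seed-servers configured
hostname = "first.node.fqdn.net"

[sharding]
replication-factor = 3

# second and third nodes: point at the first node
hostname = "second.node.fqdn.net"    # resp. "third.node.fqdn.net"

[cluster]
seed-servers = ["first.node.fqdn.net:8090"]

[sharding]
replication-factor = 3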
2018-10-22 22:25:43
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5362669229507446, "perplexity": 1147.2698998797819}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-43/segments/1539583515555.58/warc/CC-MAIN-20181022222133-20181023003633-00418.warc.gz"}
https://osadajablonowiec.pl/asphalt/322/Wed_194/
# understanding the backward pass through batch normalization

### Understanding the backward pass through Batch Normalization Layer
Feb 12, 2016 - Posted on February 12, 2016. At the moment there is a wonderful course running at Stanford University, called CS231n - Convolutional Neural Networks for Visual Recognition. (From Flair of Machine Learning: an explanation of gradient flow through the BatchNorm layer following the circuit representation learned in Stanford's class CS231n.)

### Backpropagation through BatchNormalization (CSDN blog; translated from Chinese)
Oct 31, 2018 - Note: this post is reposted from "Understanding the backward pass through Batch Normalization Layer". The derivation is clear, and the use of computation graphs greatly reduces the complexity and difficulty of deriving gradients by backpropagation; strongly recommended.

### Deriving the Gradient for the Backward Pass of Batch Normalization
Sep 14, 2016 - This version of the batchnorm backward pass can give you a significant boost in speed. I timed both versions and got a superb threefold increase in speed. Conclusion: in this blog post, we learned how to use the chain rule in a staged manner to derive the expression for the gradient of the batch norm layer.

### Deriving Batch-Norm Backprop Equations (Chris Yeh)
Aug 28, 2017 - Another take on the row-wise derivation of $$\frac{\partial J}{\partial X}$$; see also the (slow) step-by-step backpropagation through the batch normalization layer.

### Forward pass and backward pass in project scheduling
(An unrelated use of the same terms.) Forward pass is a technique to move forward through the network diagram to determine project duration and find the critical path or free float of the project, whereas backward pass means moving backward from the end result to calculate late start or to find whether there is any slack in an activity.

### Layers — ML Glossary documentation
Understanding the backward pass through Batch Norm. Convolution: in a CNN, a convolution is a linear operation that involves multiplication of a weight (kernel/filter) with the input.

### Batch, Mini Batch, Batch Norm: related concepts (Zhihu; translated from Chinese)
The concept of a batch is easy to understand, but newcomers can get confused when experienced people talk about batch, mini batch, and batch size; this post tries to describe the related concepts. References: [1] "Understanding the backward pass through Batch Normalization Layer"; [2] "Deep Learning with Python".

### Forward and Back Propagation over a CNN... code from Scratch!!
Jun 11, 2020 - References: "Understanding the backward pass through Batch Normalization Layer"; "Backpropagation in a Convolutional Neural Network". Hope this article helps you to understand the intuition behind the forward and backward passes.

### Fully connected neural networks, part 2 (Tencent Cloud; translated from Chinese)
Simply put, Batch Normalization inserts a normalization step between wx+b and f(wx+b) in every layer. Normalization here means transforming wx+b to have mean 0 and variance 1.

### optimization - PyTorch - Should the backward() function be inside or outside the batch loop?
Dec 26, 2019 - The first one is batch gradient descent, and the second one is gradient descent. In most of the problems we want to do batch gradient descent, so the first one is the right approach. It is also likely to train faster. You may use the second approach if you want to do gradient descent (but it is seldom desired to do GD when you can do batch GD).

### Batch Normalization experiments (sanshonoki's diary; translated from Japanese)
Jan 08, 2018 - For the mechanics of backpropagation through Batch Normalization, "Understanding the backward pass through Batch Normalization Layer" was frequently cited in various articles. The experiments use MNIST; the model is defined as follows.

### Tutorial: training on larger batches with less memory in PyTorch
Sep 08, 2020 - Therefore, during the backward pass through the model, ... Reference: "Towards Theoretical Understanding of Large Batch Training in Stochastic Gradient Descent", arXiv abs/1812.00542 (2018).

### Li Li: the principle and implementation of Batch Normalization in CNNs (translated from Chinese)
Aug 18, 2017 - We implement a more optimized version. [Note: our earlier implementation was already fairly optimized; the point of the assignment is to use a more "primitive" computation-graph decomposition, e.g. splitting np.mean into additions and divisions. Interested readers can refer to "Understanding the backward pass through Batch Normalization Layer" and then optimize it into our version.]

### Matrix form of backpropagation with batch normalization
In Python, as explained in "Understanding the backward pass through Batch Normalization Layer" (see also the cs231n 2020 lecture 7 slides and cs231n 2020 assignment 2 on Batch Normalization). Forward pass:

def batchnorm_forward(x, gamma, beta, eps):
    N, D = x.shape
    # step 1: calculate the mean
    mu = 1./N * np.sum(x, axis=0)
    # step 2: subtract the mean vector from every training example
    xmu = x - mu
    # step 3: following the ...

### Homework 3 Part 1
In this homework, you will develop a basic understanding of completing a forward and backward pass through a GRUCell. NOTE: your GRU cell will have a fundamentally different implementation in comparison to the RNN cell (mainly in the backward method). This is a pedagogical decision to introduce you to a variety of approaches.

### Detailed explanation of batch normalization (Develop Paper)
Jan 30, 2020 - In practice, matrix or vector operations are usually used, such as element-by-element multiplication, sums along an axis, matrix multiplication, etc.; for details, see "Understanding the backward pass through Batch Normalization Layer" and batchnorm in Caffe. Also covers the prediction phase of batch normalization.

### Understanding Batch Normalization with Examples in Numpy
Mar 27, 2018 - So for today, I am going to explore batch normalization ("Batch Normalization: Accelerating Deep Network Training by Reducing Internal Covariate Shift" by Sergey Ioffe and Christian Szegedy), to strengthen my understanding of data preprocessing.
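All of the excerpts above point at the same derivation; for reference, here is a minimal sketch of the resulting backward pass (our addition, following the standard "simplified" form of the gradient; the cache layout is an assumption):

import numpy as np

def batchnorm_backward(dout, cache):
    # cache saved by the forward pass: normalized input xhat, scale gamma,
    # and ivar = 1/sqrt(var + eps)  (this layout is our assumption)
    xhat, gamma, ivar = cache
    N = dout.shape[0]

    dbeta = np.sum(dout, axis=0)
    dgamma = np.sum(dout * xhat, axis=0)

    # the simplified single-expression gradient with respect to the input x
    dxhat = dout * gamma
    dx = (ivar / N) * (N * dxhat
                       - np.sum(dxhat, axis=0)
                       - xhat * np.sum(dxhat * xhat, axis=0))
    return dx, dgamma, dbeta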
2021-07-28 06:42:39
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6901097297668457, "perplexity": 3217.1008426755516}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-31/segments/1627046153531.10/warc/CC-MAIN-20210728060744-20210728090744-00450.warc.gz"}
https://cinema.casinoplay.icu/post3786-kirutimi.php
Latex equations multiple lines

This page outlines some more advanced uses of mathematics markup using LaTeX. You can choose the layout that better suits your document, even if the equations are really long or if you have to write several equations on the same line. The standard LaTeX tools for equations may lack some flexibility, causing overlapping or even trimming part of the equation when it's too long; we can surpass these difficulties with amsmath.

To have the equation numbering follow from your section or subsection heading, you must use the amsmath package or use AMS class documents. To number subordinate equations in a numbered equation environment (for example, Maxwell's equations), place the part of the document containing them in a subequations environment. Referencing subordinate equations can be done using either of two methods; it is possible to add both labels in case both types of references are needed.

A problem often encountered with the displayed environments displaymath and equation is the lack of any ability to span multiple lines. While it is possible to define lines individually, these will not be aligned. The amsmath environments below solve this.

Use multline to split equations without alignment: the first line is set flush left and the last line flush right. Insert a double backslash to set a point for the equation to be broken; the first part will be aligned to the left and the second part will be displayed on the next line and aligned to the right. Beware the spelling: there is no "i" in "multline", and writing "multiline" gives an "Environment multiline undefined" error. (This seems to be one of the most awful choices of naming in all of LaTeX, given how many people instinctively want to put that second "i" in there.) For multline the placement of the equation number (tag) is fixed. If you want finer control, you can also right-align the first line manually by appending a fixed amount of white space to it, varied to taste.

Use split to split equations with alignment. Split is very similar to multline: use the split environment to break an equation and to align it in columns, just as if the parts of the equation were in a table. This environment must be used inside an equation environment. For split the tag placement is customizable: by default (the centertags option of amsmath) tags are centered vertically, while the tbtags option makes the placement the same as for multline.

The gather and align environments both give us the result we want, albeit in slightly different manners: with gather every equation is centered, while align aligns the equations at the marked points. Note that the align environment must not be nested inside an equation or similar environment; instead, align is a replacement for such environments, and the contents inside an align are automatically placed in math mode. The alignat environment is similar to align, but left aligns the first equation column, right aligns the last column, and takes an argument specifying the number of columns. Another choice is the aligned environment, again from the amsmath package, and a further workaround is to use the array environment inside the equation environment.

The breqn package can break equations automatically, but it does not align the broken part of the equation to the right by itself (noticeable when the first term is short, in which case it is center aligned), and the package unfortunately has huge incompatibility issues.

The cases environment allows the writing of piecewise functions. In cases, text-style math is used; display style may be used instead by using the dcases environment from the mathtools package. Often the second column consists mostly of normal text. There are also a few environments that don't form a math environment by themselves and can be used as building blocks for more elaborate structures.

Page breaks in and around displays are controlled by penalties: 0 means "it is permissible to break here" and 4 forces a break; an optional argument denotes the priority of page breaks in equations, where 1 means "allow page breaks but avoid them" and 4 means "break whenever you want". The default means never break immediately before a display; Knuth (TeXbook, chapter 19) explains this as a printers' tradition not to have a displayed equation at the start of a page. It can be relaxed. Sometimes an equation might look best kept together with the preceding text by a higher penalty, for example a single-line paragraph about a single-line equation, especially at the end of a section.

Although many common operators are available in LaTeX, sometimes you will need to write your own; if an operator is frequently used, it is preferable to define a new operator that can be used throughout the entire document. For example, it is convenient to define a new operator that sets an equals sign with H over it together with a provided fraction. Sometimes the comments are longer than the formula being commented on, which can cause spacing problems; yet again, the syntax is different, and for more extensible arrows you must use the mathtools package.

There are defaults for the placement of subscripts and superscripts: limits for the lim operator, for instance, are usually placed below the symbol. To change the default placement of summation-type symbols to the side for every case, add the nosumlimits option to the amsmath package; to change the placement for integral symbols, add intlimits to the options. If you want to change its state, simply group it.

There may be a time when you would prefer to have some control over the font size: using text-mode maths, a simple fraction is set small by default. If you want to keep the size consistent, you can declare each fraction to use the display style instead. One potential downside is that such a command sets the global maths sizes, as it can only be used in the document preamble. The values you input are assumed to be point (pt) sizes, and the changes only take place if the value of the first argument matches the current document text size, so it is common to see a set of declarations in the preamble in the event of the main font being changed. Short skips are used if the preceding line ends, horizontally, before the formula; these parameters must be set after.
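A minimal sketch pulling the two main environments together (our addition; the formula content is arbitrary):

\documentclass{article}
\usepackage{amsmath}
\begin{document}

% multline: first line flush left, last line flush right
\begin{multline}
  S = a_1 + a_2 + a_3 + a_4 \\
      + a_5 + a_6 + a_7 + a_8
\end{multline}

% split: chosen alignment points, a single equation number
\begin{equation}
\begin{split}
  x &= a + b + c \\
    &\qquad + d + e
\end{split}
\end{equation}

\end{document}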
2020-09-29 19:31:59
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8464375138282776, "perplexity": 2087.061384087205}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-40/segments/1600402088830.87/warc/CC-MAIN-20200929190110-20200929220110-00699.warc.gz"}
https://zbmath.org/?q=an%3A1059.49010
# zbMATH — the first resource for mathematics

Minty variational inequalities, increase-along-rays property and optimization. (English) Zbl 1059.49010

Summary: Let $$E$$ be a linear space, let $$K\subseteq E$$ and $$f:K \to\mathbb{R}$$. We formulate in terms of the lower Dini directional derivative the problem of the generalized Minty variational inequality GMVI$$(f',K)$$, which can be considered as a generalization of MVI$$(f',K)$$, the Minty variational inequality of differential type. We investigate, in the case of $$K$$ star-shaped, the existence of a solution $$x^*$$ of GMVI$$(f',K)$$ and the property of $$f$$ to increase along rays starting at $$x^*$$, i.e. $$f\in\text{IAR}(K,x^*)$$. We prove that the GMVI$$(f',K)$$ with radially l.s.c. function $$f$$ has a solution $$x^*\in\text{ker}\,K$$ if and only if $$f\in\text{IAR}(K,x^*)$$. Further, we prove that the solution set of the GMVI$$(f',K)$$ is a convex and radially closed subset of $$\text{ker}\,K$$. We show also that, if the GMVI$$(f',K)$$ has a solution $$x^*\in K$$, then $$x^*$$ is a global minimizer of the problem $$\min f(x)$$, $$x\in K$$. Moreover, we observe that the set of the global minimizers of the related optimization problem, its kernel, and the solution set of the variational inequality can be different. Finally, we prove that, in the case of a quasiconvex function $$f$$, these sets coincide.

##### MSC:
49J40 Variational inequalities
47J20 Variational and other types of inequalities involving nonlinear operators (general)
47N10 Applications of operator theory in optimization, convex analysis, mathematical programming, economics
2021-09-21 12:23:15
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.707434356212616, "perplexity": 1148.5623307721899}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-39/segments/1631780057202.68/warc/CC-MAIN-20210921101319-20210921131319-00003.warc.gz"}
http://the-user.org/post/tried-variadic-templates
Hi!

#include <iostream>
#include <string>

using namespace std;

// forward declaration, so the recursive call below can find print
template<typename... T>
void print(T... args);

// a struct, because there are no partial specializations
// for functions
template<typename... args>
struct Print
{
    // checking, because you cannot explicitly construct
    // a parameter pack of strings
    static_assert((sizeof...(args)) == 0, "illegal arguments");

    static void exec()
    {
    }
};

template<typename... args>
struct Print<string, args...>
{
    static void exec(string x, args... rest)
    {
        cout << x << endl;
        // recursion, because there is no static-for
        // (amendment: unlike the foreach for tuples in D)
        print(rest...);
    }
};

// that function works,
// but it would probably be three lines in D
template<typename... T>
void print(T... args)
{
    Print<T...>::exec(args...);
}

I think you see what I meant: variadic templates are nice for printf and tuples, but for any slightly different task they are still ugly. Yes, I used it, really (skip deleted files and minor edits). It was ugly, but it provides some abstraction making the code (probably not readable) at least less redundant; I do not like copy&paste programming, and the reason for copy&paste programming is the lack of nice meta-programming features.

Some last words on why I am not using D: in C++ I can achieve anything using templates and implicit casts, and I have real value semantics. D did not want to be like Java, but it forbids some stuff which is responsible for a lot of flexibility in C++. And if I were to switch languages, I would try to choose a language involving some new, innovative and consistent concepts; that is not the case for D, maybe for Scala, but Scala does not allow value semantics either.

### One Response to "Tried Variadic Templates"

1. Templates Says:
   Their are lot of defects in this Variadic Templates.

   Seems to be spam, but I had not noticed it till today (Oct. 3rd 2011), thus I will keep it. The User
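Postscript: a usage sketch of the code above (our addition; remember that the specialization only accepts std::string arguments, so string literals must be wrapped):

print(string("Hello"), string("variadic"), string("world")); // one line each
print(); // the empty base case also compiles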
2018-11-20 08:20:04
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.38190799951553345, "perplexity": 6142.486468783096}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-47/segments/1542039746301.92/warc/CC-MAIN-20181120071442-20181120093442-00332.warc.gz"}
https://cemc1.uwaterloo.ca/~math600/wp/m-6/
# M-6: Creating Procedures

We already saw how to create a new mathematical function using the -> operator. This lesson focuses on Maple procedures. (In general, for programming, there is not a strong distinction between functions and procedures, so be aware that we occasionally use different words for the same thing.) A procedure is a set of rules for taking some input, doing some steps, and then giving back (returning) an output. The new generality beyond ->-style functions comes from the fact that we will be able to declare multiple instructions as part of the procedure definition. Here's a tiny example of a procedure that returns the square of its input:

> square := proc(x);
    return x*x;
  end;

As you might guess, after executing this line, square(5) will give back 25 and square(y^2) will give back y^4. Try this yourself and be careful to use the shift-enter method introduced in the previous lesson. The general form of a procedure definition is

> «procedure name» := proc(«sequence of arguments»);
    «body»
  end;

where there can be 0, 1, 2, or any number of arguments. Arguments means the same as inputs for a procedure. (If you're skeptical that any useful procedure could have 0 arguments, try running rand(). Other situations are when you want to interact with user input or output, access the printer or files, or if the procedure has some side effects on global variables.)

## return of the proc

Wherever the statement return «value» is found inside of your procedure body, the procedure immediately quits and gives back the stated value to whatever called the procedure. So the following code is another way of finding the absolute value:

> absValue := proc(x);
    if x < 0 then
      return -x;  # 1st return statement
    end:
    return x;     # 2nd return statement
  end;

You don't have to use an else clause because the procedure could only possibly get to the 2nd return statement if the body of the if statement (the 1st return) is not evaluated. Maple will allow you to leave out the return statement if it's on the last line, but we strongly recommend using explicit returns like above to keep your style consistent, readable, and error-free. (A common cause of errors is forgetting to return. For example, what happens to the procedure above if we change the 3rd line to just -x;?)

# Local Variables

Sometimes you'll want to write a complicated procedure that has some intermediate steps. As we have done many times before, we'll use the quadratic equation as an example. Let's create a procedure that takes inputs (arguments) a, b, and c and returns the set of all real zeroes to $$ax^2+bx+c=0$$. We know that computing the discriminant is an important intermediate step and so it makes sense to compute that separately. This gives

> quadraticRootSet := proc(a, b, c);
    disc := b*b-4*a*c;
    if (disc < 0) then
      return {};
    end;
    #... more stuff, square roots, etc
  end;

By the time we finish writing out this procedure, Maple will warn us that disc is "implicitly declared local." Accordingly, we will explicitly declare it as local to make the warning go away:

> quadraticRootSet := proc(a, b, c) local disc;
    #... as before

That is to say: immediately after the argument list and before any (semi)colon, type local and then the sequence of all variables you want to declare as local variables for that procedure. Maple complains about this because you could in principle also want to declare disc as global. (See ?global for details.)
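For concreteness, here is one possible completion of quadraticRootSet (our sketch; the course text deliberately elides this part), returning the set of real roots via the usual formula:

> quadraticRootSet := proc(a, b, c) local disc;
    disc := b*b - 4*a*c;
    if (disc < 0) then
      return {};
    end;
    return {(-b + sqrt(disc))/(2*a), (-b - sqrt(disc))/(2*a)};
  end;

(Since the result is a set, a repeated root appears only once.)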
The point of a local variable is that it does not interfere with any other variables defined with the same name in other places. Consider the following set of commands: We define tmp to equal 4, and even though the procedure f also sets tmp to 10000, this is a different variable of the same name. So at the end, the tmp that we print still has the value of 4. Imagine a long Maple document with a dozen inter-related procedures, each with a loop counter variable i, and you can begin to appreciate why it makes sense to give each procedure its own local scope (a.k.a. local namespace).

# Memoization

Often, a procedure should have its output depending solely on its arguments. (Functions in the sense of a mathematical relation whose inverse is injective are like this; rand() is an example of one that is not.) In such a case, you can save a great deal of computational effort by remembering past input-output pairs. A typical example is the Fibonacci function:

> fibonacci := proc(n):
    if (n <= 2) then
      return 1;
    end:
    return fibonacci(n-1) + fibonacci(n-2);
  end:

Try this and see that it gives the correct first few values. (E.g., run seq(fibonacci(i), i=1..10);) Now, try to compute fibonacci(25) or fibonacci(30). There will be a noticeable delay in evaluating it! This is surprising, since you will see that these numbers are not so big, and only require doing about thirty additions — or do they?

In fact, you've asked Maple to compute the Fibonacci numbers in quite a different way than what you would do by hand. By hand, you'd probably make a list starting from the first few values and adding two each time to get the next one. But you're telling Maple (we'll use F for short): the way to compute F(k) is to call F(k-1) and then call F(k-2) and then add these numbers up. Now think about what this means for F(30): it requires calling F(29) and F(28). In turn, we'll eventually have to call F(28) and F(27) once for F(29), and F(27) and F(26) in order to compute F(28). (You can visualize all of the calls as a binary tree.) There is a great deal of redundancy going on, and in fact computing F(k) in this way takes about F(k) steps; and 0.8 million steps takes a long time, even for Maple.

There are two general techniques called dynamic programming and memoization which can help ameliorate this kind of problem. We'll only discuss memoization since it has an easy catch-all implementation in Maple; the Sieve of Eratosthenes for computing primes is an example of dynamic programming that you may already know, and the "write down 1, 1, 1+1=2, 1+2=3, …" method for the Fibonacci numbers is another example.

Back to memoization. To apply memoization to a procedure means that

1. we attach a hidden lookup table to the procedure, mapping inputs to outputs
2. when the procedure is called on an input that has not been seen before, we run the procedure body but, just before returning, we store the input and output in the table
3. when the procedure is called on an input that has been seen before, we skip the procedure body and just give the value in the table.

Maple will take care of all of this for you! Add option remember; after the variable list (before or after the local variables, if they are present). This runs much faster: computing F(k) with memoization uses fewer than 2k calls, much less than the F(k) ~ $$1.6^k$$ many calls used by the non-memoized version.
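Concretely, the memoized Fibonacci is just (our sketch of the change described above):

> fibonacci := proc(n) option remember:
    if (n <= 2) then
      return 1;
    end:
    return fibonacci(n-1) + fibonacci(n-2);
  end:

Now fibonacci(30) returns essentially instantly, because each F(k) is computed only once and afterwards read back from the hidden remember table.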
To give just one example, suppose we want to find the roots of a procedure such as absValue above. The solve procedure is unable to cope with this setting, and so is its floating-point cousin fsolve. Although it is beyond the scope of this course, you would probably want to use the bisection method (binary search) in this setting.

These are course notes for the University of Waterloo's course Math 600: Mathematical Software. © 2012—. Written and developed by David Pritchard and Stephen Tosh.
http://mathbio.colorado.edu/index.php/MBW:Flocculation_Dynamics_-_a_PDE_Model
# Executive Summary

In "Klebsiella pneumoniae Flocculation Dynamics", Bortz et al. [1] propose a nonlinear transport partial differential equation (PDE) as a model for the flocculation dynamics of bacterial aggregates in suspension. One goal for this Wiki page is to describe their model. Another goal is to provide enhanced detail on how the authors approached solving the PDE numerically. Finally, the most important goal is to present and explain a Matlab program that approximates a solution to the PDE.

# Overview

1. Mathematics Used - The following model relies heavily on partial differential equations to model aggregation, fragmentation, and growth within a closed system (i.e. a body). Using the Method of Lines (MOL) technique, the governing PDE is reduced, through discretization of the spatial dimension, into a system of ordinary differential equations. The system of ODEs can then be solved in numerous ways. For other examples of PDEs in mathematical biology, look at Traveling Waves in Excitable Media and Gravitational Effects on Blood Flow.
2. Type of Model - This model attempts to obtain numerical rates of aggregation, fragmentation, and growth for clusters of bacteria.
3. Biological System Studied - Much research is being done to study the interaction of micro-organisms, inside the body and outside. This research has applications ranging from epidemiology to water treatment to evolutionary biology and the origin of multicellular life.

# Context/Biological Phenomenon under Consideration

"Klebsiella pneumoniae" are a common source of infections in the bloodstream. How these bacteria behave in the bloodstream, and in particular how they behave in suspension (as opposed to how they behave attached to a substrate), provides a focus for the paper mentioned above and for the study in this Wiki. We consider three factors in the flocculation dynamics of the bacteria: aggregation, fragmentation, and growth. Aggregation simply means an individual bacterium or group of bacteria joining together with another individual bacterium or group of bacteria to form one larger group (or aggregate). Fragmentation is the reverse process, where an aggregate of bacteria breaks into smaller aggregates, meaning each new aggregate consists of fewer bacteria than the original clump. Finally, growth refers to the increase in size (volume) of an aggregate.

• For those of you food lovers out there: believe it or not, flocculation has a large impact on Cheese Production

# Mathematical Model with Variable/Parameter Definitions

Bortz et al. propose the following governing PDE as a model, and solving it provides a focus for this wiki:

$b_t + (G(x)b)_x = A(x,b) + F(x,b)$

with $G(\underline{x})b(t,\underline{x}) = 0$ and $b(0,x) = b_0(x)$. Figure 1 shows a representation of the processes included in this model.

Figure 1: Aggregation, Fragmentation, and Growth Schematic.
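Before walking through each symbol, it may help to see the model's ingredients written out as executable MATLAB definitions. The sketch below is our own (not from the paper); every symbol in it is defined in the Variables and Parameters lists that follow, the $\sqrt{\epsilon/\nu}$ factor is converted from 1/s to 1/min exactly as the implementation later on this page does, and the last lines numerically confirm that $\Gamma(\cdot,y)$ integrates to one over daughter sizes.

% Model ingredients as anonymous functions (sketch; symbols defined below)
xmin = 2; xmax = 1000;                         % fL
gammaG = 6.8e-4; gammaA = 2.7e-15; gammaF = 6.6e-5;
ee = 4.43e-4; nu = 1.99e-6;                    % m^2/s^3 and m^2/s
G   = @(x) gammaG.*x.*(1 - x./xmax);           % logistic growth rate
K_A = @(x,y) gammaA*sqrt(ee/nu)*60.*(x.^(1/3) + y.^(1/3)).^3;  % 1/min
K_F = @(x) gammaF.*(x - xmin).^(1/3);          % fragmentation rate
Gam = @(x,y) (x > y/3 & x <= 2*y/3).*(3./y);   % daughter-size density
% Sanity check: Gamma(.,y) should integrate to 1 over x for any parent y
y = 300; xg = linspace(0, y, 1e6);
trapz(xg, Gam(xg, y))                          % returns ~1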
Variables

• $x$ - volume of bacterial aggregate (in fL)
• $\underline{x} = 2$ fL, $\overline{x} = 1000$ fL - assumed minimum and maximum volumes
• $t$ - time (in minutes unless otherwise stated)
• $b(t,x)$ - density function representing the number of aggregates of size x per fL
• $G(x) = \gamma_G x\left(1 - \frac{x}{\overline{x}}\right)$ - assumed logistic growth rate of bacterial aggregates of size x
• $A(x,b) = A_{in} - A_{out}$ - net rate at which bacterial flocs aggregate
  • $A_{in} = \frac{1}{2}\int_{\underline{x}}^{x-\underline{x}} K_A(y, x-y)\, b(t,y)\, b(t,x-y)\, dy$ for $x \in [2\underline{x}, \overline{x}]$
  • $A_{out} = b(t,x)\int_{\underline{x}}^{\overline{x}-x} K_A(x,y)\, b(t,y)\, dy$ for $x \in [\underline{x}, \overline{x}-\underline{x}]$
• $K_A(x,y) = \gamma_A \left(\frac{\epsilon}{\nu}\right)^{1/2}\left(x^{1/3} + y^{1/3}\right)^3$ - assumed turbulent mixing kernel describing the rate at which flocs of volumes x and y join to make a floc of size x + y
• $F(x,b) = F_{in} - F_{out}$ - net rate at which bacterial flocs fragment
  • $F_{in} = \int_x^{\overline{x}} \Gamma(x,y)\, K_F(y)\, b(t,y)\, dy$ for $x \in [\underline{x}, \overline{x}-\underline{x}]$
  • $F_{out} = \frac{1}{2} K_F(x)\, b(t,x)\, \triangle x$ for $x \in [2\underline{x}, \overline{x}]$
• $K_F(x) = \gamma_F (x - \underline{x})^{1/3}$ - describes the rate at which flocs fragment
• $\Gamma(x,y) = \begin{cases} 3/y; & \frac{y}{3} < x \leq \frac{2y}{3} \\ 0; & \text{otherwise} \end{cases}$ - describes the probability density of daughter flocs for the fragmentation of a parent floc of size y

Parameters

• $\gamma_G = 6.8\times 10^{-4}/\textrm{min}$
• $\gamma_A = 2.7\times 10^{-15}/\textrm{fL}^2$
• $\gamma_F = 6.6\times 10^{-5}/(\mu\textrm{m}\,\textrm{min})$ (note that fL = µm³, so $K_F$ has units of 1/min)
• $\epsilon = 4.43\times 10^{-4}\ \textrm{m}^2/\textrm{s}^3$
• $\nu = 1.99\times 10^{-6}\ \textrm{m}^2/\textrm{s}$

# Analysis (Analytical and Computational)

• In order to solve the governing PDE, we will employ a strategy known as the Method of Lines (MOL). In general, to execute the MOL technique, we discretize our PDE in its spatial variable(s), which results in a system of ODEs. We can then use our favorite ODE solver to advance the PDE solution in discrete time steps.

## Discretization Scheme

Bortz et al. provide a detailed analysis of the discretization scheme and a proof of its convergence, including the allowable space of functions, $H$, we can use for our density function. An assumption we make for the rest of this Wiki is that of an equispaced mesh. To produce the scheme, we need a set of basis elements, for $i = 1, \dots, N$,

$\beta_i^N(x) = \begin{cases} 1; & x_{i-1}^N \leq x < x_i^N \\ 0; & \text{otherwise} \end{cases}$

where $x_i$ represents a node in the mesh from the minimum volume $x_0$ to the maximum volume $x_N$.
These functions form an orthogonal basis for

$H^N = \left\{ h \in H \,\middle|\; h = \sum_{i=1}^N \alpha_i \beta_i^N,\ \alpha \in \mathbb{R}^N \right\}$

with projections $\pi^N : H \rightarrow H^N$,

$\pi^N h = \sum_{j=1}^N \alpha_j \beta_j^N \quad \text{where} \quad \alpha_j = \frac{1}{\triangle x}\int_{x_{j-1}^N}^{x_j^N} h(x)\, dx.$

Finally, we need one more operator, $\mathcal{G}^N$, defined as

$(\mathcal{G}^N h)(x) = \sum_{j=1}^N \frac{1}{\triangle x}\left(G(x_{j-1})\alpha_{j-1} - G(x_j)\alpha_j\right)\beta_j^N(x).$

All of this provides the spatial discretization needed for MOL, giving the resulting system of ODEs

$b_t^N = \mathcal{G}^N b^N + \pi^N(\mathcal{A}(b^N) + \mathcal{F}(b^N))$

$b^N(0,x) = \pi^N b_0(x).$

Each of the operators in the discretized system generates a matrix (or vector) that in combination provide the framework for an ODE solver. The following gives a detailed description of each term in the discretized system. We also include the code associated with each.

## Initial Conditions

Initial conditions and the left-hand-side vector, $[b^N]_t$, in our ODE system. Based on actual data taken five minutes after the biofilm colony was placed in the swirling flask, the authors determined a bi-exponential initial density function. The function used in this Wiki has been updated since the publishing of the paper:

$b_0(x) = 89.828\, e^{-0.0035303x} + 1946.6\times 10^6\, e^{-1.3159x}$

The projection into $H^N$ of these initial conditions is derived as follows:

$\pi^N(b_0(x)) = \sum_{j=1}^N \alpha_j \beta_j^N(x) \quad \text{where} \quad \alpha_j = \frac{1}{\triangle x}\int_{x_{j-1}^N}^{x_j^N} b_0(x)\, dx.$

So we have

$\alpha_1 = \frac{1}{\triangle x}\int_{x_0^N}^{x_1^N} b_0(x)\, dx,\quad \alpha_2 = \frac{1}{\triangle x}\int_{x_1^N}^{x_2^N} b_0(x)\, dx,\ \text{etc.}$

Then, to develop the left hand side of our system of differential equations, recall that $\beta_j^N(x) = 1$ when $x_{j-1} \leq x < x_j$. So for instance, $\beta_1(x_0) = 1$, but $\beta_1(x_i) = 0$ for any $i \neq 0$. Now we have

$\pi^N b_0(x_0) = \alpha_1\beta_1^N(x_0) + \alpha_2\beta_2^N(x_0) + \dots + \alpha_N\beta_N^N(x_0) = \alpha_1$

$\pi^N b_0(x_1) = \alpha_1\beta_1^N(x_1) + \alpha_2\beta_2^N(x_1) + \dots + \alpha_N\beta_N^N(x_1) = \alpha_2$

$\vdots$

$\pi^N b_0(x_{N-1}) = \alpha_1\beta_1^N(x_{N-1}) + \alpha_2\beta_2^N(x_{N-1}) + \dots + \alpha_N\beta_N^N(x_{N-1}) = \alpha_N$

In other words,

$[b^N]_t = \begin{bmatrix} \alpha_1 \\ \alpha_2 \\ \vdots \\ \alpha_N \end{bmatrix}_t$

To implement this in Matlab and actually form our initial conditions on the mesh, we wrote the following function. Of note, the first iteration of this function used symbolic integration on the original $b_0$ defined above for each calculation on the mesh. On a suggestion from Dr. Bortz, we instead ran a single symbolic integration of $b_0$ using symbolic variables $x_1$ and $x_2$ to generate the expression in the code below. With a mesh of 50 increments from $x_0$ to $x_N$, this technique sped up our code significantly.

function c = intinitcond2(x1,x2)
% This function will integrate the symbolic initial function from volume
% x1 to volume x2.
c = 19466000000000/(13159*exp((13159*x1)/10000)) - ...
    19466000000000/(13159*exp((13159*x2)/10000)) + ...
    809098694654873829376/(31798115529012125*...
    exp((254384924232097*x1)/72057594037927936)) - ...
    809098694654873829376/(31798115529012125*...
    exp((254384924232097*x2)/72057594037927936)) ;

Additionally, we can see the general pattern for the left-hand-side vector, $[b^N]_t$, in our ODE system:

$\begin{bmatrix} \sum_{i=1}^N \alpha_i\beta_i^N(x_0) \\ \sum_{i=1}^N \alpha_i\beta_i^N(x_1) \\ \vdots \\ \sum_{i=1}^N \alpha_i\beta_i^N(x_{N-1}) \end{bmatrix}_t$

## Approximating generators $\mathcal{G}^N : H^N \rightarrow H^N$ of the infinitesimal generator

This was defined as

$(\mathcal{G}^N h)(x) = \sum_{j=1}^N \frac{1}{\triangle x}\left(G(x_{j-1})\alpha_{j-1} - G(x_j)\alpha_j\right)\beta_j^N(x)$, for $h \in H^N$.

Recall, $G(x)$ represents an assumed logistic growth with a maximum size of 1000 fL, where $G(x) = \gamma_G x\left(1 - \frac{x}{\overline{x}}\right)$. When the approximating growth generator is applied to the projected density function, $b^N$, in our system of ODEs, we get the relation

$\mathcal{G}^N \beta_j^N = \frac{1}{\triangle x} G(x_j^N)\beta_{j+1}^N - \frac{1}{\triangle x} G(x_j^N)\beta_j^N.$

To see why this is true, consider application of the approximating generator to $\beta_2^N$, and note the substitution $\alpha_j = \frac{1}{\triangle x}\int_{x_{j-1}^N}^{x_j^N}\beta_2^N(x)\, dx$. A key point to remember below is that $\beta_2^N(x) = 1$ from $x_1$ to $x_2$ and is zero everywhere else. So when we integrate it across those nodes we get $\triangle x$, whereas it integrates to zero across any other pair of nodes.

$\mathcal{G}^N\beta_2^N(x) = \sum_{j=1}^N \frac{1}{\triangle x}\left(G(x_{j-1})\frac{1}{\triangle x}\int_{x_{j-2}^N}^{x_{j-1}^N}\beta_2^N(x)\, dx - G(x_j)\frac{1}{\triangle x}\int_{x_{j-1}^N}^{x_j^N}\beta_2^N(x)\, dx\right)\beta_j^N(x)$

$= \frac{1}{\triangle x}\left\{\left[G(x_0)\frac{1}{\triangle x}\int_{x_{-1}^N}^{x_0^N}\beta_2^N(x)\, dx - G(x_1)\frac{1}{\triangle x}\int_{x_0^N}^{x_1^N}\beta_2^N(x)\, dx\right]\beta_1^N(x)\right.$

$+ \left[G(x_1)\frac{1}{\triangle x}\int_{x_0^N}^{x_1^N}\beta_2^N(x)\, dx - G(x_2)\frac{1}{\triangle x}\int_{x_1^N}^{x_2^N}\beta_2^N(x)\, dx\right]\beta_2^N(x)$

$\left. + \left[G(x_2)\frac{1}{\triangle x}\int_{x_1^N}^{x_2^N}\beta_2^N(x)\, dx - G(x_3)\frac{1}{\triangle x}\int_{x_2^N}^{x_3^N}\beta_2^N(x)\, dx\right]\beta_3^N(x) + \cdots\right\}$

$= \frac{1}{\triangle x}\left\{0 - 0 + 0 - G(x_2)\frac{\triangle x}{\triangle x}\beta_2^N(x) + G(x_2)\frac{\triangle x}{\triangle x}\beta_3^N(x) - 0 + 0 \cdots\right\}$

$= \frac{1}{\triangle x}G(x_2)\beta_3^N(x) - \frac{1}{\triangle x}G(x_2)\beta_2^N(x)$

Once we have the general relationship above, we can determine the contributions to each component of the left hand side of our ODE system.
To illustrate, consider

$\mathcal{G}^N\beta_1^N(x) = \frac{1}{\triangle x}G(x_1)\beta_2^N(x) - \frac{1}{\triangle x}G(x_1)\beta_1^N(x)$

therefore

$\mathcal{G}^N\beta_1^N(x_0) = \frac{1}{\triangle x}G(x_1)\beta_2^N(x_0) - \frac{1}{\triangle x}G(x_1)\beta_1^N(x_0) = 0 - \frac{1}{\triangle x}G(x_1)$

$\mathcal{G}^N\beta_1^N(x_1) = \frac{1}{\triangle x}G(x_1)\beta_2^N(x_1) - \frac{1}{\triangle x}G(x_1)\beta_1^N(x_1) = \frac{1}{\triangle x}G(x_1) - 0$

$\mathcal{G}^N\beta_1^N(x_2) = \frac{1}{\triangle x}G(x_1)\beta_2^N(x_2) - \frac{1}{\triangle x}G(x_1)\beta_1^N(x_2) = 0 = \mathcal{G}^N\beta_1^N(x_3, x_4, \dots, x_N)$

This shows that in row one of our vector for $\mathcal{G}^N b^N$ we have the negative contribution from $\beta_1^N$ of $\frac{1}{\triangle x}G(x_1)$, and in row two, $\beta_1^N$ contributes $\frac{1}{\triangle x}G(x_1)$. Let's consider the contributions from $\beta_2^N$, which should clearly illustrate the pattern for the rest of the vector for $\mathcal{G}^N b^N$.

$\mathcal{G}^N\beta_2^N(x) = \frac{1}{\triangle x}G(x_2)\beta_3^N(x) - \frac{1}{\triangle x}G(x_2)\beta_2^N(x)$

therefore

$\mathcal{G}^N\beta_2^N(x_0) = \frac{1}{\triangle x}G(x_2)\beta_3^N(x_0) - \frac{1}{\triangle x}G(x_2)\beta_2^N(x_0) = 0$

$\mathcal{G}^N\beta_2^N(x_1) = \frac{1}{\triangle x}G(x_2)\beta_3^N(x_1) - \frac{1}{\triangle x}G(x_2)\beta_2^N(x_1) = 0 - \frac{1}{\triangle x}G(x_2)$

$\mathcal{G}^N\beta_2^N(x_2) = \frac{1}{\triangle x}G(x_2)\beta_3^N(x_2) - \frac{1}{\triangle x}G(x_2)\beta_2^N(x_2) = \frac{1}{\triangle x}G(x_2) - 0$

$\mathcal{G}^N\beta_2^N(x_3) = \frac{1}{\triangle x}G(x_2)\beta_3^N(x_3) - \frac{1}{\triangle x}G(x_2)\beta_2^N(x_3) = 0 = \mathcal{G}^N\beta_2^N(x_4, x_5, \dots, x_N)$

The pattern then emerges as follows:

$[\mathcal{G}^N]b^N = \begin{bmatrix} -\frac{1}{\triangle x}G(x_1) & 0 & \cdots & \cdots & 0 \\ \frac{1}{\triangle x}G(x_1) & -\frac{1}{\triangle x}G(x_2) & 0 & \cdots & 0 \\ \vdots \\ 0 & \cdots & 0 & \frac{1}{\triangle x}G(x_{N-1}) & -\frac{1}{\triangle x}G(x_N) \end{bmatrix} b^N$

The code to build this matrix follows.

% Build Growth Matrix
gammaG = 6.8e-4; % - parameter given
G = @(xx) gammaG.*xx.*(1-(xx./xmax));
Growthvec = zeros(N,N);
Growthvec(1,1) = -G(x(2));
for r = 2:N
    Growthvec(r,r-1) = G(x(r));
    Growthvec(r,r) = -G(x(r+1));
end
% The whole matrix has a factor of 1/delx
Growthvec = Growthvec./delx;

## Fragmentation

Projected fragmentation matrix, $\pi^N(\mathcal{F}(b^N))$. $K_F(x)$ represents the fragmentation kernel. For this analysis, we used the kernel described by the authors as $K_F(x) = \gamma_F(x - \underline{x})^{1/3}$.
Furthermore, $\Gamma(x,y)$ represents the authors' assumed uniform post-fragmentation distribution, centered around $y/2$, with width $y/3$:

$\Gamma(x,y) = \begin{cases} 3/y; & \frac{y}{3} < x \leq \frac{2y}{3} \\ 0; & \text{otherwise} \end{cases}$

To generate the probability distribution we built the following MATLAB function. (Note that on its support it returns $3/x2$, i.e. $3/y$ for a parent of size $y = x2$, as the definition of $\Gamma$ requires.)

function g = probdist(x1,x2)
% Determines the value of the post-fragmentation distribution function
% for inputs x1 and x2, where x1 is assumed less than x2 (the parent size)
if (x1 > x2/3) && (x1 <= 2*x2/3)
    g = 3/x2;   % Gamma(x1,x2) = 3/y with y = x2
else
    g = 0;
end

We can then generate $\pi^N\mathcal{F}(b^N)$:

$[\pi^N\mathcal{F}(b^N)] = \begin{bmatrix} \sum_{j=2}^N \Gamma(x_0, x_{j-1})K_F(x_{j-1})\alpha_j\triangle x \\ -\frac{1}{2}K_F(x_1)\alpha_2\triangle x + \sum_{j=3}^N \Gamma(x_1, x_{j-1})K_F(x_{j-1})\alpha_j\triangle x \\ \vdots \\ -\frac{1}{2}K_F(x_{N-2})\alpha_{N-1}\triangle x + \Gamma(x_{N-2}, x_{N-1})K_F(x_{N-1})\alpha_N\triangle x \\ -\frac{1}{2}K_F(x_{N-1})\alpha_N\triangle x \end{bmatrix}$

To see how this matrix develops from the actual model, we will demonstrate an example where we set $N = 4$. Our partition then starts at $x_0 = 2$ fL and ends at $x_4 = 1000$ fL. The fragmentation operator acting on $b^4$ is

$\mathcal{F}(b^4)(x) = \int_x^{x_4}\Gamma(x,y)K_F(y)b^4(y)\, dy \quad \text{for } x \in [x_0, x_4 - x_0]$

$\qquad -\ \frac{1}{2}K_F(x)b^4(x)\triangle x \quad \text{for } x \in [2x_0, x_4]$

and its projection is defined as

$\pi^4(\mathcal{F}(b^4)) = \sum_{j=1}^4 \gamma_j\beta_j^4(x) \quad \text{where} \quad \gamma_j = \frac{1}{\triangle x}\int_{x_{j-1}}^{x_j}\mathcal{F}(b^4)(x)\, dx$

Now consider row one of $[\pi^N\mathcal{F}(b^N)]$, so $\pi^N\mathcal{F}(b^N)(x_0) = \sum_{j=1}^4 \gamma_j\beta_j^4(x_0) = \gamma_1$, where

$\gamma_1 = \frac{1}{\triangle x}\int_{x_0}^{x_1}\left[\int_{x_0}^{x_4}\Gamma(x_0,y)K_F(y)b^4(y)\, dy\right]dx - 0 \quad (\text{since } x_0 \notin [2x_0, x_4])$

$= \frac{1}{\triangle x}\int_{x_0}^{x_1}\left[\int_{x_0}^{x_1}\Gamma(x_0,y)K_F(y)b^4(y)\, dy + \int_{x_1}^{x_2}\Gamma(x_0,y)K_F(y)b^4(y)\, dy\right.$

$\left. + \int_{x_2}^{x_3}\Gamma(x_0,y)K_F(y)b^4(y)\, dy + \int_{x_3}^{x_4}\Gamma(x_0,y)K_F(y)b^4(y)\, dy\right]dx$

For each of the inner integrals we'll use a right-hand estimate. Also, note that $b^4(x) = \sum_{j=1}^4 \alpha_j\beta_j^4(x)$ and each basis element equals one at its left node, i.e., $\beta_j^4(x_{j-1}) = 1$, and is zero at the other nodes.
Then $\gamma _{1}={\frac {1}{\triangle x}}\int _{{x_{0}}}^{{x_{1}}}\left[\Gamma (x_{0},x_{1})K_{F}(x_{1})\alpha _{2}\triangle x+\Gamma (x_{0},x_{2})K_{F}(x_{2})\alpha _{3}\triangle x\right.$ $\left.+\Gamma (x_{0},x_{3})K_{F}(x_{3})\alpha _{4}\triangle x+\Gamma (x_{0},x_{4})K_{F}(x_{4})\times 0\times \triangle x\right]dx$ $=\Gamma (x_{0},x_{1})K_{F}(x_{1})\alpha _{2}\triangle x+\Gamma (x_{0},x_{2})K_{F}(x_{2})\alpha _{3}\triangle x+\Gamma (x_{0},x_{3})K_{F}(x_{3})\alpha _{4}\triangle x$ $=\sum _{{j=2}}^{4}\Gamma (x_{0},x_{{j-1}})K_{F}(x_{{j-1}})\alpha _{j}\triangle x$ Now consider row two where $\pi ^{N}{\mathcal {F}}(b^{N})(x_{1})=\gamma _{2}$, and $\gamma _{2}={\frac {1}{\triangle x}}\int _{{x_{1}}}^{{x_{2}}}\left[\int _{{x_{1}}}^{{x_{4}}}\Gamma (x_{1},y)K_{F}(y)b^{4}(y)dy\right]dx-{\frac {1}{\triangle x}}\int _{{x_{1}}}^{{x_{2}}}{\frac {1}{2}}K_{F}(x_{1})b^{4}(x_{1})\triangle xdx$ $={\frac {1}{\triangle x}}\int _{{x_{1}}}^{{x_{2}}}\left[\int _{{x_{1}}}^{{x_{2}}}\Gamma (x_{1},y)K_{F}(y)b^{4}(y)dy+\int _{{x_{2}}}^{{x_{3}}}\Gamma (x_{1},y)K_{F}(y)b^{4}(y)dy\right.$ $\left.+\int _{{x_{3}}}^{{x_{4}}}\Gamma (x_{1},y)K_{F}(y)b^{4}(y)dy\right]dx-{\frac {1}{2}}K_{F}(x_{1})\alpha _{2}\triangle x$ $=-{\frac {1}{2}}K_{F}(x_{1})\alpha _{2}\triangle x+{\frac {1}{\triangle x}}\int _{{x_{1}}}^{{x_{2}}}\left[\Gamma (x_{1},x_{2})K_{F}(x_{2})b^{4}(x_{2})\triangle x+\Gamma (x_{1},x_{3})K_{F}(x_{3})b^{4}(x_{3})\triangle x\right.$ $\left.+\Gamma (x_{1},x_{4})K_{F}(x_{4})b^{4}(x_{4})\triangle x\right]dx$ $=-{\frac {1}{2}}K_{F}(x_{1})\alpha _{2}\triangle x+\Gamma (x_{1},x_{2})K_{F}(x_{2})\alpha _{3}\triangle x+\Gamma (x_{1},x_{3})K_{F}(x_{3})\alpha _{4}\triangle x+0$ $=-{\frac {1}{2}}K_{F}(x_{1})\alpha _{2}\triangle x+\sum _{{j=3}}^{4}\Gamma (x_{1},x_{{j-1}})K_{F}(x_{{j-1}})\alpha _{j}\triangle x$ Similarly, for the third row, $\pi ^{N}{\mathcal {F}}(b^{N})(x_{2})=\gamma _{3}$, and $\gamma _{3}={\frac {1}{\triangle x}}\int _{{x_{2}}}^{{x_{3}}}\left[\int _{{x_{2}}}^{{x_{4}}}\Gamma (x_{2},y)K_{F}(y)b^{4}(y)dy\right]dx-{\frac {1}{\triangle x}}\int _{{x_{2}}}^{{x_{3}}}{\frac {1}{2}}K_{F}(x_{2})b^{4}(x_{2})\triangle xdx$ $=-{\frac {1}{2}}K_{F}(x_{2})\alpha _{3}\triangle x+{\frac {1}{\triangle x}}\int _{{x_{2}}}^{{x_{3}}}\left[\int _{{x_{2}}}^{{x_{3}}}\Gamma (x_{2},y)K_{F}(y)b^{4}(y)dy+\int _{{x_{3}}}^{{x_{4}}}\Gamma (x_{2},y)K_{F}(y)b^{4}(y)dy\right]dx$ $=-{\frac {1}{2}}K_{F}(x_{2})\alpha _{3}\triangle x+{\frac {1}{\triangle x}}\int _{{x_{2}}}^{{x_{3}}}\left[\Gamma (x_{2},x_{3})K_{F}(x_{3})b^{4}(x_{3})\triangle x+\Gamma (x_{2},x_{4})K_{F}(x_{4})b^{4}(x_{4})\triangle x\right]dx$ $=-{\frac {1}{2}}K_{F}(x_{2})\alpha _{3}\triangle x+\Gamma (x_{2},x_{3})K_{F}(x_{3})\alpha _{4}\triangle x+0$ Finally, for the fourth row, $\pi ^{N}{\mathcal {F}}(b^{N})(x_{3})=\gamma _{4}$, and $\gamma _{4}={\frac {1}{\triangle x}}\int _{{x_{3}}}^{{x_{4}}}\left[\int _{{x_{3}}}^{{x_{4}}}\Gamma (x_{3},y)K_{F}(y)b^{4}(y)dy\right]dx-{\frac {1}{\triangle x}}\int _{{x_{3}}}^{{x_{4}}}{\frac {1}{2}}K_{F}(x_{3})b^{4}(x_{3})\triangle xdx$ $=-{\frac {1}{2}}K_{F}(x_{3})\alpha _{4}\triangle x+0$ For the MATLAB program, we turned the compact column vector above, $[\pi ^{N}{\mathcal {F}}(b^{N})]$, into an $n\times n$ matrix where the $-{\frac {1}{2}}K_{F}(x_{i})\triangle x$ terms sit nicely on the diagonal when we multiply by the density vector $b=[\alpha _{1}\cdots \alpha _{N}]'$, and each term in the summations gets multiplied by its respective density term. The code to build $\pi ^{N}{\mathcal {F}}(b^{N})$ follows. 
% Build Projection of Fragmentation Matrix
% The logic here is to build a matrix 'Frag' such that Frag*b is the
% projection of Fragmentation as a function of the projected b
gammaF = 6.6e-5; % - parameter given in paper
K_F = @(xx) gammaF.*((xx - xmin).^(1/3));
Frag = zeros(N,N);
for m = 1:N
    Frag(m,m) = -.5*K_F(x(m));
    for k = m+1:N
        Frag(m,k) = probdist(x(m),x(k))*K_F(x(k));
    end
end
Frag(1,1) = 0; % - only difference in the pattern for the diagonal
% The whole matrix has a factor of delx
Frag = Frag.*delx;

## Aggregation

Projected aggregation matrix, $\pi^N(\mathcal{A}(b^N))$. $K_A(x,y)$ represents the aggregation kernel. For this analysis, we used the turbulent mixing kernel described by the authors as $K_A(x,y) = \gamma_A\left(\frac{\epsilon}{\nu}\right)^{1/2}\left(x^{1/3} + y^{1/3}\right)^3$, and we determine the following:

$[\pi^N\mathcal{A}(b^N)] = \begin{bmatrix} -\alpha_1\sum_{j=1}^{N-1} K_A(x_1,x_j)\alpha_j\triangle x \\ \frac{1}{2}K_A(x_1,x_1)\alpha_1\alpha_1\triangle x - \alpha_2\sum_{j=1}^{N-2} K_A(x_2,x_j)\alpha_j\triangle x \\ \vdots \\ \frac{1}{2}\sum_{j=1}^{N-2} K_A(x_j,x_{N-1-j})\alpha_j\alpha_{N-1-j}\triangle x - \alpha_{N-1}K_A(x_{N-1},x_1)\alpha_1\triangle x \\ \frac{1}{2}\sum_{j=1}^{N-1} K_A(x_j,x_{N-j})\alpha_j\alpha_{N-j}\triangle x \end{bmatrix}$

For this matrix, we'll offer another perspective on how it can be derived. In our discretization scheme, the first "bin" starts at $x_0 = 2$ fL and goes to $x_1$. We then take the approach that all aggregates in that bin have the discrete value infinitesimally close to $x_1$. Under that rationale, no two clumps can aggregate and produce one of size $x_0$, so in the first row we can only reduce the density of size $x_0$ aggregates, hence the strictly negative term. Furthermore, any size $x_0$ group, which we represent in the matrix as size $x_1$ (since it is infinitesimally close to size $x_1$), can aggregate with any other size group up to size $x_{N-2}$, which we represent as size $x_{N-1}$ (the size of the group at the large end of the $x_{N-2}$ bin). Another subtlety to highlight is that for each size $x_i$ aggregate, we use the density for the $x_i$ bin. In other words, in terms of size, we represent the $x_i$ bin with size $x_{i+1}$, but we use the density of bin $x_i$, which is $\alpha_i$. To illustrate, when an aggregate from the size $x_0$ bin joins with an aggregate from the size $x_1$ bin, we describe the interaction as $\alpha_1 K_A(x_1,x_2)\alpha_2$, which creates an aggregate of not quite size $x_3$, thereby increasing the density of aggregates of bin size $x_2$. Finally, in our summation of the interactions between aggregates that increase the density of a certain bin, we multiply by 1/2 to account for double counting. For instance, consider row three, which accounts for the change in density of the size $x_2$ bin. The terms that increase that density are represented by the sum

$\frac{1}{2}\sum_{j=1}^2 K_A(x_j,x_{3-j})\alpha_j\alpha_{3-j}\triangle x = \frac{1}{2}\left(K_A(x_1,x_2)\alpha_1\alpha_2\triangle x + K_A(x_2,x_1)\alpha_2\alpha_1\triangle x\right)$

The code to build $\pi^N\mathcal{A}(b^N)$ follows.
% Build Projection of Aggregation Matrix
ee = 4.43e-4; % - Turbulent energy dissipation constant from paper
nu = 1.99e-6; % - Kinematic viscosity constant from paper
gammaA = 2.7e-15; % - parameter given in paper
% The sqrt of the ratio ee/nu is in 1/s, so I multiply by 60 to get it in
% 1/minutes
K_A = @(xx,yy) gammaA*sqrt(ee/nu)*60.*(xx.^(1/3) + yy.^(1/3)).^3;
Agg2 = zeros(N,1);
% This loop performs the second summation per my derivation
for m = 1:N-1;
    ka_b1 = K_A(x(m+1),x(2:N+1-m))'.*b(1:N-m);
    Agg2(m) = -b(m)*delx*sum(ka_b1);
end
% This loop performs the first summation per my derivation
for m = 2:N;
    for k = 2:m
        Agg2(m) = Agg2(m) + (.5*K_A(x(k),x(m-k+2))*b(k-1)*b(m-k+1)*delx);
    end
end

# Results

Running our code from time 0 to time 240 minutes, we generate Figure 2.

Figure 2: Density Distribution with Aggregation, Fragmentation, and Growth.

We can also compare our results in Figure 3 with those of Figure 4, which were generated in the paper. Here we see similar shapes and orders of magnitude, but we must note that we are using a different initial density function.

Figure 3: Bin Distributions from this Wiki's Simulation.

Figure 4: Bin Distributions from the Paper.

In a case like this, where we do not necessarily have a final "true" answer that verifies whether we have the correct results, we can run some "sanity checks." For instance, in Figure 5, we ran our code using only growth (no aggregation or fragmentation).

Figure 5: Density Distribution with Growth Only.

Here we set the initial condition to $b_0(x_0) = 1$ and $b_0(x_i) = 0$ for $i = 1, \dots, N-1$. Notice this simulation runs for 20000 minutes to verify that, starting with a density of one for bacteria of size 2 fL, we finish with a density of one for bacteria of maximum size 1000 fL.

# Conclusions

This Wiki sought to introduce the PDE model for the flocculation dynamics of bacteria in suspension that Bortz et al. presented in their paper, "Klebsiella pneumoniae Flocculation Dynamics", to describe in detail a means for generating the necessary matrices for solving the discretized PDE, and, most importantly, to present a MATLAB code that actually solves the PDE. Based on our derivations in the Analysis section and our results, along with our "sanity check" in the Results section, we are confident we have met our goals. Additionally, Bortz et al. provide a "Fit Sensitivity" analysis which helps indicate where improvements in the experiment can be made. While not addressed in this Wiki, we highly recommend reading that section for a solid example of the mathematical analysis behind design of experiments.

# Appendix

A copy of the presentation for this wiki can be seen here: presentation.

Full Code:

close all; clear all; clc
%%%%%%%%%%%%%%%%%%%% Set Up
syms y
N = 50; % - number of steps from x_min to x_max
        % If this changes, update the Flocdyn function as well
% Assumed min and max volumes with units (fL)
xmin = 2;
xmax = 1000;
delx = (xmax - xmin) / N; % - equispaced mesh size
x = [xmin:delx:xmax]; % - spatial discretization
% Initial density function with y representing the continuous spatial variable
% Using updated coefficients from Dr. Bortz (i.e. not the function in the paper)
% - if this changes, the expression in intinitcond2 needs to change accordingly
b0 = 89.828*exp(-0.0035303*y) + 1946.6*1e6*exp(-1.3159*y);
% initialize density function after pi^N application from x_min to the last
% step before x_max
b = zeros(1,N);
for s = 1:N;
    b(s) = intinitcond2(x(s),x(s+1))/delx;
end
%%%%%%%%%%%%% Solve ODE system
% timespan
T = 240;
tspan = [0,T];
% call the solver
[t,B] = ode45(@Flocdyn4,tspan,b);
% plot the results
M = length(t);
set(axes,'ZScale','log')
set(gca,'XTick',[1:10:N-10+1,N]);
set(gca,'XTickLabel',[x(1):10*delx:2+((N-10)*delx),2+((N-1)*delx)]);
set(gca,'YTick',[1,M]);
set(gca,'YTickLabel',[0,T]);
view([65 30]);
title('Aggregation, Fragmentation, and Growth')
hold on
mesh(B)
xlabel('Volume (fL)'); ylabel('Time (min)'); zlabel('Density (number of particles/fL)');
% Find total number of aggregates to compare against Figure 5 in the paper
% Initial time
binI(1) = sum(B(1,1:ceil(48/delx)))*delx - 3*B(1,1); % - vol's 5 to last vol less than 50
binI(2) = sum(B(1,ceil(48/delx)+1:ceil(98/delx)))*delx; % - vol's first above 50 to last below 100
binI(3) = sum(B(1,ceil(98/delx)+1:ceil(198/delx)))*delx; % - vol's 1st above 100 to last below 200
binI(4) = sum(B(1,ceil(198/delx)+1:ceil(298/delx)))*delx; % - vol's 1st above 200 to last below 300
binI(5) = sum(B(1,ceil(298/delx)+1:ceil(398/delx)))*delx; % - vol's 1st above 300 to last below 400
% Final time
binF(1) = sum(B(end,1:ceil(48/delx)))*delx - 3*B(end,1); % - vol's 5 to last vol less than 50
binF(2) = sum(B(end,ceil(48/delx)+1:ceil(98/delx)))*delx; % - vol's first above 50 to last below 100
binF(3) = sum(B(end,ceil(98/delx)+1:ceil(198/delx)))*delx; % - vol's 1st above 100 to last below 200
binF(4) = sum(B(end,ceil(198/delx)+1:ceil(298/delx)))*delx; % - vol's 1st above 200 to last below 300
binF(5) = sum(B(end,ceil(298/delx)+1:ceil(398/delx)))*delx; % - vol's 1st above 300 to last below 400
% Display the output
formatspec = 'Initial time:\t%0.5g\t%0.5g\t\t%0.5g\t\t%0.5g\t\t%0.5g\nFinal time:\t\t%0.5g\t%0.5g\t%0.5g\t%0.5g\t%0.5g';
str1 = sprintf('Bin Sizes:\t\t5-50\t\t50-100\t\t100-200\t\t200-300\t\t300-400');
str2 = sprintf(formatspec,binI,binF);
disp(str1)
disp(str2)
% plot these bins at initial and final times
bins = [50 100 200 300 400];
fig2 = figure;
set(axes,'YScale','log')
set(gca,'XTick',bins);
set(gca,'XTickLabel','5-50|50-100|100-200|200-300|300-400');
hold on
plot(bins,binI,'--o',bins,binF,'-.x');
axis([0 450 10^3 10^8]);
title('Distributions');
xlabel('Volume Ranges (in fL)');
ylabel('Total Number of Aggregates');
legend('Model Outputs Initial Time', 'Model Outputs Final Time');

Called Functions

• ODE Solver

function [db_dt] = Flocdyn4(t,b)
N = 50; % - number of steps from x_min to x_max
% Assumed min and max volumes with units (fL)
xmin = 2;
xmax = 1000;
delx = (xmax - xmin) / N; % - equispaced mesh size
x = [xmin:delx:xmax]; % - spatial discretization
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
% Build Growth Matrix
gammaG = 6.8e-4; % - parameter given
G = @(xx) gammaG.*xx.*(1-(xx./xmax));
Growthvec = zeros(N,N);
Growthvec(1,1) = -G(x(2));
for r = 2:N
    Growthvec(r,r-1) = G(x(r));
    Growthvec(r,r) = -G(x(r+1));
end
% The whole matrix has a factor of 1/delx
Growthvec = Growthvec./delx;
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
% Build Projection of Fragmentation Matrix
% The logic here is to build a matrix 'Frag' such that Frag*b is the
% projection of Fragmentation as a function of the projected b
gammaF = 6.6e-5; % - parameter given in paper
K_F = @(xx) gammaF.*((xx - xmin).^(1/3));
Frag = zeros(N,N);
for m = 1:N
    Frag(m,m) = -.5*K_F(x(m));
    for k = m+1:N
        Frag(m,k) = probdist(x(m),x(k))*K_F(x(k));
    end
end
Frag(1,1) = 0; % - only difference in the pattern for the diagonal
% The whole matrix has a factor of delx
Frag = Frag.*delx;
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
% Build Projection of Aggregation Matrix
ee = 4.43e-4; % - Turbulent energy dissipation constant from paper
nu = 1.99e-6; % - Kinematic viscosity constant from paper
gammaA = 2.7e-15; % - parameter given in paper
% The sqrt of the ratio ee/nu is in 1/s, so I multiply by 60 to get it in
% 1/minutes
K_A = @(xx,yy) gammaA*sqrt(ee/nu)*60.*(xx.^(1/3) + yy.^(1/3)).^3;
Agg2 = zeros(N,1);
% This loop performs the second summation per my derivation
for m = 1:N-1;
    ka_b1 = K_A(x(m+1),x(2:N+1-m))'.*b(1:N-m);
    Agg2(m) = -b(m)*delx*sum(ka_b1);
end
% This loop performs the first summation per my derivation
for m = 2:N;
    for k = 2:m
        Agg2(m) = Agg2(m) + (.5*K_A(x(k),x(m-k+2))*b(k-1)*b(m-k+1)*delx);
    end
end
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
% Form the system of ODEs
db_dt = Growthvec*b + Agg2 + Frag*b;

• Projecting Initial Conditions

function c = intinitcond2(x1,x2)
% This function will integrate the symbolic initial function from volume
% x1 to volume x2.
c = 19466000000000/(13159*exp((13159*x1)/10000)) - ...
    19466000000000/(13159*exp((13159*x2)/10000)) + ...
    809098694654873829376/(31798115529012125*...
    exp((254384924232097*x1)/72057594037927936)) - ...
    809098694654873829376/(31798115529012125*...
    exp((254384924232097*x2)/72057594037927936)) ;

# References

1. D. M. Bortz, T. L. Jackson, K. A. Taylor, A. P. Thompson, and J. G. Younger. Klebsiella pneumoniae flocculation dynamics. Bulletin of Mathematical Biology, 70(3):745-768, April 2008.
http://hal.in2p3.fr/in2p3-00090280
# New insight into the low-energy $^{9}$He spectrum

Abstract: The spectrum of $^9$He was studied by means of the $^8$He($d$,$p$)$^9$He reaction at a lab energy of 25 MeV/n and small center of mass (c.m.) angles. Energy and angular correlations were obtained for the $^9$He decay products by complete kinematical reconstruction. The data do not show narrow states at $\sim$1.3 and $\sim$2.4 MeV reported before for $^9$He. The lowest resonant state of $^9$He is found at about 2 MeV with a width of $\sim$2 MeV and is identified as $1/2^-$. The observed angular correlation pattern is uniquely explained by the interference of the $1/2^-$ resonance with a virtual state $1/2^+$ (limit on the scattering length is obtained as $a > -20$ fm), and with the $5/2^+$ resonance at energy $\geq 4.2$ MeV.

Document type: Journal articles

### Citation

M.S. Golovkov, L.V. Grigorenko, A.S. Fomichev, A.V. Gorshkov, V.A. Gorshkov, et al. New insight into the low-energy $^{9}$He spectrum. Physical Review C, American Physical Society, 2007, 76, pp.021605. ⟨10.1103/PhysRevC.76.021605⟩. ⟨in2p3-00090280⟩
https://www.physicsforums.com/threads/does-qm-ever-violate-classical-probability-theory.528330/
# Does QM ever violate classical probability theory?

1. Sep 8, 2011

### BWV

reading this http://uk.arxiv.org/abs/1004.2529 about supposed parallels between the mathematical structure of probability in QM and some problems in economics

question is: are there really any violations of classical probability theory, such as Pr(A) > Pr(A $\cup$ B), in QM? The supposed examples all seem to point to interference effects, which to my thinking are not violations of probability theory, as you could construct a similar situation with classical waves

2. Sep 8, 2011

### xts

Reading that - haven't you got the impression that it would be one more great example for Sokal and Bricmont? (http://en.wikipedia.org/wiki/Fashionable_Nonsense)

I haven't spotted any.

3. Sep 8, 2011

### BWV

Thanks, although to be fair the linked article is only attempting to use the mathematics of QM for other applications, rather than postulating some link between the actual physics and economics. But the whole formal structure of "disjunctions" in the paper does not make much sense to me - the whole phenomenon could just as easily be explained by cognitive errors of individuals responding to polls.

4. Sep 8, 2011

### xts

I would not call it 'attempting to use mathematics... for...'. I would rather call it: 'attempting to impress a social-science reader with mathematical symbols she does not understand'

5. Sep 8, 2011

### BWV

true enough, but not as egregious as using curvature tensors to explain arbitrage relationships http://arxiv.org/abs/hep-th/9710148

Have had a lot of fun at the office giving this paper to newly minted MBAs as "required study material"

6. Sep 8, 2011

### Fredrik

Staff Emeritus

I haven't really thought about it, but the probability measures in QM are (generalized) probability measures on a non-distributive lattice, not probability measures on σ-algebras. You can probably derive some "violations" from the non-distributivity, but I'm not going to think about that tonight.

I think your ">" should be a "≤". $P(A\cup B)=P(A\cup(B-A))=P(A)+P(B-A)\geq P(A)$

I haven't read the article, but the journal reference says "International Journal of Theoretical Physics", and one of the authors is Diederik Aerts, who I believe is a leading expert on quantum logic.

7. Sep 8, 2011

### xts

So try to read it! I am looking forward to seeing your defense of this babble: please try to use non-adjectival logic.

8. Sep 8, 2011

### Dickfore

$P(.): \mathcal{T} \rightarrow \mathbb{R}$ is a mapping from the field of all events $\mathcal{T}$ to the set of real numbers with the following properties (due to Kolmogorov):

Non-negativity:
$$\left(\forall A \in \mathcal{T}\right) \, P(A) \ge 0$$

Additivity:
$$A \cap B = \emptyset \Rightarrow P(A \cup B) = P(A) + P(B)$$

Normalization:
$$P(\Omega) = 1$$

where $\Omega$ is the certain event.

If you use the set relations:
$$A \cup B = A \cup (B - A), \; A \cap (B - A) = \emptyset$$
and
$$B = (A \cap B) \cup (B - A), \; (A \cap B) \cap (B - A) = \emptyset$$
and use the additivity of probability:
$$P(A \cup B) = P(A) + P(B - A)$$
$$P(B) = P(A \cap B) + P(B - A)$$
to eliminate $P(B - A)$, we get:
$$P(A \cup B) = P(A) + P(B) - P(A \cap B)$$

Then, if what you claim is true, it would imply:
$$P(A \cap B) > P(B), \; \forall A, B \in \mathcal{T}$$

Then, taking any $B \subseteq A$, we would have $A \cap B = B$, and we would arrive at a contradiction:
$$P(B) > P(B) \ \bot$$

9. Sep 8, 2011

### BWV

I was posting a violation of classical prob listed in the link.
The example given was polling data of categorizations - for example, respondents were asked something like: does x belong in category a, or in categories (a or b)? There are common cognitive biases that violate probability laws - the author seems to be saying that you need "non-classical" statistics to deal with them, which seems like a dubious proposition to me

10. Sep 8, 2011

### Dickfore

How does the author measure $P(A)$ or $P(A \cup B)$?

11. Sep 9, 2011

### Fra

I didn't read the paper, just skimmed the abstract and first page fast last night. Just some thoughts on the topic, less specific to the paper.

1) Their conceptual analogy between QM and decision theory is to me very intuitive and interesting. I myself use decision theory analogies a lot, since for me they are highly natural. We all make decisions every day; one does not need to study it formally to understand it.

2) It's not right to say that they violate classical probability - of course they don't. That's just mathematics. What they do claim is this: the cognitive decisions made by test subjects FAIL to be MODELLED by classical probability (classical logic), and are better modelled by quantum logic. This is not surprising to me. This is exactly on par with the fact that you can't explain quantum interference in terms of classical Bayesian probability and classical logic. But this we know already.

3) I think the whole point is that, whatever you call it, "cognitive errors" or something else, the task is to model it. To predict how a subject responds to input means to understand its decision process. The point is that whatever happens in the decision process is due to nature, so it does not make much sense to call them "errors" if "errors" are an integral part of the decision process.

The analogy with physics is: determine how a system (say a piece of matter) RESPONDS/ACTS on perturbation/input. Quantum mechanically we model this as the system "considering all paths etc," which are then "added" under interference effects; this is very analogous to a decision process! Determining how a subject responds/acts to input can of course be thought of as the subject choosing an action after some cognitive reflection, making a decision on how to rationally respond to the input.

Here a big difference that leads to a different insight is that decisions do not have the purpose of being what might be thought of as "logically correct"; instead, decisions are tactical decisions that are expected to yield maximum return. This is why decisions that turn out "wrong" in retrospect are sometimes still maximally rational. This is why one should be careful making statements about what in a decision process counts as "errors".

/Fredrik

12. Sep 9, 2011

### Fra

As I understand it, from trials of test subjects (humans) that get to answer verbal questions, phrased in terms of the logical operations "and" and "or". So the measurement = observing what human subjects "decide" to answer on given questions. So what they try to model is the human decision process.

All they found is that, contrary to what we think is "logical", the subjects fail to make decisions as per classical logic. That's their only point. If we call this an "error" or logical fallacy, that's partly right, but that's beside the point. The quest is still to model it, with "fallacies" and all.
I think the conjecture from cognitive psychology is that there exists a rational explanation for the fallacies, which involves how the decision process in the brain actually works - it apparently does not work with simple classical logic.

/Fredrik

13. Sep 9, 2011

### xts

So maybe you understand why Bell's theorem and the EPR paradox were mentioned in the introduction to the paper as important to it? Look a bit later - what an impressive mathematical formalism! If you find it interesting - you should follow the references to check the methodology they used to obtain data. Actually, they didn't do any experiments - they reinterpret 20-year-old data collected by J. Hampton (http://www.staff.city.ac.uk/hampton/PDF files/Hampton Disjunction1988b.pdf) on an impressive sample of 40 students, answering conjunctive questions and the disjunctive ones a week later. Hampton somehow forgot to test what the correlation between answers would be if exactly the same questions were asked twice. This whole "theory" is built upon the inconsistency of answers given by two persons.

Other data come from the experiment where 2 groups of 10 persons each were asked the disjunctive question, while another group of 20 was asked the conjunctive one. Quick exercise for you on "classical" probability theory: what is the probability that 25% or less of 20 people give one answer and 30% or more of 10 people give the same answer, if those are independent samples? That is the "evidence" (25% > 30%) upon which the whole "theory" is built.

Again: a 40-person sample with no estimation of expected errors, and a comparison made on statistics of 5/20 vs. 3/10.

I would be rather cautious using capitals here.

Great! But - again:
- the data they use do not justify rejection of the simplest theory: "human cognition follows Boolean logic";
- even if cognition does not follow classical logic, there is no relation with EPR, Bell, von Neumann, and Quantum Mechanics.

But "Quantum" makes papers more sexy! Not only papers - Calgonit Quantum - the best dishwasher tabs!

Last edited: Sep 9, 2011

14. Sep 9, 2011

### Fra

I'm not particularly interested in, nor impressed by, that paper as such, but I interpreted the OP as questioning what possibly reasonable relation there exists between decision theory and physics, in their logic. I think the connection is there, but this was known before; that paper doesn't seem to present anything new as far as I see.

You are possibly right that "it's sexy"; that's possibly a factor :) But that doesn't make the connection wrong.

I think it was mentioned since, in physics, Bell's theorem and the inequalities more or less serve as the no-go theorem for locally realistic theories, which essentially means theories explained by classical logic. Not that I care much, but they seem to have some idea about a no-go theorem for decision theory, from which you could tell whether it can be explained by classical logic or not. Not the paper itself, but the general connection. But I surely don't need that paper to know that.

In particular, the connection between decision processes and physical interactions is profound IMO. This you can even tell from most of my posts on here. But that paper isn't a physics paper, so I don't care; I just wanted to stand up for the connection.

/Fredrik

15. Sep 9, 2011

### Dickfore

The reason I was asking this is because, if they made a statistical estimate of the probabilities, then they can only give a confidence interval for the difference $P(A) - P(A \cup B)$.
If this interval happens to contain zero, then a hypothesis about a particular sign of this difference cannot be supported at the given level of confidence. As the sample size (40 students) is pretty small, I would assume the 95% confidence interval (standard in natural sciences) to be pretty wide. I do not intend to perform any calculation for this, however.

16. Sep 9, 2011

### xts

You should! The data used to build the "theory" presented in this paper are not significant even at the one-standard-deviation level.

17. Sep 9, 2011

### xts

Of course, I can't prove their idea is wrong. I may only say that the justification they provided is wrong. I may also say that the physical theories they quote are not related by any means to the subject they touch. Such 'parallels' are no more justified than a claim about the quantum nature of day and night (which are cyclic; but the light emitted by an excited atom is also cyclic, and is described by quantum mechanics).

There are millions of crap ideas you can't prove to be wrong. What I (and every sceptical person sticking to Occam's rule) expect from a scientific paper is that it not present "theories" which (however unfalsifiable) are not justified by any evidence, especially not by the evidence cited by the particular paper.

Last edited: Sep 9, 2011

18. Sep 9, 2011

### Fra

Very possible! I honestly didn't read it carefully, and I don't even care to. I have other justifications that are more than good enough for me, and I don't need more.

However my thinking is different: rather than trying to apply the QM formalism to decision theory, I think that DEEPER insights into how decisions work will help us find an intrinsic measurement theory that's helpful for QG and unification. This is why that paper itself doesn't interest me much. (Not enough to even read it properly.)

/Fredrik

19. Sep 9, 2011

### xts

Honestly: me too. I just browsed it to point out several absurdities and followed the bib-link to see what data they are based on. Wow!

Would you say a few words more about how human decision theory is related to Quantum Gravity?

20. Sep 9, 2011

### DrDu

In the theory of weak measurements there appear probabilities >1 and <0, so they violate classical probability explicitly.
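(As a quick illustration of the exercise posed in post #13: assuming both groups share some common true answer rate p, the joint probability of seeing 25% or less in the 20-person sample and 30% or more in the independent 10-person sample is straightforward to compute. The sketch below is our own, in Python; the value of p is a free assumption.)

from math import comb

def binom_tail(n, p, kmin):
    # P[X >= kmin] for X ~ Binomial(n, p)
    return sum(comb(n, k) * p**k * (1 - p)**(n - k) for k in range(kmin, n + 1))

# P[X20 <= 5] * P[X10 >= 3] for independent samples with a common rate p
for p in (0.20, 0.25, 0.30):
    p_low20 = 1 - binom_tail(20, p, 6)   # 25% or less of 20 people
    p_high10 = binom_tail(10, p, 3)      # 30% or more of 10 people
    print(p, round(p_low20 * p_high10, 3))

For p = 0.25 the product comes out near 0.29, i.e., the observed split is entirely unremarkable for independent samples of this size, which is xts's point.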
https://questions.examside.com/past-years/jee/question/a-cubical-block-of-side-30-cm-is-moving-with-velocity-2-ms-jee-main-physics-units-and-measurements-eepi1b94qwifbbjq
1 JEE Main 2016 (Online) 9th April Morning Slot

A cubical block of side 30 cm is moving with velocity 2 ms−1 on a smooth horizontal surface. The surface has a bump at a point O as shown in figure. The angular velocity (in rad/s) of the block immediately after it hits the bump is:

A 5.0
B 6.7
C 9.4
D 13.3

Explanation

Before hitting point O, angular momentum = mv $\times$ ${a \over 2}$

After hitting point O, angular momentum = ${\rm I}\omega$

$\therefore$ ${\rm I}\omega$ = ${{mva} \over 2}$ $\Rightarrow$ $\omega$ = ${{mva} \over {2{\rm I}}}$

${\rm I}$ = moment of inertia about the edge
= ${{m{a^2}} \over 6} + m{\left( {{a \over {\sqrt 2 }}} \right)^2}$
= ${{m{a^2}} \over 6} + {{m{a^2}} \over 2}$
= ${{2m{a^2}} \over 3}$

$\therefore$ $\omega$ = ${{mva} \over {2 \times {{2m{a^2}} \over 3}}}$ = ${{3v} \over {4a}}$ = ${{3 \times 2} \over {4 \times 0.3}}$ = 5 rad/s

2 JEE Main 2016 (Online) 10th April Morning Slot

Concrete mixture is made by mixing cement, stone and sand in a rotating cylindrical drum. If the drum rotates too fast, the ingredients remain stuck to the wall of the drum and proper mixing of ingredients does not take place. The maximum rotational speed of the drum in revolutions per minute (rpm) to ensure proper mixing is close to: (Take the radius of the drum to be 1.25 m and its axle to be horizontal)

A 0.4
B 1.3
C 8.0
D 27.0

Explanation

The ingredients stay stuck to the wall only if the wall can keep supplying the centripetal force everywhere, and the critical point is the top of the drum, where gravity alone must not exceed the requirement: $m\omega^2 r = mg$. So the maximum speed for proper mixing is $\omega = \sqrt{g/r} = \sqrt{9.8/1.25} = 2.8$ rad/s $\approx$ 26.7 rpm, i.e., close to 27.0.

3 JEE Main 2016 (Online) 10th April Morning Slot

In the figure shown, ABC is a uniform wire. If the centre of mass of the wire lies vertically below point A, then ${{BC} \over {AB}}$ is close to:

A 1.85
B 1.37
C 1.5
D 3

Explanation

Here AB = x and BC = y, and $\lambda$ = linear mass density.

Since the centre of mass is below point A, the horizontal distance of the centre of mass from B is xcos60o = ${x \over 2}$

$\therefore$ XCM = ${x \over 2}$ = ${{{m_1}{x_1} + {m_2}{x_2}} \over {{m_1} + {m_2}}}$

$\Rightarrow$ ${x \over 2}$ = ${{\left( {\lambda x} \right)\left( {{x \over 2}} \right)\cos {{60}^o} + \left( {\lambda y} \right)\left( {{y \over 2}} \right)} \over {\lambda \left( {x + y} \right)}}$

$\Rightarrow$ ${x \over 2}$ = ${{{{{x^2}} \over 4} + {{{y^2}} \over 2}} \over {x + y}}$

$\Rightarrow$ x2 + xy = ${{{x^2}} \over 2} + {y^2}$

$\Rightarrow$ x2 + 2xy $-$ 2y2 = 0

$\therefore$ x = ${{ - 2y \pm \sqrt {{{\left( {2y} \right)}^2} - 4.1\left( { - 2{y^2}} \right)} } \over {2.1}}$ = ${{ - 2y \pm \sqrt {12{y^2}} } \over 2}$ = $-$ y $\pm$ $\sqrt 3$y

${x \over y} \ne - \sqrt 3 - 1$, as ${x \over y}$ must be positive.

$\therefore$ ${x \over y}$ = $\sqrt 3 - 1$

$\Rightarrow$ ${y \over x}$ = ${1 \over {\sqrt 3 - 1}} \times {{\sqrt 3 + 1} \over {\sqrt 3 + 1}}$ = ${{\sqrt 3 + 1} \over 2}$ = ${{2.732} \over 2}$ = 1.366 $\simeq$ 1.37

4 JEE Main 2017 (Offline)

The moment of inertia of a uniform cylinder of length $l$ and radius R about its perpendicular bisector is $I$. What is the ratio ${l \over R}$ such that the moment of inertia is minimum?

A ${3 \over {\sqrt 2 }}$
B $\sqrt {{3 \over 2}}$
C ${{\sqrt 3 } \over 2}$
D 1

Explanation

The volume of the cylinder is V = $\pi {R^2}l$, so ${R^2} = {V \over {\pi l}}$.

We know the moment of inertia of a uniform cylinder of length $l$ and radius R about its perpendicular bisector is

$I = {{M{l^2}} \over {12}} + {{M{R^2}} \over 4}$

Putting ${R^2} = {V \over {\pi l}}$ into this equation:

$I = {{M{l^2}} \over {12}} + {{MV} \over {4\pi l}}$

Here $I$ is a function of $l$, as M and V are constant. $I$ is extremized (here, minimized) when ${{dI} \over {dl}}$ = 0.
$\Rightarrow {{Ml} \over 6} - {{MV} \over {4\pi {l^2}}} = 0$ $\Rightarrow {{Ml} \over 6} = {{MV} \over {4\pi {l^2}}}$ $\Rightarrow {l \over 6} = {{\pi {R^2}l} \over {4\pi {l^2}}}$ [ as ${V = \pi {R^2}l}$ ] $\Rightarrow {{{R^2}} \over {{l^2}}} = {4 \over 6}$ $\Rightarrow {l \over R} = \sqrt {{3 \over 2}}$
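The closed-form answers above are easy to sanity-check numerically; the following quick sketch (in Python) is our own and not part of the original solutions.

from math import sqrt, pi

# Q1: cube hitting a bump: omega = 3v/(4a) about the edge
v, a = 2.0, 0.30                     # m/s, m (side 30 cm)
print(3 * v / (4 * a))               # 5.0 rad/s

# Q2: drum mixing limit: omega = sqrt(g/r), converted to rpm
g, r = 9.8, 1.25
print(sqrt(g / r) * 60 / (2 * pi))   # ~26.7 rpm, closest to 27

# Q4: l/R minimizing I = M l^2/12 + M R^2/4 at fixed volume
print(sqrt(3 / 2))                   # ~1.2247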
https://gitlab.lrde.epita.fr/spot/spot/-/blame/9b95b697a5a1618795b359139d39a5228037d110/spot/twaalgos/minimize.hh
minimize.hh

// -*- coding: utf-8 -*-
// Copyright (C) 2009, 2010, 2011, 2012, 2013, 2014, 2015, 2016
// Laboratoire de Recherche et Développement de l'Epita (LRDE).
//
// This file is part of Spot, a model checking library.
//
// Spot is free software; you can redistribute it and/or modify it
// under the terms of the GNU General Public License as published by
// the Free Software Foundation; either version 3 of the License, or
// (at your option) any later version.
//
// Spot is distributed in the hope that it will be useful, but WITHOUT
// ANY WARRANTY; without even the implied warranty of MERCHANTABILITY
// or FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public
// License for more details.
//
// You should have received a copy of the GNU General Public License
// along with this program. If not, see <http://www.gnu.org/licenses/>.

#pragma once

#include <spot/tl/formula.hh>
#include <spot/twa/fwd.hh>

namespace spot
{
  /// \addtogroup twa_reduction
  /// @{

  /// \brief Construct a minimal deterministic monitor.
  ///
  /// The automaton will be converted into a minimal deterministic
  /// monitor. All useless SCCs should have been previously removed
  /// (using scc_filter() for instance). Then the automaton will be
  /// determinized and minimized using the standard DFA construction
  /// as if all states were accepting states.
  ///
  /// For more detail about monitors, see the following paper:
  /** \verbatim
      @InProceedings{ tabakov.10.rv,
        author    = {Deian Tabakov and Moshe Y. Vardi},
        title     = {Optimized Temporal Monitors for SystemC{$^*$}},
        booktitle = {Proceedings of the 10th International Conference
                     on Runtime Verification},
        pages     = {436--451},
        year      = 2010,
        volume    = {6418},
        series    = {Lecture Notes in Computer Science},
        month     = nov,
        publisher = {Springer-Verlag}
      }
      \endverbatim */
  /// (Note: although the above paper uses Spot, this function did not
  /// exist in Spot at that time.)
  ///
  /// \param a the automaton to convert into a minimal deterministic
  /// monitor
  /// \pre Dead SCCs should have been removed from \a a before
  /// calling this function.
  SPOT_API twa_graph_ptr minimize_monitor(const const_twa_graph_ptr& a);

  /// \brief Minimize a Büchi automaton in the WDBA class.
  ///
  /// This takes a TGBA whose language is representable by a Weak
  /// Deterministic Büchi Automaton, and constructs a minimal WDBA for
  /// this language. This essentially chains three algorithms:
  /// determinization, acceptance adjustment (Löding's coloring
  /// algorithm), and minimization (using a Moore-like approach).
  ///
  /// If the input automaton does not represent a WDBA language,
  /// the resulting automaton is still a WDBA, but it will not
  /// be equivalent to the original automaton. Use the
  /// minimize_obligation() function if you are not sure whether
  /// it is safe to call this function.
  ///
  /// Please see the following paper for a discussion of this
  /// technique.
  /** \verbatim
      @InProceedings{ dax.07.atva,
        author    = {Christian Dax and Jochen Eisinger and Felix Klaedtke},
        title     = {Mechanizing the Powerset Construction for Restricted
                     Classes of {$\omega$}-Automata},
        year      = 2007,
        series    = {Lecture Notes in Computer Science},
        publisher = {Springer-Verlag},
        volume    = 4762,
        booktitle = {Proceedings of the 5th International Symposium on
                     Automated Technology for Verification and Analysis
                     (ATVA'07)},
        editor    = {Kedar S. Namjoshi and Tomohiro Yoneda and Teruo
                     Higashino and Yoshio Okamura},
        month     = oct
      }
      \endverbatim */
  SPOT_API twa_graph_ptr minimize_wdba(const const_twa_graph_ptr& a);

  /// \brief Minimize an automaton if it represents an obligation property.
  ///
  /// This function attempts to minimize the automaton \a aut_f using the
  /// algorithm implemented in the minimize_wdba() function, and presented
  /// by the following paper:
  /** \verbatim
      @InProceedings{ dax.07.atva,
        author    = {Christian Dax and Jochen Eisinger and Felix Klaedtke},
        title     = {Mechanizing the Powerset Construction for Restricted
                     Classes of {$\omega$}-Automata},
        year      = 2007,
        series    = {Lecture Notes in Computer Science},
        publisher = {Springer-Verlag},
        volume    = 4762,
        booktitle = {Proceedings of the 5th International Symposium on
                     Automated Technology for Verification and Analysis
                     (ATVA'07)},
        editor    = {Kedar S. Namjoshi and Tomohiro Yoneda and Teruo
                     Higashino and Yoshio Okamura},
        month     = oct
      }
      \endverbatim */
  ///
  /// Because it is hard to determine if an automaton corresponds
  /// to an obligation property, you should supply either the formula
  /// \a f expressed by the automaton \a aut_f, or \a aut_neg_f, the
  /// negation of the automaton \a aut_f.
  ///
  /// \param aut_f the automaton to minimize
  /// \param f the LTL formula represented by the automaton \a aut_f
  /// \param aut_neg_f an automaton representing the negation of \a aut_f
  /// \param reject_bigger Whether the minimal WDBA should be discarded if
  /// it has more states than the input.
  /// \return a new tgba if the automaton could be minimized, \a aut_f if
  /// the automaton cannot be minimized, 0 if we do not know if the
  /// minimization is correct because neither \a f nor \a aut_neg_f
  /// were supplied.
  ///
  /// The function proceeds as follows. If the formula \a f or the
  /// automaton \a aut can easily be proved to represent an obligation
  /// formula, then the result of minimize(aut) is
  /// returned. Otherwise, if \a aut_neg_f was not supplied but \a f
  /// was, \a aut_neg_f is built from the negation of \a f. Then we
  /// check that product(aut, !minimize(aut_f)) and
  /// product(aut_neg_f, minimize(aut)) are both empty. If they
  /// are, then the minimization was sound. (See the paper for full
  /// details.)
  ///
  /// If \a reject_bigger is set, this function will return the input
  /// automaton \a aut_f when the minimized WDBA has more states than
  /// the input automaton. (More states are possible because of the
  /// determinization step during minimize_wdba().) Note that
  /// checking the size of the minimized WDBA occurs before ensuring
  /// that the minimized WDBA is correct.
  SPOT_API twa_graph_ptr
  minimize_obligation(const const_twa_graph_ptr& aut_f,
                      formula f = nullptr,
                      const_twa_graph_ptr aut_neg_f = nullptr,
                      bool reject_bigger = false);
  /// @}
}
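For orientation, here is a minimal usage sketch of minimize_obligation(). It is an illustration only, assuming a Spot 2.x installation; the formula string and the atomic proposition names are arbitrary choices:

```cpp
#include <iostream>
#include <spot/tl/parse.hh>
#include <spot/twaalgos/translate.hh>
#include <spot/twaalgos/minimize.hh>

int main()
{
  // A safety property (hence an obligation), translated to a TGBA.
  spot::formula f = spot::parse_formula("G(req -> X grant)");
  spot::translator trans;
  spot::twa_graph_ptr aut = trans.run(f);

  // Supplying f lets minimize_obligation() verify that the minimized
  // WDBA is equivalent to the input; for non-obligation properties the
  // input automaton is returned unchanged.
  spot::twa_graph_ptr min = spot::minimize_obligation(aut, f);
  if (min != aut)
    std::cout << "minimized to " << min->num_states() << " states\n";
  else
    std::cout << "automaton was not minimized\n";
  return 0;
}
```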
http://www.niser.ac.in/sms/course/m303-differential-equations
# Course

## M303 - Differential Equations

Course No: M303
Credit: 4
Prerequisites: M201, M205
Approval: 2014 UG-Core

Syllabus: Classification of differential equations: origin and applications, family of curves, isoclines. First order equations: separation of variables, exact equations, integrating factors, Bernoulli equations, separable equations, homogeneous equations, orthogonal trajectories, Picard's existence and uniqueness theorems. Second order equations: variation of parameters, annihilator methods. Series solutions: power series solutions about regular and singular points, method of Frobenius, Bessel's equation and Legendre's equation. Wronskian determinant, phase portrait analysis for 2nd order systems, comparison and maximum principles for 2nd order equations. Linear systems: general properties, fundamental matrix solution, constant coefficient systems, asymptotic behavior, exact and adjoint equations, oscillatory equations, Green's function. Sturm-Liouville theory. Partial differential equations: classification of PDEs, method of separation of variables, characteristic method.

Text Books:

1. S. L. Ross, "Differential Equations", Wiley-India Edition, 2009.
2. E. A. Coddington, "An Introduction to Ordinary Differential Equations", Prentice Hall of India, 2012.

Reference Books:

1. G. F. Simmons, S. G. Krantz, "Differential Equations", Tata McGraw-Hill Edition, 2007.
2. B. Rai, D. P. Choudhury, "A Course in Ordinary Differential Equations", Narosa Publishing House, New Delhi, 2002.
3. R. P. Agarwal, D. O'Regan, "Ordinary and Partial Differential Equations", Universitext, Springer, 2009.
http://mathoverflow.net/questions/57656/standard-model-of-particle-physics-for-mathematicians/58252
# Standard model of particle physics for mathematicians

If a mathematician who doesn't know much about the physicist's jargon and conventions had the curiosity to learn how the so-called Standard Model (of particle physics, including SUSY) works, where should (s)he look? References (if they exist!) written for a mathematical audience (so one can assume e.g. basic differential geometry, basic Lie group theory...) in a "mathematical style" with rigorous definitions, theorems and proofs would be appreciated.

- Google says: Introduction-Elementary-Particles-David-Griffiths –  Niyazi Mar 7 '11 at 13:01

@Niyazi: thank you for the note about Google. I don't think the question is off-topic though: we can gather various references for the benefit of the readers. [oh, I should probably communitywikify the question!] –  Qfwfq Mar 7 '11 at 13:25

QFT for mathematicians by Ticciati: books.google.com/books?id=ZtthVxxc3SkC –  Steve Huntsman Mar 7 '11 at 14:38

I find Ticciati's book merely okay. He leaves an awful lot of loose mathematical ends lying around. –  Todd Trimble Mar 7 '11 at 21:58

This is similar to this question at the Physics Exchange: theoreticalphysics.stackexchange.com/questions/222/… –  aleph0 Oct 31 '11 at 19:41

For the standard model, and in particular for its representation-theoretic aspects (which are crucial), I would refer you to the excellent recent article by John Baez and John Huerta from the Bulletin of the American Mathematical Society which can be found here: There are also references to other articles and books here that could lead you further. If you are interested more generally in quantum field theory and its description for mathematicians (where differential geometry plays a big role, in addition to representation theory), then there is the infamous 2-volume "Quantum Fields and Strings: A Course for Mathematicians", which is written by (mostly) mathematicians. It's not necessarily going to give you the correct physical insight, however. Here are the links: Volume 1 Volume 2 Other good possibilities are Freed-Uhlenbeck's "Geometry and Quantum Field Theory" from the PCMI (Park City) series, or the gargantuan "Mirror Symmetry" from the Clay Math monographs.

- Presumably these two volumes correspond to the Autumn and Spring semesters of the programme in Quantum Field Theory for mathematicians held at the Institute for Advanced Study during the academic year 1996–97. The notes are available at math.ias.edu/qft –  Chandan Singh Dalawat Mar 7 '11 at 14:52

@Chandan: Yes, they are exactly the lectures (probably greatly expanded) from that IAS program. –  Spiro Karigiannis Mar 7 '11 at 15:37

@Spiro: I don't think that the lectures have been greatly expanded, actually. The stuff Chandan has linked to is very close (if not identical) to the books. –  José Figueroa-O'Farrill Mar 7 '11 at 17:33

@José: Good to know. I wish I knew that before I bought the books. But still, PDFs onscreen cannot compare to physically holding a book in your hands. –  Spiro Karigiannis Mar 7 '11 at 17:37

Sorry, but I find those volumes fairly daunting, dispiriting, almost a cruel joke. A much gentler introduction would be most desirable. –  Todd Trimble Mar 7 '11 at 22:01

An excellent introduction for a mathematician without previous exposure to quantum field theory is the book by Gerald Folland: "Quantum Field Theory, a Tourist Guide for Mathematicians", ISBN: 978-0-8218-4705-3.
To understand the standard model, one first needs to learn about quantum field theory, since this is an example of QFT model, although a rather formidable one. I think you will have a hard time finding a more pedagogical introduction to this subject than Folland's book. - Another book in that vein which just came out is "quantum mechanics and quantum field theory, a mathematical primer" by Jon Dimock at Cambridge U. Press. Check it out! It does not go all the way to the standard model, but gets to QED at least. It is quite short so could be ideal as a first thing to read before going into the other more advanced references mentioned in this post. –  Abdelmalek Abdesselam Mar 22 '11 at 15:26 The Folland book mentioned here is quite good. One of the most straight-forward physics references might be Pierre Ramond's "Field Theory: A Modern Primer", but it's still a long ways from mathematical rigor. Some comments about the other books and topics discussed here: Weinberg's books are very good in their own way, but not really appropriate for mathematicians. The first one develops QFT not so much in terms of fundamental objects, but as a phenomenological framework forced upon us by principles such as special relativity and locality. The second one does gauge theory without using geometry, or coordinate-invariant notation, which is not a great idea for mathematicians. The third one is just about SUSY, concentrating on the parts of the subject not of much mathematical interest (the IAS volumes do the opposite). About the IAS volumes, one should keep in mind that the main point of that exercise was to try to explain to mathematicians Seiberg-Witten theory as understood by physicists in terms of N=2 supersymmetric QFT. This has nothing to do with the Standard Model, and from what I remember the Standard Model doesn't appear in those volumes. They do contain a truly spectacularly good set of lectures by Witten on QFT (but not written up by him...), aimed at getting to the Seiberg-Witten story. This involves some heavy-duty use of non-perturbative supersymmetric quantum field theory, of the sort that is of mathematical interest in building TQFTs. Besides not explaining the Standard Model, I don't think the IAS lectures really explain the use of supersymmetry to extend the Standard Model (the MSSM "minimal supersymmetric standard model"). This is a subject that has always been heavily advertised without much explanation of its significant problems, one of which is an extra 120 or so parameters. Initial results from the LHC rule out nearly half the most popular region in parameter space, chosen for simplicity and assuming that supersymmetry can be used to solve certain problems (dark matter particle, anomaly in measurement of muon magnetic moment). This still leaves the other half, as well as a lot of other less popular regions of parameter space. Over the next year or two I believe we'll see increasingly large regions of parameter space ruled out, but there is no way the LHC can rule out all of it. All it can do is change somewhat how physicists evaluate the likelihood of nature being described by conventional supersymmetric extensions of the Standard Model, a process which has started and will continue. - I'm neither a physicist nor a mathematical physicist but I've taken some recreational interest in learning about the subject and about QFT and the standard model in particular. 
What follows are the recommendations of a complete QFT novice and are admittedly somewhat off-topic, but hopefully will be useful to someone. I found both Feynman's "The Strange Theory of Light and Matter" and Griffiths's "Introduction to Elementary Particles" very helpful. These are not math books and Feynman's has essentially no details (or even equations). But the last half of Feynman's book (especially the last lecture) is good for giving an intuitive understanding of what the mathematics is trying to formalize (this is something I found maddening about the many mathematical accounts of QFT I've read). It is also appealing that you can read the book in a couple of afternoons (probably no other book on this topic can boast this). Griffiths's book fills in a number of blanks in Feynman's book. My main reaction to the mathematical treatments I've seen of QFT is that it is hard to gain intuition as to what the definitions and axioms are really intending to model. Both these books helped a lot in remedying this, at least for me. Read them first and then hunt down the mathematical treatments.

- For a straightforward and quick intro to the standard model, try "Groups and Symmetries: From Finite Groups to Lie Groups" by Kosmann-Schwarzbach. It's rigorous and does a good job motivating the standard model in its later chapters. You'll learn what a quark is from the mathematical point of view. In addition, Griffiths's textbook on elementary particle physics would be a good historical supplement. It took physicists many years and guesses to work out the standard model. The first few chapters of Griffiths's book read like a good mystery novel. Plus, you'll be a little more familiar with weird concepts like isospin, strangeness and color. Finally, for more talk related to particle physics, the classic text "Quarks and Leptons" by Halzen and Martin is really in-depth, but does assume a good grasp on physics. It does a good job of explaining concepts in the context of group theory. I would say, try to read the discussions in it rather than get bogged down in the physics.

- +1 for pointing this book out. I didn't know it -- but it looks like a nice undergraduate level book for physicists. –  José Figueroa-O'Farrill Mar 7 '11 at 18:49

The title of the book makes me very optimistic: I understand finite groups and their representations rather well and have a halfway decent working knowledge of Lie groups / algebras / representations thereof. Am I really so close to being able to understand modern particle physics? Why did no one tell me sooner?? –  Pete L. Clark Mar 7 '11 at 19:27

I don't know, Pete: did you ask? :-) –  Todd Trimble Mar 7 '11 at 21:55

@Pete, Todd: Every physical theory (and QFT is no exception) has both "kinematical" and "dynamical" aspects. Kinematics is basically representation theory, but dynamics is not. (The exception to this rule might be large chunks of two-dimensional conformal field theory; although not everyone might agree.) Hence with a good knowledge of representation theory, one can understand how to set up the QFT models. To extract physically meaningful results, though, requires studying the dynamics... and therein lies the rub, as they say. –  José Figueroa-O'Farrill Mar 7 '11 at 23:08

@Todd: well, not recently, no. I always thought that my "math: yes; physics: huh?" stance was pretty clear. –  Pete L. Clark Mar 9 '11 at 6:54

All of the above answers are good, but I think these might be closer to what you're looking for:

Quantum Field Theory I: Basics in Mathematics and Physics: A Bridge between Mathematicians and Physicists (v. 1) http://www.amazon.com/Quantum-Field-Theory-Mathematics-Mathematicians/dp/3540347623/ref=sr_1_3?s=books&ie=UTF8&qid=1299924659&sr=1-3

Quantum Field Theory II: Quantum Electrodynamics: A Bridge between Mathematicians and Physicists http://www.amazon.com/Quantum-Field-Theory-Electrodynamics-Mathematicians/dp/3540853766/ref=sr_1_1?s=books&ie=UTF8&qid=1299924659&sr=1-1

- These are not good books to learn from. I find these two too long and drawn out. They lack focus. They are missing an overall arc or plot and feel more like an amalgam of thousands of snippets written by different people with little regard to what the others were writing. These books may contain everything, but they also contain everything. On the plus side, I enjoy the historical annotations and stories. –  aleph0 Oct 31 '11 at 20:01

I make no pretense at understanding the standard model myself, but would like to mention Connes and Marcolli's "Noncommutative Geometry, Quantum Fields and Motives" (see http://www.alainconnes.org/en/downloads.php), chapters 9-19. This quote is from the introduction to chapter 9: "Our main purpose is to show that the full Lagrangian of the Standard Model minimally coupled to gravity, in a version that accounts for neutrino mixing, can be derived entirely from a very simple mathematical input, using the tools of noncommutative geometry. This will hopefully contribute to providing a clearer conceptual understanding of the wealth of information contained in the Standard Model, in a form which is both palatable to mathematicians and that at the same time can be used to derive specific physical predictions and computations."

- I thought that "Mathematical Aspects of Quantum Field Theory" by Edson de Faria and Welington de Melo was nicely written. Summary from the publisher: "Over the last century quantum field theory has made a significant impact on the formulation and solution of mathematical problems and has inspired powerful advances in pure mathematics. However, most accounts are written by physicists, and mathematicians struggle to find clear definitions and statements of the concepts involved. This graduate-level introduction presents the basic ideas and tools from quantum field theory to a mathematical audience. Topics include classical and quantum mechanics, classical field theory, quantization of classical fields, perturbative quantum field theory, renormalization, and the standard model. The material is also accessible to physicists seeking a better understanding of the mathematical background, providing the necessary tools from differential geometry on such topics as connections and gauge fields, vector and spinor bundles, symmetries, and group representations."

- Yes, although I only had a quick glance at it, it looks quite good. However, I was disappointed by section 8.1 about BPHZ renormalization. It does not even state what the theorem is. Just defining the renormalized amplitude of a diagram by Zimmermann's forest formula and saying that it converges does not fit the bill. –  Abdelmalek Abdesselam Mar 9 '11 at 19:36

A thin book that covers the basics of free and interacting fields in a mathematically rigorous way, at the level of formal power series in the interaction strength. Essentially proves the renormalizability of quantum Yang-Mills theories (a large part of the standard model) and the necessity for a Higgs field. Treats perturbative gravity as well. Since the emphasis is on the methods, not much time is spent exploring features specific to the standard model. But this book is really hard to beat when it comes to mathematical rigor among other books aimed at a similar audience.

- Where did you find this book available as a PDF? A quick Google hunt didn't turn it up. –  L Spice Sep 24 '12 at 15:18
https://discusstest.codechef.com/t/how-to-solve-gtree-tree-gcd/16513
# How to solve GTREE - "TREE GCD" ?

I tried to solve the above problem from LoC AUG 2017 and wrote a brute force solution (by doing a DFS from each node in its subtree) that fetched 30 pts, but I couldn't come up with an approach to solve this problem completely. Can someone explain how to solve this problem?

2 Likes

I didn't make any submissions on the problem… but I can write what I think is a correct solution.

So, first… flatten the tree into an array. Now the problem is to solve range queries where each query returns the number of non-coprime numbers.

First, compress the numbers so that they contain each of their prime factors only once. For example, 72 = 2 * 2 * 2 * 3 * 3; write it as 2 * 3, meaning that every unique prime factor is taken only once. You can easily show that a number will contain at most 6 factors in this form: just take the product of the first 7 distinct primes, which exceeds 1e5, the maximum size of a number.

Now, maintain lists of all the possible factors and the multiples of each that are present. This may seem to be N^2, since you are maintaining a 1e5-sized array of vectors, but each number can have at most 2^6 such factors, so in all there can be at most 2^6 * N elements.

Now, given a range, do a principle-of-inclusion-and-exclusion style count, and binary search for the numbers present in the range, using the aforementioned lists. The whole factorization can be done in N log N time using the smallest-prime-factor sieving technique, and the overall complexity should be about 2^6 * log^2 N per query approximately (this may actually be log N, I forgot).

This should work within the given time limit. I hope this was a correct solution.

I had a similar idea but got stuck at the inclusion and exclusion thing… (:

@rajashri_basu can you please explain the last paragraph a little more? I checked a few solutions and they seem to be using the Möbius function, which I don't know about. I'm reading up on it and if I am able to understand the solution I'll explain it here.

2 Likes

How do we apply binary search in your last part? What is the property that is monotonic here? I understood the inclusion-exclusion part: suppose our current subtree node has {2,3,5} as its unique prime divisors; then we use inclusion and exclusion to find how many numbers in that range are divisible by at least one number from our list, like how_many_divisible_by(2) + how_many_divisible_by(3) + how_many_divisible_by(5) - how_many_divisible_by(2*3) - how_many_divisible_by(2*5) - how_many_divisible_by(3*5) + how_many_divisible_by(2*3*5) in that range. And you are saying that we can find the value of how_many_divisible_by(x) by using binary search in that range, but I don't understand how to apply the binary search.

There is no need for an array and binary search: a simple DFS works, using an array with the number of multiples present in the nodes above. I have a better approach.

The key problem is the following: given several integers, where the integer x appears c_x times, and some fixed integer m, we are asked how many of the integers are coprime to m, i.e. \sum_{i=1}^n c_i [ \gcd (i, m) = 1 ].

By Möbius inversion, we get \sum_{i=1}^n c_i [ \gcd (i, m) = 1 ] = \sum_{d|m} \mu(d) \sum_{i=1}^{\lfloor n/d \rfloor} c_{id}.

Here we use a little technique when we DFS the tree: for each vertex v and every divisor d of val_v, maintain s(d) = \sum_{i=1}^{\lfloor n/d \rfloor} c_{id}, and thus we can get s(d) in O(1) each time.
As above, an O(n \max_{k} \sigma(k)) algorithm is exhibited, where \sigma(k) denotes the number of divisors of k. For more details, please see my submission (https://www.codechef.com/viewsolution/15101950).

9 Likes

@liouzhou_101 Can you explain how you converted the problem to: "Given several integers, with integer x appearing c_x times, and some fixed integer m, how many integers are coprime to m?"

For every vertex x in the tree, we are asked how many vertices y in the subtree of x satisfy \gcd (val_y, val_x) > 1. This can be converted to asking how many vertices y in the subtree of x satisfy \gcd (val_y, val_x) = 1 (the original problem's complement), where val_x is the m, and the val_y are the integers x.

@liouzhou_101 How did you get to \sum_{d|m} \mu(d) \sum_{i=1}^{\lfloor n/d \rfloor} c_{id}? I have already read this and this but am not able to figure out how you converted it via Möbius inversion. Can you please explain that formula?

1 Like

@skyhavoc

\begin{aligned} & \quad \sum_{i=1}^n c_i [ \gcd(i, m) = 1 ] \\ & = \sum_{i=1}^n c_i \epsilon(\gcd(i, m)) \\ & = \sum_{i=1}^n c_i \sum_{d|\gcd(i, m)} \mu(d) \\ & = \sum_{i=1}^n c_i \sum_{d|m} \mu(d) [d|i] \\ & = \sum_{d|m} \mu(d) \sum_{i=1}^n c_i [d|i] \\ & = \sum_{d|m} \mu(d) \sum_{i=1}^{\lfloor n/d \rfloor} c_{id} \\ \end{aligned}

2 Likes

@liouzhou_101 s(d) appears to be cnt[d] in your code, but s(d) is the number of times any multiple of d occurs, whereas cnt[d] is just the number of times d appears. Am I getting something wrong?

1 Like

@meooow cnt[d] is used to compute the number of times any multiple of d occurs; however, it is not that quantity itself. In my code:

```cpp
void dfs(int x, int pre) {
    size[x] = 1;
    ans[x] = 0;
    for (auto d : divisor[a[x]]) {
        ++ cnt[d];
        ans[x] -= u[d]*cnt[d];
    }
    for (auto y : v[x]) {
        if (y == pre) continue;
        dfs(y, x);
        size[x] += size[y];
    }
    for (auto d : divisor[a[x]])
        ans[x] += u[d]*cnt[d];
    ans[x] = size[x]-1-ans[x];
}
```

Note that ans[x] = ∑ u[d] * ( (cnt[d] after the DFS of the subtree of x) - (cnt[d] before the DFS of the subtree of x) ), which implies that ( (cnt[d] after the DFS of the subtree of x) - (cnt[d] before the DFS of the subtree of x) ) is what computes s(d).

Note that when we run dfs(x, pre), before the loop "for (auto y : v[x])", for each divisor d of a[x], what we do is "cnt[d]++". The difference of cnt[d] before and after the loop is exactly what we want to calculate, i.e. s(d).

If still confused, think of the DFS order of the tree and trace it by hand in order to understand what the procedure exactly does.

1 Like

I did understand why you were subtracting first and adding up after the children were visited, because of the DFS order, but I guess I didn't fully grasp what was happening. I thought about it a bit more and it's clear to me now, thanks!

1 Like
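The following is a compact, self-contained C++ sketch of the DFS counting described in this thread. It is not the linked submission: the sieve, the bounds, and names such as spf, mu, cnt and sqfreeDivs are choices made for this illustration.

```cpp
#include <bits/stdc++.h>
using namespace std;

const int MAXV = 100000;            // maximum node value
int spf[MAXV + 1], mu[MAXV + 1];    // smallest prime factor, Moebius function
long long cnt[MAXV + 1];            // cnt[d]: nodes seen so far whose value d divides

int n;
vector<vector<int>> adj;
vector<int> val;
vector<long long> sz, ans;

void sieve() {
    mu[1] = 1;
    for (int i = 2; i <= MAXV; ++i)
        if (!spf[i])
            for (int j = i; j <= MAXV; j += i)
                if (!spf[j]) spf[j] = i;
    for (int i = 2; i <= MAXV; ++i) {
        int v = i, m = 1; bool squarefull = false;
        while (v > 1) {
            int p = spf[v], c = 0;
            while (v % p == 0) { v /= p; ++c; }
            if (c > 1) squarefull = true;
            m = -m;
        }
        mu[i] = squarefull ? 0 : m;
    }
}

// Squarefree divisors of v (at most 2^6 of them for v <= 1e5).
vector<int> sqfreeDivs(int v) {
    vector<int> d{1};
    while (v > 1) {
        int p = spf[v];
        while (v % p == 0) v /= p;
        int k = d.size();
        for (int i = 0; i < k; ++i) d.push_back(d[i] * p);
    }
    return d;
}

// cnt[] only ever grows, so the before/after difference taken here
// isolates the contribution of the subtree, exactly as explained above.
// (A real submission would also guard against deep recursion.)
void dfs(int x, int parent) {
    sz[x] = 1;
    long long coprime = 0;
    vector<int> divs = sqfreeDivs(val[x]);
    for (int d : divs) { ++cnt[d]; coprime -= mu[d] * cnt[d]; }
    for (int y : adj[x])
        if (y != parent) { dfs(y, x); sz[x] += sz[y]; }
    for (int d : divs) coprime += mu[d] * cnt[d];
    // Vertices in the subtree (excluding x) NOT coprime to val[x].
    ans[x] = (sz[x] - 1) - coprime;
}

int main() {
    sieve();
    n = 3;
    adj.assign(n + 1, {}); val.assign(n + 1, 0);
    sz.assign(n + 1, 0); ans.assign(n + 1, 0);
    val[1] = 6; val[2] = 4; val[3] = 9;   // gcd(6,4)=2, gcd(6,9)=3
    adj[1] = {2, 3}; adj[2] = {1}; adj[3] = {1};
    dfs(1, 0);
    cout << ans[1] << "\n";               // prints 2
    return 0;
}
```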
https://dmtcs.episciences.org/4031
## Bharathi, Arpitha P. and De, Minati and Lahiri, Abhiruk - Circular Separation Dimension of a Subclass of Planar Graphs

dmtcs:4031 - Discrete Mathematics & Theoretical Computer Science, November 3, 2017, Vol. 19 no. 3

Authors: Bharathi, Arpitha P. and De, Minati and Lahiri, Abhiruk

A pair of non-adjacent edges is said to be separated in a circular ordering of vertices if the endpoints of the two edges do not alternate in the ordering. The circular separation dimension of a graph $G$, denoted by $\pi^\circ(G)$, is the minimum number of circular orderings of the vertices of $G$ such that every pair of non-adjacent edges is separated in at least one of the circular orderings. This notion was introduced by Loeb and West in their recent paper. In this article, we consider two subclasses of planar graphs, namely $2$-outerplanar graphs and series-parallel graphs. A $2$-outerplanar graph has a planar embedding such that the subgraph obtained by removal of the vertices of the exterior face is outerplanar. We prove that if $G$ is $2$-outerplanar then $\pi^\circ(G) = 2$. We also prove that if $G$ is a series-parallel graph then $\pi^\circ(G) \leq 2$.

Source: oai:arXiv.org:1612.09436
DOI: 10.23638/DMTCS-19-3-8
Volume: Vol. 19 no. 3
Section: Graph Theory
Published on: November 3, 2017
Submitted on: January 20, 2017
Keywords: Computer Science - Discrete Mathematics, Mathematics - Combinatorics
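As a small worked illustration of the separation condition (this example is constructed directly from the definition above, not taken from the paper): in the circular ordering $(a, b, c, d)$,

$$\{a,b\},\{c,d\}:\; a, b, c, d \;\text{(no alternation, separated)}; \qquad \{a,c\},\{b,d\}:\; a, b, c, d \;\text{(endpoints alternate, not separated)}.$$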
https://transactions.ismir.net/articles/10.5334/tismir.87/
# Evaluating an Analysis-by-Synthesis Model for Jazz Improvisation

## Abstract

This paper pursues two goals. First, we present a generative model for (monophonic) jazz improvisation whose main purpose is testing hypotheses on creative processes during jazz improvisation. It uses a hierarchical Markov model based on mid-level units and the Weimar Bebop Alphabet, with statistics taken from the Weimar Jazz Database. A further ingredient is chord-scale theory to select pitches. Second, as there are several issues with Turing-like evaluation processes for generative models of jazz improvisation, we decided to conduct an exploratory online study to gain further insight while testing our algorithm in the context of a variety of human-generated solos by eminent masters, jazz students, and non-professionals in various performance renditions. Results show that jazz experts (64.4% accuracy) but not non-experts (41.7% accuracy) are able to distinguish the computer-generated solos amongst a set of real solos, but with a large margin of error. The type of rendition is crucial when assessing artificial jazz solos because expressive and performative aspects (timbre, articulation, micro-timing and band-soloist interaction) seem to be equally if not more important than the syntactical (tone) content. Furthermore, the level of expertise of the solo performer does matter, as solos by non-professional humans were on average rated worse than the algorithmic ones. Accordingly, we found indications that assessments of the origin of a solo are partly driven by aesthetic judgments. We propose three possible strategies to establish a reliable evaluation process that mitigates some of the inherent problems.

How to Cite: Frieler, K. and Zaddach, W.-G., 2022. Evaluating an Analysis-by-Synthesis Model for Jazz Improvisation. Transactions of the International Society for Music Information Retrieval, 5(1), pp. 20–34. DOI: http://doi.org/10.5334/tismir.87

Published on 03 Feb 2022. Accepted on 24 Nov 2021. Submitted on 26 Feb 2021.

## 1. Introduction

Attempts to generate music with the computer are nearly as old as the computer itself, starting with Lejaren Hiller's ILLIAC Suite in 1959 (Hiller and Isaacson, 1959). The underlying motivations are manifold, ranging from artistic experiments to proofs-of-concept, educational applications such as practising aids,1 and commercially viable products for royalty-free music.2

The model evaluated in this paper is based on another motivation, the development of a psychological model of jazz improvisation. There is a vast literature on jazz improvisation research, but the model by Pressing (1984, 1988), developed already in the 1980s, is still state-of-the-art even though it remains largely untested. Proving the adequacy of models for jazz improvisation is problematic as the required experimental procedures are difficult. The main reason is that jazz improvisers seem to have limited conscious access to their cognitive and motor processes during improvisation. Furthermore, experimental studies on jazz improvisation must rely on production paradigms which are hard to evaluate and suffer from a sampling problem, because it is seldom possible to have improvisers play a large number of solos in a controlled lab setting. The external validity of these experiments is further often limited, as these experiments very often have to use computer-generated (e. g., with Band-in-a-Box) or prerecorded backing tracks (e.
g., Aebersold play-alongs), which offer no possibilities for interaction. Confronting improvisers with their generated solos and prompting self-reflective comments has proved to be one of the most fruitful approaches so far (Norgaard, 2011), but this still has limitations, as jazz improvisers often just do not know in retrospect why they played certain notes or phrases. Another promising and recent approach is corpus-based jazz studies (Owens, 1974; Pfleiderer et al., 2017) that aim at finding phenomenological patterns in jazz improvisations of (mostly eminent) jazz players. Psychological models of jazz improvisation seldom make sufficiently hard predictions to allow them to be tested in experiments and with corpora. Here, generative models can help, following a general analysis-by-synthesis approach. We decided that our generative model of jazz improvisation should explicitly incorporate high-level knowledge of jazz improvisation while being at the same time probabilistic in order to model inaccessible factors and genuinely probabilistic aspects of an improvisation. One obvious model choice is hierarchical Markov models, which are employed in the current study. The paper is organized as follows. After some general consideration about evaluation methods in Section 2 for generative models of jazz improvisation and an overview of related work in Section 3, we present in Section 4 our hierarchical Markov model which is based on mid-level analysis, the Weimar Bebop Alphabet, a simple rhythm model, and chord-scale theory. In Section 5, we report on our exploratory experiment to evaluate factors influencing Turing-like solo assessment by a group of experts and non-experts. Finally, we discuss in Section 6 our findings and propose ideas for the evaluation of generative models of jazz improvisation. ## 2. Evaluation of Generative Models Analysis-by-synthesis approaches are only fruitful if the models can be evaluated with respect to their adequacy. As the models serve as a “laboratory” to test various hypotheses about improvisational processes, there is a certain demand for testing often and reliably, in order to employ a continuous improvement process while ruling for or against certain modeling options. There are two main approaches to evaluation, which are complementary and not exclusive. First, generated solos can be evaluated with a Turing-like test. Second, they can be judged against a corpus of “accepted” jazz solos, e. g., in terms of melodic features, dramaturgy, or pattern content. In this study, we decided to pursue only the first approach, due to length constraints, and also because we think that the objective approach is principally less severe and powerful, as objective features are not likely to capture the full content of a solo with all the intricate correlations between musical dimensions. Simple first and second statistics for pitch and rhythm, such as the MGEval variant used in Madaghiele et al. (2021), will not suffice for our needs, as these are partly fulfilled already by construction. In order to evaluate a generative model using Turing-like tests with a panel of judges or raters, one has to solve several problems, which mostly relate to performance aspects. First to note is that a Turing-test needs to be designed as a signal-detection experiment, in which computer and human-generated solos are to be assessed as either human or computer-generated. This necessitates a standardization of the solos in order to minimize confounds. 
A judgement of computer-generated solos based only on absolute criteria is possible, but it would still need baseline measurements on human-generated solos for a complete picture. As our (and most other) algorithms generate score-like representations, one could either let expert raters judge the score directly, or the scores could be transferred into sounding music for assessment. The first approach is seldom used as it has the disadvantage that it demands considerable skills from the raters and that imagining music from scores will always be inferior to actual listening to music. Thus, in general, a listening experiment seems preferable. However, the transfer process introduces additional degrees of freedom in design and, most importantly, confounds judging the actual musical content and performative aspects, which results in assessment bias. To produce sounding music from generated scores, one can either use machine or human-generated renditions of solos, either with or without a musical context. As jazz solos without accompaniment rarely make musical sense, using an accompaniment seems mandatory. Letting human players perform the generated solo is an intriguing approach, but a very time and resource consuming method that does not scale well. The most common and most practical solution is to use machine-generated renditions over machine-generated (or prerecorded) backing tracks. A machine-generated solo rendition can be either deadpan MIDI (a score “as is”) or post-processed by humans (or further algorithms). Tweaking can be applied to performance parameters (e. g., microtiming, dynamics) and to the musical surface. The easiest solution of using deadpan MIDI data has the disadvantage that non-expressive performance might be strongly associated with a computerized, non-human performance which might likely result in rating bias as well. Another issue is the proper selection of generated solos, as creativity is mostly conditioned on selection processes. This applies to both human- and computer-generated solos. A fair evaluation process can only be based on some form of random selection, but there are still free parameters, e. g., the choice of underlying tunes. On the human-generated side, the question is whether solos from masters or professionals or solos from non-professionals or students should be used. The choices here are likely to influence the evaluation process and need careful considerations. Finally, there is also the question of whether to use expert or non-expert listeners for a human panel. Expert listeners are more likely to identify computer-generated jazz solos simply by having more exposure to jazz and its implicit rules. As such, an expert panel might provide a more severe test for the algorithmic model. On the other hand, jazz experts might also be (negatively) biased towards computer-generated jazz solos, as this goes against central points of ethics and aesthetics of jazz. Non-experts, on the other hand, while probably not being as sensitive to details as experts, might be less biased in this regard. In light of all these considerations, we thus felt the need that, before conducting a large scale evaluation of our algorithm, we had to address aspects of the evaluation procedure itself first, which will be the second focus of the paper. ## 3. Related Work ### 3.1 Generation of Jazz Solos There have been quite a few attempts to artificially create jazz solos, mostly of the monophonic type. 
The employed methods range from Markov models (Pachet, 2003, 2012) to rule-based models (Johnson-Laird, 1991, 2002; Quick and Thomas, 2019), (probabilistic) grammars (Keller et al., 2013; Keller and Morrison, 2007), genetic algorithms (Biles, 1994; Papadopoulos and Wiggins, 1998), agent-based approaches (Ramalho et al., 1999) and artificial neural networks (Toiviainen, 1995; Hung et al., 2019; Wu and Yang, 2020). Some of these models do not generate solos in the narrow sense, but, for instance, walking bass lines or lead sheets. Most systems work offline, whereas some are interactive and real-time. A standardized and rigorous evaluation of these models is, however, often lacking. The algorithms were most often only evaluated informally or qualitatively, either by the authors themselves or a small panel of experts. Recently, evaluation using objective features was also employed (Yang and Lerch, 2020). One notable exception is the recent work by Wu and Yang (2020), who used a rather extensive subjective listening test, which however was not the main focus of the study. The evaluation algorithm is similar to the proposed algorithm here, as it is also based on the Weimar Jazz Database and also incorporates mid-level unit annotations. However, as the model is a Transformer-variant, the implementation is quite different. The Impro-Visor program by Keller and co-workers3 is open source software and freely available. It seems that they moved recently from their original probabilistic grammars to RNN techniques such as LSTM and GAN-based networks for generation of solos (Trieu and Keller, 2018; Keller et al., 2013). The JIG system by Grachten (2001) has some similarities to the model proposed here, as it also uses some form of abstract pitch motif, derived from Narmour’s implication-realization model (Narmour, 1990). It has also two modes of note generation, called ‘melodying’ and ‘motif’, which are, however, more short-ranged than the mid-level units used in our model. The most recent additions are BebopNet (Haviv Hakimi et al., 2020) and MINGUS (Madaghiele et al., 2021), which are both deep learning Transformer models. BebopNet is based on a large collection of saxophone solos, whereas MINGUS is based on the Weimar Jazz Database and the Nottingham Database. The results of BebopNet can be listened to online and are partly convincing, particularly in longer lines, which might be due to the fact that some real bebop patterns are reproduced by the model. The authors of MINGUS found a similar level of performance of their system to BebopNet. ### 3.2 Evaluation of Artificial Jazz Solos To the best of our knowledge, no systematic work on how to set up musical Turing tests for artificial jazz solos has been undertaken so far. In a recent paper by Yang and Lerch (2020), a strong point was made about the quite sorry state of formal evaluation methods for generative models of music. They acknowledge the power of Turing-like tests, which are not without problems though, but mainly advocate methods of objective evaluation which boil down to comparing feature distributions between original (training) and generated sets of music, similar to the idea of a “critic” proposed by Wiggins and Pearce (2001). Objective evaluation of generated music has the advantage that it can be unequivocally defined and thus reproducibly measured, but in our view, it can only be a preliminary or auxiliary step. 
The problem is that it has to rely on an arbitrary (though often obvious) set of features, while the space of possible features is basically infinite. At the least, good care and extended domain knowledge are necessary to devise such a system. Because of the non-trivial but crucial interaction of musical features (pitch, harmony, rhythm, meter, articulation, dynamics and micro-timing), it can hardly be expected that conformance along single feature dimensions alone can guarantee the correct conditional distributions. On the other hand, this way, true Big-C Creativity (Kaufman and Beghetto, 2009) might be precluded. However, as the Pro-c creativity problem is not solved yet, this might not be a relevant problem for the near future.

## 4. The Model

### 4.1 Overview

An overview of the model can be found in Figure 1. The general aim is to generate a note sequence over a given chord sequence for a prespecified number of choruses, i. e., cycles of chord sequences. This is achieved by using a hierarchical model with mid-level units (MLUs) at the top level (Section 4.3). After selecting an MLU, a sequence of Weimar Bebop Alphabet (WBA) atoms (Section 4.4) is generated with a first-order Markov model conditioned on the selected mid-level unit. The pitches of these WBA atoms are then "realized" using the given chord context with the help of chord-scale theory (Section 4.6), whereas the rhythm is generated based on a first-order Markov model of duration classes, likewise conditioned on the containing mid-level unit. The number of tones in an MLU is predetermined by drawing from the length distributions conditioned on the type of mid-level unit. Even though in the original mid-level annotation system a musical phrase can contain more than one MLU, we make here the simplifying assumption that an MLU always constitutes a single phrase. After a phrase is generated in this way, a short gap or break between phrases is inserted by randomly drawing from the gap duration distribution, whereupon the whole process is repeated until the specified number of choruses is generated. All involved probability distributions are estimated by the corresponding empirical distributions from the Weimar Jazz Database.

Figure 1: Overview of the generative model.

### 4.2 The Weimar Jazz Database

The Weimar Jazz Database is a high-quality database of annotated transcriptions of monophonic jazz solos performed by eminent jazz performers from the US-American jazz canon. It covers nearly the entire history of jazz (1925–2009) and the most important tonal jazz styles, without claiming full representativeness. See Table 1 for a quick overview.4

Table 1: Short overview of the Weimar Jazz Database.

| Attribute | Value |
|---|---|
| Solos | 456 |
| Performers | 78 |
| Top performers | Coltrane (20), Davis (19), Parker (17), Rollins (13), Liebman (11), Brecker (10), Shorter (10), S. Coleman (10) |
| Styles | Traditional (32), swing (66), bebop (56), cool (54), hardbop (76), postbop (147), free (5) |
| Instruments | ts (158), tp (101), as (80), tb (26), ss (23), other (68) |
| Time range | 1925–2009 |
| Tone events | 200,809 |
| Phrases | 11,802 |
| Mid-level units | 15,402 (containing 5.2 WBA atoms on average) |
| WBA atoms | 80,123 (average length: 2.3 intervals) |

The WJD contains an extensive set of annotations such as metrical annotations, articulation, loudness, chords, forms, and metadata, as well as manually annotated phrases and mid-level units.
### 4.3 Mid-level Units

Mid-level analysis is a content-based qualitative annotation system based on the idea that performers use short-range action plans, which cover a duration of about 2–3 seconds. The system was originally developed for jazz piano improvisations (Lothwesen and Frieler, 2012) and then modified and extended for monophonic solos (Frieler et al., 2016). Interviews with pianists showed that professional jazz players indeed used similar action plans (Schütz, 2015). Based on an iterative qualitative analysis of solos, a system with nine main types (and 38 subtypes) of playing ideas or design units, called mid-level units, was devised and codified. Subsequently, the entire WJD was annotated manually, with a good inter-rater agreement on section boundaries and acceptable agreement on unit labels. It could further be shown that the different MLU types indeed differ statistically on various aspects and that styles and performers differ in their application of mid-level units (e. g., Frieler, 2018, 2020). The most common MLUs are line and lick MLUs, covering about 75 % of all MLUs as well as 75 % of solo durations (Frieler et al., 2016). Lick MLUs are shorter and rhythmically more diverse, whereas line MLUs are rhythmically more uniform and generally longer. For the present model, only lick and line MLUs are used. See Section S1.1 in the supplementary information for a more detailed description of the two main types.

### 4.4 Weimar Bebop Alphabet

In an effort to find a more compact description of melodies, the Weimar Bebop Alphabet (WBA) was developed (Frieler, 2019). The guiding principle was based on identifying short melodic units that make sense in their own right, either by musical conventions or instrument rehearsal practices such as running scales and arpeggios. The system was devised based on expert knowledge and phenomenological intuition. These units are called WBA atoms and are thought to serve as basic building blocks for melodic construction. As such, they can be applied in principle to any melody, not only to jazz solos. It represents a classification system for interval sequences with six main and nine subcategories. See Table 2 for an overview. The most basic categories are repetitions, scales (diatonic and chromatic), arpeggios, and trills (short oscillations of two tones). More specific is the class of 'approaches', where a target tone is 'encircled' by two other tones, one higher and one lower. Finally, there is a miscellaneous category, dubbed 'X' atoms, with the subcategory of links, which are X atoms of length 1, e. g., only one interval. The current form of the WBA should be viewed as preliminary, as, for instance, the miscellaneous category is the largest. See Frieler (2019) for more details.

Table 2: Overview of WBA atoms.

| Type | Subtype/Symbol | Description |
|---|---|---|
| Scales | Diatonic (D) | Diatonic scale |
| Scales | Chromatic (C) | Chromatic scale |
| Approaches | F | Two intervals approaching a target pitch with a direction change (e. g., –2 +1) |
| Trills | T | Two alternating pitches |
| Arpeggios | Simple (A) | Sequence of thirds |
| Arpeggios | Jump (J) | Sequence of intervals larger than a third |
| Repetitions | R | |
| X atoms | X | Miscellaneous category |
| X atoms | Links | X atoms of length 1 |

Using a priority list, an interval sequence can be segmented uniquely into a sequence of non-overlapping atoms of constant direction (see Frieler (2019) for details). Hence, a WBA atom is unambiguously described by a short symbol for its category, a direction (ascending, descending, or horizontal), and a length (number of intervals).
However, a single atom can have different realizations in terms of pitches, except for tone repetitions, which are the only unequivocal category. For instance, an ascending diatonic scale atom of length 3 can have the realizations [+2, +2, +2], [+2, +2, +1], [+2, +1, +2], [+1, +2, +2], and [+1, +2, +1] (but not [+1, +1, +1], as this would be a chromatic scale atom). This under-determination is the main workhorse for the generative model.

In Frieler (2019), it was shown that the WBA atoms empirically follow at most a first-order Markov model. This is an interesting result, which undermined one of the original goals of the WBA (finding a significantly more compact description of melodies), but it simplifies the generative model. The average length of a WBA atom is, with 2.3 intervals, rather small. This means that every 2 to 3 intervals (3 to 4 tones) a change in character and/or direction takes place, which is indicative of the high variability and complexity of jazz improvisations and is an important part of jazz melodic construction.

### 4.5 Rhythm Model

The rhythm model is currently a very simple one, basically just a first-order Markov model of inter-onset interval (IOI) classes. For this, inter-onset intervals are classified into five distinct categories (very short, short, medium, long, and very long), which roughly correspond to metrical durations of sixteenth, eighth, quarter, half, and whole notes. For the current model, IOI class transition probabilities for lick and line MLUs are sampled and used to generate inter-onset intervals. The current implementation of the model uses only 4/4 time with a sixteenth note resolution, so the IOI classes are here, in fact, identical to the aforementioned metrical durations.

In order to get better results, two further tweaks were applied. For line MLUs, the durations were fixed to be either sixteenth or eighth notes, with equal probability. Markov sampling from the true IOI distribution for line MLUs produced too many rhythmically inhomogeneous and syncopated rhythms, which shows that rhythm is not adequately modeled by this simple approach. One reason for this is the interaction of rhythm and metrical constraints, as well as the fact that micro-timing somewhat distorts the distribution, since the metrical annotation in the WJD was done algorithmically. Likewise, for lick MLUs, the generated onsets needed some smoothing. Here, a simple resampling was used until the number of on-beat events was twice as high as the number of off-beat events.

### 4.6 Chord-Scale Theory

Chord-scale theory was used to fit the WBA atoms to the current chord context. Chord-scale theory was initiated by Russell (1953) and became one of the most popular harmonic theories in jazz education (Cooke and Horn, 2002; Aebersold, 1967; Coker, 1987). Tonal jazz improvisation is based on chords, and style rules demand selecting pitches that sufficiently match the underlying chords. In order to achieve this goal, chord-scale theory provides a simple mapping from chords to scales which can be used by a player. For instance, a Cmaj7 chord implies (amongst others) either an Ionian (major) or a Lydian scale, which differ only in the fourth scale degree, which is raised in Lydian (♯11). Playing only the pitches from either scale over a Cmaj7 chord will more or less guarantee a sufficient fit. Another advantage of chord-scale theory is that it is a unified framework for tonal and modal jazz alike.
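To make the chord-scale mechanism concrete before the discussion continues, here is a small, self-contained R sketch of how a diatonic atom could be realized over a chord. The chord-to-scale mapping follows the spirit of Table 3 below, but the entries and the uniform scale choice are simplifications, not the model's exact values.

chord_scales <- list(
  Cmaj7 = list(root = 0, scales = list(ionian = c(0, 2, 4, 5, 7, 9, 11))),
  C7    = list(root = 0, scales = list(mixolydian = c(0, 2, 4, 5, 7, 9, 10),
                                       altered    = c(0, 1, 3, 4, 6, 8, 10)))
)

realize_diatonic <- function(chord, start_pitch, n_steps) {
  cs    <- chord_scales[[chord]]
  scale <- cs$scales[[sample(length(cs$scales), 1)]]  # the real model uses weighted sampling
  pcs   <- (scale + cs$root) %% 12                    # pitch classes of the chosen scale
  pitches <- start_pitch
  p <- start_pitch
  for (i in seq_len(n_steps)) {                       # ascending diatonic atom of length n_steps
    repeat { p <- p + 1; if ((p %% 12) %in% pcs) break }  # step to the next higher scale tone
    pitches <- c(pitches, p)
  }
  pitches
}
set.seed(2)
realize_diatonic("C7", 60, 3)  # e.g. 60 62 64 65 (C D E F, if Mixolydian is chosen)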
There is some discussion about the adequacy, benefits, and disadvantages of chord-scale theory for jazz research and practice (Ake, 2002), but this is outside the scope of this paper. It suffices to say that chord-scale theory is an approximation for modeling tonal choices in certain jazz styles, which seems to be useful for our present purposes. In chord-scale theory, the mapping of chords to scales is not unique; typically, several suitable scales are available to the player, depending on style, taste, and tonal context. For instance, a min7 chord can be mapped to a Dorian or an Aeolian scale, depending on whether it is interpreted as a ii7 or a i7 chord in the current context, and whether the tune is considered modal or tonal in character. For the generative model, we used a simple and fixed mapping of chords to scales with fixed probabilities of being chosen. No attempts were made to find the most appropriate scale for a chord, which would require harmonic analysis and often external style information. This is left for future extensions of the model.

Upon realizing a WBA atom over a chord with a certain starting pitch, first, a suitable scale is randomly selected by simple weighted sampling from the set of allowed scales in Table 3. Then the pitches are generated based on the current WBA value, which has the form of an interval sequence. For diatonic and arpeggio atoms, occasionally, a link atom is inserted if the current starting pitch is not part of the chord or the corresponding scale. This is the only way link atoms are used in the generative model. The rationale behind this is that a jazz performer might also use links to get from an unsuitable pitch to a suitable one before executing an arpeggio or a diatonic scale, as these should (normally) match the current chord. For a more detailed description of how the atoms are realized, see Section S2.2 in the supplementary information.

Table 3 Chord-scales used in the current model. Scale contents are given as pitch class vectors with 0 representing the root of the chord.

| Chord Type | Scales | Scale content |
| --- | --- | --- |
| maj, maj7 | Ionian | [0, 2, 4, 5, 7, 9, 11] |
| min, min7 | Dorian | [0, 2, 3, 5, 7, 9, 10] |
| 7 | Mixolydian | [0, 2, 4, 5, 7, 9, 10] |
| | Major Blues | [0, 2, 3, 4, 7, 9] |
| | Mixolydian ♯11 | [0, 2, 4, 6, 7, 9, 10] |
| | Altered Scale | [0, 1, 3, 4, 6, 8, 10] |
| m7b5 | Locrian | [0, 1, 3, 5, 6, 8, 10] |
| | Phrygian | [0, 1, 3, 5, 7, 8, 10] |
| o, o7 | Octatonic Scale | [0, 2, 3, 5, 6, 8, 9, 11] |

### 4.7 Implementation

The main algorithm is depicted in Algorithm 1. It consists of two nested loops: the outer one generates phrases, and the inner one generates pitch and rhythm sequences. Pitch sequences are generated based on a first-order Markov model of WBA atoms, conditioned on the MLU, and rhythm sequences are added to the pitch sequences, also conditioned on the MLU (see Section 4.5). Input to the algorithm is a lead sheet, i. e., a chord sequence with metrical information, taken either from the iRealPro corpus or extracted from the WJD chord annotations. The model is currently constrained to 4/4 time and a sixteenth note tatum resolution, but these restrictions could easily be lifted. Further input is a pitch range, in order to avoid running out of the instrument's range. This is ensured by filtering out pitches that are out of range, and by adjusting WBA directions if the current pitch is 30% below the upper or 30% above the lower pitch range limit.
This is a crude but effective simulation of actual playing practice, which results in the common “regression to the mean” in melodic motion (Von Hippel and Huron, 2000). The main loop runs until the number of specified choruses is reached, using the onset ticks (in sixteenth units) as the main control condition. The generated tone events are finally converted to a proprietary CSV representation, which is then converted to MIDI or Lilypond scores using the MeloSpySuite/GUI software from the Jazzomat project (Pfleiderer et al., 2017). One example of a generated solo, which was also used in the evaluation (Algorithm-1-Original, see below), can be found in Figure 2.

Figure 2 Example of a generated solo over an F-blues chord sequence, used in the evaluation (Algorithm-1-Original).

## 5. Evaluation

As discussed in Section 2, we decided to address some general problems of fair evaluation of generated jazz solos using Turing-like tests instead of starting with a large-scale evaluation of the model right away. The results of our exploratory experiment will inform the design of these in the future. Furthermore, we did not explore the possibility of objective evaluations in this paper. We leave this also for the future.

### 5.1 Preparation of Stimuli

We produced a set of stimuli along the following dimensions:

• Good vs. bad algorithmic solos. We generated a set of 50 solos containing one chorus of a simple jazz blues in F (over the chord sequence ∥F7 | B♭7 | F7 | Cmin7 F7 | B♭7 | B♭7 | F7 | F7 | Gmin7 | C7 | F7 | F7∥). After listening to the results, we selected one of the most convincing solos and one of the least convincing ones.

• Tweaked vs. raw algorithmic solos. For the most convincing artificial solo, we prepared two versions. One was just the solo as generated by the algorithm; for the other one, we tweaked a few notes that seemed suboptimal to our expert ears and manually added microtiming variations and dynamics for a more realistic performance.

• Human vs. algorithmic solos. We selected three kinds of human solos to be compared to the algorithmic ones. The first set of human solos was taken at random from the WJD, using the F blues subset. This contained one solo each by Charlie Parker (“Billie’s Bounce”), Miles Davis (“Vierd Blues”), and Sonny Rollins (“Vierd Blues”). Next, we took four solos from a former (unpublished) study, in which jazz students had improvised solos to an F blues play-along. The students had different levels of expertise (beginner, intermediate, advanced, and graduated). Thirdly, the authors recorded one solo each over the backing track used for all stimuli. The first author (AUT1) played a single line solo on a digital piano, and the second author (AUT2) played a solo on electric guitar.

• Original vs. MIDI-fied solos. We used the original recordings of the authors and also produced two MIDI-fied versions by either using the recorded MIDI from the piano solo or an automatically converted version of the guitar solo, using the audio-to-MIDI converter of the DAW plug-in Melodyne (editor version). All MIDI versions were rendered with a tenor sax sample over the same backing track with piano, bass, and drums (see Section S5 in the supplementary information). Only the two original solos by the authors did not use the tenor sax sound and thus played the role of a baseline condition.

For all candidate solos, only the first chorus was used and rendered at a tempo of 120 BPM to create our stimuli, which lasted for approximately 25 seconds each (see Table 4).
Table 4 Stimuli used for the evaluation. The column Performance Type gives specifics of the interpretation. Deadpan MIDI: fully-quantized MIDI without dynamics; MIDI with microtiming: MIDI with semiautomatically added microtiming (swing); Audio: recorded audio; Converted audio-to-MIDI: recorded audio converted to MIDI with Melodyne, keeping microtiming and dynamics; Recorded MIDI: human-played MIDI with microtiming and dynamics.

| Id | Solo ID | Generator | Performance Type | Solo Sound |
| --- | --- | --- | --- | --- |
| 1 | Algorithm-1-Original | WBA-MLU-Algorithm | Deadpan MIDI | tenor sax |
| 2 | Algorithm-1-Improved | WBA-MLU-Algorithm/AUT2 | MIDI with microtiming | tenor sax |
| 3 | Algorithm-2-Original | WBA-MLU-Algorithm | Deadpan MIDI | tenor sax |
| 4 | WJD-Sonny Rollins | Sonny Rollins (“Vierd Blues”) | MIDI with microtiming | tenor sax |
| 5 | WJD-Miles Davis | Miles Davis (“Vierd Blues”) | MIDI with microtiming | tenor sax |
| 6 | WJD-Charlie Parker | Charlie Parker (“Billie’s Bounce”) | MIDI with microtiming | tenor sax |
| 7 | Student-Beginner | Beginner | MIDI with microtiming | tenor sax |
| 8 | Student-Intermediate | Intermediate | MIDI with microtiming | tenor sax |
| 11 | Author-Original | AUT2 | Audio | e-guitar |
| 12 | Author-MIDI | AUT2 | Converted audio-to-MIDI | tenor sax |
| 13 | Author-Original | AUT1 | Audio | piano |
| 14 | Author-MIDI | AUT1 | Recorded MIDI | tenor sax |

### 5.2 Method

We prepared an online survey using the SoSci-Survey platform with the 14 stimuli as the core items. Each solo had to be assessed on a questionnaire containing 10 Likert-like items (cf. Table S2 for a complete list) with answer options ranging from 1 = “completely disagree” to 7 = “completely agree” for all items except items 8 and 9, which had their own range but with the same polarity. The rationale was to present items that reflect typical qualitative judgments of jazz solos without using deep jazz-specific terminology. The sequence of solos was randomized for each participant. We collected basic demographic data (age, gender) and asked three self-assessment questions pertaining to jazz and music expertise (“I am a jazz expert”, “I am a jazz fan”, “I am a music expert”) using the same 7-option Likert scale. We also asked the participants for textual feedback on the experiment itself. Ethics approval was not required by our host institution for this study. Participants gave their informed consent before starting the experiment.

### 5.3 Participants

By advertising on social media and approaching friends and colleagues, we obtained a convenience sample of 41 participants (7 female; mean age 27.0, SD 9.9), of which 29 were identified as jazz experts based on the sum of responses to the three expertise items being greater than or equal to 12. The overall median value on the item “I am a jazz fan” was 7 (“completely agree”). In conclusion, we can say that the sample contains a large share of jazz experts (on different levels), while all self-identified as jazz fans.

### 5.4 Results

#### 5.4.1 Solo characteristics

As the scores on all items (except item 10) showed various strong correlations, we reduced the variables by using a factor analysis with three factors and oblimin rotation, which explained 85% of the variance (see Section S2.1 in the supplementary information). The factors were named MUSICALITY (convincing, liking, expressive, swing, inventive, expertise), COMPLEXITY (virtuosic, complex), and RHYTHM_EXACT (rhythm exact). The number of factors was determined using standard methods (KMO, scree plot). The factor RHYTHM_EXACT will not be considered further, as it is not very informative for our aims here. MUSICALITY ratings can be found in Figure 3(A).
Algorithmic solos were ranked very low, except for the enhanced solo. Two student solos were rated even lower than all algorithmic solos. As expected, the most natural-sounding solo, one of the author solos (AUT2-Original), was rated highest. Rankings by experts and non-experts are more or less similar. Note that the ranges of values are quite large for most of the solos. For the COMPLEXITY factor, as seen in Figure 3(B), two of the student solos and the first algorithmic solo in two versions were among the top 5. Two other student solos and the Davis and the Rollins solos were ranked lowest. The MIDI-fied versions of the authors' solos were consistently rated lower here than the original versions, despite identical musical content. Again, the ranges of values are quite large.

Figure 3 Boxplot of MUSICALITY (A) and COMPLEXITY (B) values for all solos, separately for rater expertise. Left panels: jazz experts; right panels: non-experts; blue: algorithmic solos; brown: author (MIDI) solos; yellow: student solos; green: author (original) solos; violet: WJD solos.

#### 5.4.2 Recognition of origin

We counted an algorithmically generated solo as correctly (and rather confidently) recognized if the answer on Item 10 (“Do you think the notes of this solo were generated by a computer algorithm?”, variable artificial) had a value of 5 or larger; otherwise, we counted it as unrecognized. Conversely, a human-generated solo was counted as correctly recognized if it received a value of 3 or less on Item 10, and as unrecognized otherwise. The recognition accuracy of a solo is then defined as the proportion of responses counted as correctly recognized. The results can be found in Table 6, separately for experts and non-experts and for human and algorithmically generated solos. A more detailed display for all 14 stimuli can be found in Figure 5 and in Table S4 in the supplement.

Only four solos have a recognition accuracy whose 95 % confidence intervals do not cross the random baseline of 50 % (AUT2-Original, WJD-Sonny Rollins, Student-Intermediate, Algorithm-2-Original). For the non-jazz experts, this is only true for one solo (AUT2-Original). Some solos are consistently misclassified (AUT1-MIDI and Student-Graduated). Algorithm-2-Original, the unconvincing solo, is successfully identified as computer-generated with a mean accuracy of 69 %, though this comes mostly from the jazz experts; the non-experts are basically guessing here. Algorithm-1-Original performs better, with an accuracy of 59 %. The improved version Algorithm-1-Improved is able to fool the raters with an overall mean accuracy of 44 %; experts are better, with an accuracy of 53 %, slightly above the random baseline, whereas non-experts are completely at a loss, with an accuracy of only 18 %. Interestingly, one of the student solos (Student-Graduated) is the one most clearly misclassified. The MIDI-fied versions of the author solos are regarded as computer-generated by experts and non-experts alike. This might be an effect of the different articulation and playing approaches that result from rendering piano and guitar solos with a tenor saxophone sound, e. g., due to differing attack times. They are also rated much worse on the other factors compared to the original versions. This result tells a cautionary tale. For the WJD solos, Sonny Rollins’s solo is most clearly identified as human-generated, whereas the other WJD solos are rated much more ambiguously.
For the Charlie Parker solo, this might come from the fact that the backing track had a slightly different chord sequence than the original solo and the tempo was perceptibly slowed down from the original. The original version of Miles Davis’s solo also has slightly different chords, but the most important factor might be that the very spacious solo of Davis works because the piano player fills in the spaces, which is, of course, not the case for the MIDI-fied version used here.

Figure 4 Boxplot of liking values for sources of solos, separately for rater expertise. Left boxes: jazz experts; right: non-experts.

Figure 5 Recognition accuracy of all 14 stimuli by expertise level. Left: all; middle: jazz experts; right: no jazz experts. Error bars are 95% confidence intervals of proportions.

The pooled accuracies for expert/non-expert raters and human/algorithmic solos can be found in Table 6. Experts had an accuracy of 64 % for correctly identifying algorithmic and 54 % for human solos, which is only slightly above chance. For non-experts, the aggregated values are even below chance level, 41 % for algorithmic and 44 % for human solos, which means that non-experts tend to assume algorithms at work even when this was not the case. This might be a result of the explicit framing of the survey as a “Turing Test” for jazz, which might have raised the baseline expectation for computer-generated solos, while, in fact, only a minority (3 out of 14) were actually computer-generated.

#### 5.4.3 Relationship of identification and characteristics

A correlation analysis of the two factors MUSICALITY and COMPLEXITY with artificial can be seen in Table 5. MUSICALITY and COMPLEXITY are strongly positively correlated, whereas MUSICALITY and artificial are strongly negatively correlated. As liking (item 8) is the strongest contributing factor to MUSICALITY, we checked whether recognition accuracy might be related to liking. Interestingly, liking and recognition accuracy are strongly positively correlated with r = .86 for human-generated solos, but strongly negatively correlated for computer-generated solos with r = –.66. This suggests that the participants judge solos as human-generated on the basis of their liking of the solo, probably based on a bias against artificially generated music (Moffat and Kelly, 2006).

Table 5 Pearson’s correlation coefficients for MUSICALITY (MUS), COMPLEXITY (COMP), and artificial (Item 10). All correlations p ≤ .001.

| | MUS | COMP | artificial |
| --- | --- | --- | --- |
| MUS | 1.00 | 0.44 | –0.59 |
| COMP | 0.44 | 1.00 | –0.18 |
| artificial | –0.59 | –0.18 | 1.00 |

Table 6 Recognition accuracy for computer- and human-generated solos by experts and non-experts.

| Solo Generator | Expertise | Accuracy |
| --- | --- | --- |
| Algorithm | Expert | .644 |
| | Non-expert | .417 |
| Human | Expert | .536 |
| | Non-expert | .447 |

#### 5.4.4 Relationship of recognition and liking

We further checked whether and to what extent the solos were aesthetically pleasing for the participants, by looking at the ratings on item 8 (“How did you like the solo excerpt? (not at all–very much)”). First, we conducted a mixed linear regression (package lmerTest for R) to see whether jazz experts and non-experts differ in their overall liking scores. Indeed, the non-experts had slightly higher liking scores (β = 0.385, df = 531, p = .018), with experts having a mean of 3.72 and non-experts one of 4.1, so non-experts seem to be more forgiving. There were also stark differences between subjects in the ratings. The participants’ mean liking values ranged from 1.29 to 5.43, with a mean of 3.8, a median of 4.0, and a standard deviation of 0.95.
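A sketch of how these mixed models look in R with lmerTest, as used in the paper; the data frame d and its column names (liking, expertise, source, participant) are hypothetical placeholders for the long-format survey data (one row per participant-by-solo rating) available on the OSF site.

library(lmerTest)

# Model 1: do experts and non-experts differ in overall liking?
m1 <- lmer(liking ~ expertise + (1 | participant), data = d)
summary(m1)  # reported above: beta = 0.385, p = .018

# Model 2 (used below): liking by solo source plus expertise,
# with a random intercept per participant.
m2 <- lmer(liking ~ source + expertise + (1 | participant), data = d)
summary(m2)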
The differences in liking with respect to the source of the solo (author original, author MIDI, WJD, student, algorithm) can be found in Figure 4. As expected, the original solos were liked best (mean liking = 5.14), followed by the master solos from the WJD (mean liking = 4.37), the MIDI-fied author solos (mean liking = 3.47), the algorithmic solos (mean liking = 3.37), and the students’ solos (mean liking = 3.29). A linear mixed model with source and jazz expertise as fixed effects and participant as random effect showed that there is no significant difference between experts and non-experts if individual preferences are taken into account, and that only original (β = 1.77, SE = 0.23, t(503) = 7.7, Pr(>|t|) < .0001) and WJD solos (β = 1.00, SE = 0.206, t(503) = 4.85, Pr(>|t|) < .0001) were liked significantly more than algorithmic, MIDI-fied author, and student solos. Actually, two of the student solos were liked less than all three algorithmic solos, whereas the other two were liked considerably better. The mean liking values for all sources did not reach the neutral level of 4, except for the WJD and author originals. The liking ratings of the author solo AUT2 dropped from 5.69 for the original recording to 3.51 for the MIDI-fied tenor sax version, a huge loss of d = 2.19 on a 7-point scale. For AUT1, the drop was from 4.59 to 3.44 (d = 1.15). The difference, particularly for AUT2, is striking and clearly demonstrates the influence of a natural-sounding performance on the aesthetic appreciation (of jazz solos).

For the variable inventive, the ranking of solos is very similar to that of liking, with the exception that Algorithm-1-Improved is ranked fourth. In terms of source or origin, the algorithmic solos were tied with the MIDI-fied author solos (mean inventive = 3.77 for both), whereas the rest of the ranking was the same as for liking (originals: 4.42, WJD: 3.85, students: 3.4).

## 6. Discussion

### 6.1 Evaluation of our Model

We presented a novel algorithm to generate monophonic jazz solos over a given chord sequence. We evaluated the algorithm with a Turing-like listening test with 41 jazz-affine participants. We could show that a hand-selected, edited, and expressively rendered version of one of the generated solos (Algorithm-1-Improved) could fool the panel, as it was slightly more often considered to be human-generated than computer-generated. Moreover, the recognition accuracies verged on the chance level, so we can state that even the jazz experts were not entirely sure about its origin. On the other hand, the unedited, deadpan version of the same solo was successfully recognized as computer-generated, at least by the experts, but still with considerable uncertainty. The second, “bad” solo was even more clearly recognized as computer-generated, mainly by the jazz experts in the panel, whereas the non-experts were not completely sure here either. These results were anticipated, but they have to be viewed in perspective, as even solos by eminent jazz masters such as Charlie Parker and Miles Davis were frequently judged as computer-generated when presented as deadpan MIDI over a computer-generated backing track. Only the original audio solo by the second author was unequivocally considered to be human-generated, which was expected, as this solo was specifically included as a baseline.
However, the second original solo (AUT1-Original) was not recognized as human-generated as often, and the MIDI-fied version of the second author’s solo (AUT2-MIDI) was likewise considered mainly to be computer-generated. This clearly demonstrates that an expressive, “natural” performance is crucial for human judgements in Turing-like music tests. Furthermore, a comparison of algorithmically generated solos with those of jazz greats might also not be the most important test, as the computer-generated solos were recognized correctly more often than three of the four student solos. This seems to make sense, as devising a successful algorithm that is able to invent masterly solos seems a bit too much to ask for; a comparison with less proficient performers might thus be fairer.

The performance of our new algorithm is promising, as it is just in its early stages, merely a proof-of-concept, and uses a relatively simple model. This model is, however, quite powerful, because it is based on empirical and analytical results on jazz improvisation and powered by a rather large database of solo transcriptions. There are many possible avenues for improvement. The most obvious weakness of the model is the rhythm model, particularly for lick MLUs. The simplified rhythm model for line MLUs works rather well. For further improvements on the rhythm model, one would need an in-depth analysis of rhythm and meter and their interaction in jazz solos, similar to the WBA study, which is unfortunately still missing. Also, the model does not allow for incorporating pre-learned patterns, which basically all jazz improvisers use (Norgaard, 2014). We plan to improve the algorithm alongside further empirical research on jazz improvisation in an iterative process.

### 6.2 Evaluation of Evaluation

We also explored the problem space of Turing-like evaluation of computer-generated jazz solos. Along our basic distinctions, we found the following results.

• Good vs. bad algorithmic solos. The worse (as judged by the authors) algorithmic solo was evaluated less favorably by our respondents in all aspects, and was also much more often correctly identified as computer-generated.

• Tweaked vs. raw algorithmic solos. Even a little tweaking of the musical surface can improve the assessment of a solo. One possible reason is that even small or spurious signals of “non-authenticity” can be picked up by humans to make their judgement.

• Human vs. algorithmic solos. Solos by jazz masters were generally judged better than solos by jazz students and algorithmic solos, but some of the algorithmic solos performed clearly better than student solos. On the other hand, deadpan synthesised solos of masters were rated worse than “natural”-sounding solos by the authors.

• Original vs. MIDI-fied solos. Performance seems crucial, as the MIDI-fied versions of the original author solos, rendered with different instrumental sounds, were rated much worse and were also less often recognized as human-generated solos.

Besides this, we found generally low inter-rater agreement and large variances of judgements, and saw clear differences between jazz experts and non-jazz experts. Finally, we found evidence of a bias against computer-generated music, in the sense that participants seemed to expect computer-generated solos to be worse and were more likely to assume non-human origin if they did not like the solos. We also noted that by framing the experiment as a “Turing Test”, people tended to expect more computer-generated solos than there actually were.
Hence, their rating behaviour could have been influenced by expectations for these kinds of studies.

## 7. Conclusion and Outlook

In light of these results, we see three possible approaches for large-scale evaluation of generative models for jazz solos.

1. Using human- and computer-generated solos performed by a single human player over a fixed accompaniment, to keep all background factors constant while providing reasonably expressive performances with microtiming, articulation, dynamics, and timbre features (e. g., vibrato, slurs, bends). This procedure requires considerable effort and is probably not suitable for testing many solos at once. Interestingly, in the free feedback field of the survey, where we asked for further comments, many participants suggested exactly this kind of procedure (see Section S5 in the supplementary information).

2. Devising a system that is capable of generating sufficiently natural jazz performances. This would be very efficient for quick and frequent evaluations in conjunction with using a service for recruiting online participants. Such an algorithm does not exist yet and might thus require considerable development effort, though it might be within reach with the current state of technology. However, in contrast to the field of classical music, not much research has gone into developing such a system yet (but see Friberg et al. (2021); Arcos et al. (1998)). Once such a system is available, large-scale evaluation will become easy and cheap.

3. Using deadpan versions of human- and computer-generated solos. Because of the low baseline accuracy in this setup, a large number of raters and solos would be required to get reliable estimates. The advantage of this approach is that it is relatively cheap to realize, even though the recruitment of a sufficiently large number of experts might be an issue; using an even larger number of non-experts could remedy this problem.

Finally, there is the complementary option of evaluating solos based on objective features, as proposed by Yang and Lerch (2020), similar to the “critic” in the evaluation framework proposed by Wiggins and Pearce (2001). This is rather straightforward to implement if suitable corpora are available, and we want to explore it in the future. Another option, often tacitly or explicitly part of any algorithm development, is formal or informal analysis by experts. For instance, some tweaks and design decisions that ended up in the current version of the WBA-MLU algorithm were informed by the expertise of the authors. But, ultimately, we think that there will be no way around Turing-like tests, as objective feature distributions are not likely to provide a sufficient description of music, and clearly not of truly innovative music, whereas expert analytic evaluations do not scale. Such approaches could be useful to select the most promising candidates from a set of generated solos (if style conformity is the goal, which is often the case in analysis-by-synthesis contexts).

One last remark should be made in regard to the finding that the evaluation of human- vs. computer-generated solos is driven by aesthetics and a bias against computer-generated music. This implies that for commercial (and other) applications of algorithmically generated music, it has to be very good, i. e., unrecognizable as such; otherwise, the knowledge about its origin can result in audience aversion against the product.
Here, more research is needed; in this small pilot study, we could only find some evidence in this direction, which clearly warrants a more targeted and systematic examination of this phenomenon (cf. Moffat and Kelly (2006); Chamberlain et al. (2018), who found such a bias against computer-generated music and artworks).

This study was mostly exploratory in nature, with regard to both the proposed novel algorithm and an evaluation procedure for monophonic jazz solos, and presents promising and insightful first results on both aspects. Our future plans are twofold. First, we want to elaborate the WBA-MLU model further, as many simplifying assumptions were used at the moment in order to create a functioning system. Secondly, we plan to improve the evaluation framework. We think it is worth checking whether the strategy of having humans play artificially generated solos is feasible. Additionally, we think that the development of a system that is capable of rendering more natural-sounding performances, even if probably only for a single type of instrument, is within reach with the current state of technology.

Supplementary Material

PDF with further information. DOI: https://doi.org/10.5334/tismir.87.s1

## Notes

1 For instance, to provide backing tracks for practising soloing, e. g., Band-in-a-Box https://www.pgmusic.com/ or iRealPro https://www.irealpro.com/.

2 A list of commercial AI music generators can be found, for instance, here: https://topten.ai/music-generators-review/. The Top 3 names are Amper Music https://www.ampermusic.com/, AIVA https://www.aiva.ai/, and Ecrett Music https://ecrettmusic.com/.

## Reproducibility

There is an accompanying OSF site for this paper: https://osf.io/kjsdr. It contains the R project jazz-turing with the survey data, the audio stimuli, and the analysis code for the evaluation. The folder samples provides 40 generated solos as MIDI files and scores. There is also a link to parkR, an R package for solo generation based on our model, which can be installed from https://github.com/klausfrieler/parkR.

## Acknowledgements

We would like to thank all participants in the evaluation experiment, Simon Dixon for proof-reading the manuscript, and three anonymous reviewers for their helpful comments.

## Competing Interests

The authors have no competing interests to declare.

## Author Contributions

KF developed the generative algorithm, developed, conducted, and analyzed the evaluation survey, and wrote the paper. WGZ developed and conducted the evaluation survey and wrote the paper.

## References

1. Aebersold, J. (1967). A New Approach to Jazz Improvisation.

2. Ake, D. A. (2002). Jazz Cultures. University of California Press, Berkeley. DOI: https://doi.org/10.1525/9780520926967

3. Arcos, J. L., Lopéz de Mántaras, R., and Serra, X. (1998). SaxEx: A case-based reasoning system for generating expressive musical performances. Journal of New Music Research, 27:194–210. DOI: https://doi.org/10.1080/09298219808570746

4. Biles, J. A. (1994). GenJam: Evolution of a jazz improviser. In Proceedings of the International Computer Music Conference (ICMC), pages 131–137.

5. Chamberlain, R., Mullin, C., Scheerlinck, B., and Wagemans, J. (2018). Putting the art in artificial: Aesthetic responses to computer-generated art. Psychology of Aesthetics, Creativity, and the Arts, 12(2):177–192. DOI: https://doi.org/10.1037/aca0000136

6. Coker, J. (1987). Improvising Jazz. Simon & Schuster, New York, 1st fireside edition.

7. Cooke, M., and Horn, D. (2002). The Cambridge Companion to Jazz.
Cambridge University Press, Cambridge, UK. OCLC: 758544526. DOI: https://doi.org/10.1017/CCOL9780521663205

8. Friberg, A., Gulz, T., and Wettebrandt, C. (2021). Computer tools for modeling swing timing interactions in a jazz ensemble. In 16th International Conference on Music Perception and Cognition and 11th Triennial Conference of the European Society for the Cognitive Sciences of Music (ICMPC16-ESCOM11), Sheffield, UK.

9. Frieler, K. (2018). A feature history of jazz solo improvisation. In Knauer, W., editor, Jazz @ 100: An Alternative to a Story of Heroes, volume 15 of Darmstadt Studies in Jazz Research. Wolke Verlag, Hofheim am Taunus.

10. Frieler, K. (2019). Constructing jazz lines: Taxonomy, vocabulary, grammar. In Pfleiderer, M. and Zaddach, W.-G., editors, Jazzforschung heute: Themen, Methoden, Perspektiven, Berlin. Edition Emwas.

11. Frieler, K. (2020). Miles vs. Trane: Computational and statistical comparison of the improvisatory styles of Miles Davis and John Coltrane. Jazz Perspectives, 12(1):123–145. DOI: https://doi.org/10.1080/17494060.2020.1734053

12. Frieler, K., Pfleiderer, M., Abeßer, J., and Zaddach, W.-G. (2016). Midlevel analysis of monophonic jazz solos: A new approach to the study of improvisation. Musicae Scientiae, 20(2):143–162. DOI: https://doi.org/10.1177/1029864916636440

13. Grachten, M. (2001). JIG: Jazz Improvisation Generator. In Proceedings of the MOSART Workshop on Current Research Directions in Computer Music, Barcelona, Spain.

14. Haviv Hakimi, S., Bhonker, N., and El-Yaniv, R. (2020). BebopNet: Deep neural models for personalized jazz improvisations. In Proceedings of the 21st International Society for Music Information Retrieval Conference, Montréal, Canada.

15. Hiller, L. A., and Isaacson, L. M. (1959). Experimental Music: Composition With an Electronic Computer. McGraw-Hill, New York.

16. Hung, H.-T., Wang, C.-Y., Yang, Y.-H., and Wang, H.-M. (2019). Improving automatic jazz melody generation by transfer learning techniques. In Proceedings of the Asia-Pacific Signal and Information Processing Association Annual Summit and Conference, pages 1–8, Lanzhou, China. DOI: https://doi.org/10.1109/APSIPAASC47483.2019.9023224

17. Johnson-Laird, P. N. (1991). Jazz improvisation: A theory at the computational level. In Howell, P., West, R., and Cross, I., editors, Representing Musical Structure, London. Academic Press.

18. Johnson-Laird, P. N. (2002). How jazz musicians improvise. Music Perception, 10:415–442. DOI: https://doi.org/10.1525/mp.2002.19.3.415

19. Kaufman, J. C., and Beghetto, R. A. (2009). Beyond big and little: The Four C Model of Creativity. Review of General Psychology, 13(1):1–12. DOI: https://doi.org/10.1037/a0013688

20. Keller, R., Schofield, A., Toman-Yih, A., Merritt, Z., and Elliott, J. (2013). Automating the explanation of jazz chord progressions using idiomatic analysis. Computer Music Journal, 37(4):54–69. DOI: https://doi.org/10.1162/COMJ_a_00201

21. Keller, R. M., and Morrison, D. (2007). A grammatical approach to automatic improvisation. In Proceedings of the 4th Sound and Music Computing Conference, pages 330–337, Lefkada, Greece.

22. Lothwesen, K., and Frieler, K. (2012). Gestaltungsmuster und Ideenfluss in Jazzpiano-Improvisationen: Eine Pilotstudie zum Einfluss von Tempo, Tonalität und Expertise. In Lehmann, A., Jeßulat, A., and Wünsch, C., editors, Kreativität: Struktur und Emotion. Königshausen & Neumann, Würzburg.

23. Madaghiele, V., Lisena, P., and Troncy, R. (2021).
MINGUS: Melodic improvisation neural generator using Seq2Seq. In 22nd International Society for Music Information Retrieval Conference.

24. Moffat, D., and Kelly, M. (2006). An investigation into people’s bias against computational creativity in music composition. In Third Joint Workshop on Computational Creativity, ECAI 2006, Trento, Italy. Universita di Trento.

25. Narmour, E. (1990). The Analysis and Cognition of Basic Melodic Structures: The Implication-Realization Model. University of Chicago Press, Chicago.

26. Norgaard, M. (2011). Descriptions of improvisational thinking by artist-level jazz musicians. Journal of Research in Music Education, 59(2):109–127. DOI: https://doi.org/10.1177/0022429411405669

27. Norgaard, M. (2014). How jazz musicians improvise: The central role of auditory and motor patterns. Music Perception: An Interdisciplinary Journal, 31(3):271–287. DOI: https://doi.org/10.1525/mp.2014.31.3.271

28. Owens, T. (1974). Charlie Parker: Techniques of Improvisation. PhD thesis, University of California, Los Angeles.

29. Pachet, F. (2003). The Continuator: Musical interaction with style. Journal of New Music Research, 32(3):333–341. DOI: https://doi.org/10.1076/jnmr.32.3.333.16861

30. Pachet, F. (2012). Musical virtuosity and creativity. In McCormack, J. and d’Inverno, M., editors, Computers and Creativity, pages 115–146. Springer, Berlin, Heidelberg. DOI: https://doi.org/10.1007/978-3-642-31727-9_5

31. Papadopoulos, G., and Wiggins, G. (1998). A genetic algorithm for the generation of jazz melodies. In Human and Artificial Information Processing: Finnish Conference on Artificial Intelligence (STeP’98), Jyväskylä, Finland.

32. Pfleiderer, M., Frieler, K., Abeßer, J., Zaddach, W.-G., and Burkhart, B., editors (2017). Inside the Jazzomat: New Perspectives for Jazz Research. Schott Music GmbH & Co. KG, Mainz, 1. Auflage. OCLC: 1015349144.

33. Pressing, J. (1984). Cognitive Processes in Improvisation. Cognitive Processes in the Perception of Art. Elsevier, North-Holland. DOI: https://doi.org/10.1016/S0166-4115(08)62358-4

34. Pressing, J. (1988). Improvisation: Method and models. In Sloboda, J. A., editor, Generative Processes in Music: The Psychology of Performance, Improvisation, and Composition, pages 129–178, Oxford. Clarendon. DOI: https://doi.org/10.1093/acprof:oso/9780198508465.003.0007

35. Quick, D., and Thomas, K. (2019). A functional model of jazz improvisation. In Proceedings of the 7th ACM SIGPLAN International Workshop on Functional Art, Music, Modeling, and Design (FARM 2019), pages 11–21, Berlin, Germany. ACM Press. DOI: https://doi.org/10.1145/3331543.3342577

36. Ramalho, G. L., Rolland, P.-Y., and Ganascia, J.-G. (1999). An artificially intelligent jazz performer. Journal of New Music Research, 28(2):105–129. DOI: https://doi.org/10.1076/jnmr.28.2.105.3120

37. Russell, G. (1953). George Russell’s Lydian Chromatic Concept of Tonal Organization. Concept Pub. Co, Brookline, Mass. OCLC: ocm50075662.

38. Schütz, M. (2015). Improvisation im Jazz: Eine empirische Untersuchung bei Jazzpianisten auf der Basis der Ideenflussanalyse. Number 34 in Schriftenreihe Studien zur Musikwissenschaft. Kovač, Hamburg. OCLC: 915812622.

39. Toiviainen, P. (1995). Modelling the target-note technique of bebop-style jazz improvisation: An artificial neural network approach. Music Perception, 12:398–413. DOI: https://doi.org/10.2307/40285674

40. Trieu, N., and Keller, R. (2018). JazzGAN: Improvising with generative adversarial networks.
In Proceedings of the 6th International Workshop on Musical Metacreation (MUME 2018), Salamanca, Spain. 41. Von Hippel, P., and Huron, D. (2000). Why do skips precede reversals? The effect of tessitura on melodic structure. Music Perception: An Interdisciplinary Journal, 18(1):59–85. DOI: https://doi.org/10.2307/40285901 42. Wiggins, G., and Pearce, M. (2001). Towards a framework for the evaluation of machine compositions. In Proceedings of the AISB’01 Symposium on Artificial Intelligence and Creativity in the Arts and Sciences, pages 22–32, York. 43. Wu, S.-L., and Yang, Y.-H. (2020). The Jazz Transformer on the front line: Exploring the shortcomings of AI-composed music through quantitative measures. In 21st International Society for Music Information Retrieval Conference, pages 142–149, Montréal, Canada. 44. Yang, L.-C., and Lerch, A. (2020). On the evaluation of generative models in music. Neural Computing and Applications, 32(9):4773–4784. DOI: https://doi.org/10.1007/s00521-018-3849-7
2022-10-01 08:47:35
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 2, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.4617025554180145, "perplexity": 4730.174708014387}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335573.50/warc/CC-MAIN-20221001070422-20221001100422-00688.warc.gz"}
https://www.vedantu.com/question-answer/find-the-probability-of-having-5-sundays-in-the-class-11-maths-cbse-5ee49b095cbfd47b46977622
# Find the probability of having 5 Sundays in the month of February in a leap year.

A. $\dfrac{2}{7}$
B. $0$
C. $\dfrac{1}{7}$
D. $1$

Verified

Hint: In order to solve this question, we must know the basic approach of theoretical probability. Theoretical probability is a method to express the likelihood that something will occur. It is calculated by dividing the number of favourable outcomes by the total number of possible outcomes.

In a leap year, February has 29 days. Since 4 weeks $= 7 \times 4 = 28$ days, these 28 days always contain exactly 4 Sundays, and $29 - 28 = 1$ day remains. February has 5 Sundays exactly when this one extra day falls on a Sunday. The extra day is equally likely to be any of the 7 days of the week, so the required probability is $\dfrac{1}{7}$.

$\therefore$ The probability of having 5 Sundays in the month of February in a leap year is $\dfrac{1}{7}$, i.e. option C.
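As a quick sanity check of this result, a short R snippet (language chosen arbitrarily; nothing here depends on it) can enumerate the seven equally likely weekdays of 1 February:

# A leap-year February has 29 days = 4 full weeks + 1 extra day.
n_sundays <- sapply(0:6, function(first_wd) {
  weekday <- (first_wd + 0:28) %% 7  # weekday index of each of the 29 days
  sum(weekday == 0)                  # let 0 stand for Sunday
})
n_sundays            # 5 Sundays for exactly one of the 7 starting weekdays
mean(n_sundays == 5) # 1/7, approximately 0.1428571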
2023-03-26 11:14:50
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5998446345329285, "perplexity": 1092.7224969018384}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296945472.93/warc/CC-MAIN-20230326111045-20230326141045-00241.warc.gz"}
https://homework.cpm.org/category/CON_FOUND/textbook/mc1/chapter/9/lesson/9.2.2/problem/9-69
### MC1 > Chapter 9 > Lesson 9.2.2 > Problem 9-69

9-69. Find the area and perimeter of the figure at right (figure not reproduced here). All angles are right angles. Show your work.

Use the new side labels to help you find both the area and the perimeter of this figure. Remember that area is the space within a shape and perimeter is the distance around a shape. To find the area, add the areas of the separate sections, as labeled in red:

$37.8 + 19.74 = 57.54$ square units
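A one-line arithmetic check in R (the two section areas, 37.8 and 19.74, are read off the figure that is not reproduced here, so they are taken as given):

sum(c(37.8, 19.74))  # 57.54 square units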
2022-06-25 11:36:06
{"extraction_info": {"found_math": true, "script_math_tex": 1, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.618738055229187, "perplexity": 1064.5496620635263}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-27/segments/1656103034930.3/warc/CC-MAIN-20220625095705-20220625125705-00507.warc.gz"}
https://codegolf.stackexchange.com/questions/60251/curse-you-turing-make-a-partial-halting-solver-that-yells-at-turing-when-he-t
# Curse you Turing!!! Make a partial halting solver that yells at Turing when he tries to break it Here goes a quick proof by Alan Turing that there is no program H that can solve the halting problem. If there was, you could make a program X such that X(n) = if H(n,n) == Halt then output X(n) else output ":-)". What is the value of H(X,X)? Your job is to make a partial halting solver, such that when some X of your choice is inputted, your program will output "Curse you Turing!!!" repeatedly. Specifically, you will make two programs in the same programming language. One will be H, the partial halting solver. Given a program P (in the same programming language) and an input, H will: • output "halts" if P halts • output "no halts" if P doesn't halt • If P is program X (with input X), output "Curse you Turing!!!" infinitely, on different lines. • If H isn't X, and H can't figure out if P halts, go into an infinite loop without output. Program X, will be given an input n, and then it will check H(n,n). If it outputs "not halts", it will terminate at some point. For any other result of H(n,n), it will go into an infinite loop. The output of X is unimportant. H and X will not have access to external resources. Specifically, that means that there is a mutual-quine-aspect to this problem. As to how hard H has to try to figure out whether a program halts: • Once it determines that its input isn't X and X, it will run the program with the input for 10 seconds. If it terminates (either correctly or with error) it will output "halts". • You must identify an infinite number of "no halts" programs. Oh, and this may be go without saying, but the programming language needs to be Turing complete (I knew you were thinking of an esoteric language without infinite loops. Sorry.) (That said, Turing complete subsets of languages are also okay.) Your score is the number of bytes in H. Least number of bytes wins! Bonuses: • -50 bytes: If H captures output from its input program, and reoutputs it but with "PROGRAM OUTPUT: " in front of every line. (This is only done when H runs the program to test it.) • -20 bytes: If H outputs "halts" based on criteria other than time as well. Try and find a fun combination of H and X calls to put together to make it interesting (like H(H,(X,X)) or even more crazy stuff.) • Poor Turing.... – kirbyfan64sos Oct 9 '15 at 22:48 • How would your H recognize an X? – LegionMammal978 Oct 10 '15 at 18:58 • @LegionMammal978 You only have to detect one X of your choosing. (I think it may actually be impossible to detect X programs in general.) – PyRulez Oct 10 '15 at 19:01 • The challenge is mathematical only in genre—its only link to math is that the Halting problem is a mathematical topic. Entries will need to use quine techniques, but solve no math problems. I quote from the tag wiki: "Challenges relating to or in some way involving mathematics; that is, solving a math problem is needed to come up with a solution, or the solution is a program that solves a math problem, or the program generates math problems, etc." It mentions nothing about genre. We didn't tag Minimal NetHack "game". – lirtosiast Oct 10 '15 at 19:06 • Yes, but again only in genre. A sufficiently talented programmer who knows nothing of math, computer-scientific or otherwise, could solve the challenge as well as Turing himself. The solutions do not "partially [solve] the halting problem" in a mathematical sense; they simply determine whether the input is its source code, or if it halts in less than 10 seconds. 
– lirtosiast Oct 10 '15 at 19:16 # Mathematica, 140138 127 bytes H=(While[#==#2[[1]]=="While[#~H~{#}==\"halts\"]&",Print@"Curse you Turing!!!"];TimeConstrained[#=="#&"||ToExpression@#@@#2;"halts",10,"no halts"])& Defines a function H that takes a function string and argument list as arguments. This also receives the -20 bonus, with the extra criteria being that if the program is #&, then it halts. This is to show that it is flawed. Here is the X: While[#~H~{#}=="halts"]& • You should test your own submissions, but I'll fire up my copy of Mathematica, try to find out how it works, and look at this. – lirtosiast Oct 10 '15 at 19:21 • For which programs does it output "no halts"? It looks like it outputs "no halts" for programs over 10 seconds. They could still halt, its just unknown. – PyRulez Oct 10 '15 at 21:33 • @LegionMammal978 That's to determine if it halts. It doesn't tell you if it doesn't halt. – PyRulez Oct 11 '15 at 12:52 • @LegionMammal978 Analyze the source. Figure out an infinite number of cases, then the program gives up. – PyRulez Oct 11 '15 at 13:02 • @LegionMammal978 Only one X, but a smallish infinity of other infinite loops (not all of them, just an infinite number of them). – PyRulez Oct 11 '15 at 13:04
2020-07-09 09:20:13
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.3273641765117645, "perplexity": 1424.121544558485}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-29/segments/1593655899209.48/warc/CC-MAIN-20200709065456-20200709095456-00566.warc.gz"}
https://ifc43-docs.standards.buildingsmart.org/IFC/RELEASE/IFC4x3/HTML/lexical/Qto_SlabBaseQuantities.htm
# 6.1.5.13 Qto_SlabBaseQuantities ## 6.1.5.13.1 Semantic definition Base quantities that are common to the definition of all occurrences of slabs.
2023-03-26 02:37:49
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8096998929977417, "perplexity": 8101.398796540309}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296945381.91/warc/CC-MAIN-20230326013652-20230326043652-00747.warc.gz"}
https://www.physicsforums.com/threads/from-point-a-to-point-b-basic-algebra.284812/
# From point A to point B, basic algebra.

1. Jan 14, 2009

### calisoca

1. The problem statement, all variables and given/known data

I'm not sure how to get from point A to point B. It seems simple enough, but I'm just not seeing it!

2. Relevant equations

point A: $$\frac{81}{n^4} \ [\frac{n(n+1)}{2}]^2 \ - \ \frac{54}{n^2} \ [\frac{n(n+1)}{2}]$$

point B: $$\frac{81}{4} \ (1 + \frac{1}{n})^2 \ - \ 27(1 + \frac{1}{n})$$

3. The attempt at a solution

I'm honestly at a loss on this simple problem, however sad that may seem.

Last edited: Jan 14, 2009

2. Jan 14, 2009

### danago

For the left hand term, distribute the power of 2 within the bracket and make any obvious cancellations. Then use the fact that $$\frac{a^n}{b^n} = (\frac{a}{b})^n$$ to get it closer to the desired form. See if you can figure out the last step.

For the right hand term, first make an obvious cancellation, and then break the fraction up as required.

I have tried to only give hints so you can do the actual math yourself. Hopefully one of the hints may be the thing you were looking for and you can finish it off yourself.

3. Jan 14, 2009

### calisoca

Danago, thank you very much. I will work with the hints you gave me and see if I can figure it out. It's not hard, it's just that I haven't actually done any real algebra in the last few months, so I'm a bit slow right now. If I need any more help from here, I'll post, but otherwise, thank you again for your help!

4. Jan 14, 2009

### calisoca

Okay, here is what I have so far. However, getting the first term to what I want is problematic for me. Step 6 is where I am stuck. May I please get some help getting past Step 6? I'd greatly appreciate it.

1.) $$\frac{81}{n^4} \ [\frac{n(n+1)}{2}]^2 \ - \ \frac{54}{n^2} \ [\frac{n(n+1)}{2}]$$

2.) $$\frac{81}{n^4} \ [\frac{n^2 + n}{2}]^2 \ - \ \frac{54}{n^2} \ [\frac{n^2 + n}{2}]$$

3.) $$\frac{81}{n^4} \ [\frac{n^4 + 2n^3 + n^2}{4}] \ - \ \frac{54}{n^2} \ [\frac{n^2 + n}{2}]$$

4.) $$\frac{81(n^4 + 2n^3 + n^2)}{4n^4} \ - \ \frac{54(n^2 + n)}{2n^2}$$

5.) $$\frac{81}{4} \ [\frac{n^4 + 2n^3 + n^2}{n^4}] \ - \ \frac{54}{2} \ [\frac{n^2 + n}{n^2}]$$

6.) $$\frac{81}{4} \ (1 + \frac{2}{n} + \frac{1}{n^2}) \ - \ \frac{54}{2} \ (1 + \frac{1}{n})$$

5. Jan 14, 2009

### calisoca

Crap! Just figured it out. Like Danago said, $$\frac{a^2}{b^2} = (\frac{a}{b})^2$$

So....

1.) $$\frac{81}{n^4} \ [\frac{n(n+1)}{2}]^2 \ - \ \frac{54}{n^2} \ [\frac{n(n+1)}{2}]$$

2.) $$\frac{81}{n^4} \ [\frac{n^2 + n}{2}]^2 \ - \ \frac{54}{n^2} \ [\frac{n^2 + n}{2}]$$

3.) $$\frac{81}{n^4} \ [\frac{n^4 + 2n^3 + n^2}{4}] \ - \ \frac{54}{n^2} \ [\frac{n^2 + n}{2}]$$

4.) $$\frac{81(n^4 + 2n^3 + n^2)}{4n^4} \ - \ \frac{54(n^2 + n)}{2n^2}$$

5.) $$\frac{81}{4} \ [\frac{n^4 + 2n^3 + n^2}{n^4}] \ - \ \frac{54}{2} \ [\frac{n^2 + n}{n^2}]$$

6.) $$\frac{81}{4} \ (1 + \frac{2}{n} + \frac{1}{n^2}) \ - \ \frac{54}{2} \ (1 + \frac{1}{n})$$

7.) $$(1 + \frac{2}{n} + \frac{1}{n^2}) = (1 + \frac{1}{n})^2$$

8.) $$\frac{81}{4} \ (1 + \frac{1}{n})^2 \ - \ 27(1 + \frac{1}{n})$$

Thanks for the help!

6. Jan 14, 2009

### danago

No problems. You could actually have done it in a slightly simpler way, without expanding the brackets how you did. I'll demonstrate with the left hand term:

$$\frac{81}{n^4} \ [\frac{n(n+1)}{2}]^2 =\frac{81}{n^4} \ [\frac{n^2(n+1)^2}{4}] =\frac{81}{4} \ [\frac{(n+1)^2}{n^2}] =\frac{81}{4} \ [\frac{(n+1)}{n}]^2 =\frac{81}{4} \ [1+\frac{1}{n}]^2$$

It still ends with the same result, but it's probably a little simpler.
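As a quick numeric spot check of the identity derived in this thread (a short R snippet; any language would do), both forms agree over a range of n:

lhs <- function(n) 81/n^4 * (n*(n + 1)/2)^2 - 54/n^2 * (n*(n + 1)/2)
rhs <- function(n) 81/4 * (1 + 1/n)^2 - 27*(1 + 1/n)
all.equal(lhs(1:10), rhs(1:10))  # TRUE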
https://discourse.julialang.org/t/bigfloat-lufact/4653
# Bigfloat lufact

#1

Has anyone noticed anything odd with lufact on BigFloats? Is there some theoretical weird instability thing I should know about?

I am wondering because as I am testing some algorithms I am noticing a pattern that the same methods (ODE solvers) with the same coefficients (some methods are only specified to Float64 precision) suddenly fail if the coefficients are changed to BigFloats. But I can "save" the methods if I change the linear solve from lufact to qrfact. This is odd and is against my intuition: why would the same numbers give a bad lufact when converted to BigFloat but not as Float64s?

If there's no reasoning for this and it's likely a problem with the generic fallback I'll dig into this more and make an MWE. I just wanted to ask first because it might take a bit to pull out an actual example which isn't integrated with everything else, but I have many different tests showing that it's only lufact. I know qrfact is more stable, but this seems bizarre to me.

#2

I haven't seen anything like that. How are the solvers failing? How ill conditioned are the matrices? It would be great if you could provide an example.

#3

I pulled an example out. In this case, it looks like lufact! is the only one correct on BigFloat:

A = 92.317*eye(4)
b = [0.0970454, 0.00944241, 0.167562, 0.08518]
println("True solution")
println(b/92.317)

lufact!(A)
println("lufact")
println(A\b)

A = 92.317*eye(4)
qrfact!(A)
println("qrfact")
println(A\b)

A = 92.317*eye(4)
svdfact!(A)
println("svdfact")
println(A\b)

Abig = big.(A)
bbig = big.(b)
lufact!(Abig)
println("lufact bigfloat")
println(Float64.(Abig\bbig))

Abig = big.(A)
qrfact!(Abig)
println("qrfact bigfloat")
println(Float64.(Abig\bbig))

Abig = big.(A)
using GenericSVD
svdfact!(Abig)
println("generic svdfact bigfloat")
println(Float64.(Abig\bbig))

This prints out the following:

True solution
[0.00105122, 0.000102282, 0.00181507, 0.00092269]
lufact
[0.00105122, 0.000102282, 0.00181507, 0.00092269]
qrfact
[0.00105122, 0.000102282, 0.00181507, 0.00092269]
svdfact
[0.00105122, 0.000102282, 0.00181507, 0.00092269]
lufact bigfloat
[0.00105122, 0.000102282, 0.00181507, 0.00092269]
qrfact bigfloat
[-0.00105122, -0.000102282, -0.00181507, 0.00092269]
generic svdfact bigfloat
[0.0485227, 0.00472121, 0.083781, 0.04259]

Now I'm getting really confused. There's definitely something odd in GenericSVD though. How do you use debuggers?

#4

You'd have to use the output of the in-place factorization. The input matrix hasn't changed type, but the interpretation of the elements has changed. E.g. for the generic SVD, you'd like to do

julia> A = big.(92.317*eye(4));

julia> F = svdfact!(A);

julia> Float64.(F\b)
4-element Array{Float64,1}:
 0.00105122
 0.000102282
 0.00181507
 0.00092269

julia> b/92.317
4-element Array{Float64,1}:
 0.00105122
 0.000102282
 0.00181507
 0.00092269

#5

Oh my bad. The real code is actually doing the correct form there. Let me try to find out what's going on once more.
#6

Aha, I got it:

W = 92.1935*eye(8)
linsolve_tmp_vec = [0.323673, 0.369958, 0.0735182, 0.409571, 0.46994, 0.335103, 0.0287147, 0.402722]
vectmp = similar(linsolve_tmp_vec)
pA = lufact!(W)
A_ldiv_B!(vectmp,pA,linsolve_tmp_vec)
println((vectmp))
# vectmp == [0.0035108, 0.00401284, 0.000797434, 0.00444251, 0.00509732, 0.00363478, 0.000311461, 0.00436823]

W = big.(92.1935*eye(8))
linsolve_tmp_vec = [0.323673, 0.369958, 0.0735182, 0.409571, 0.46994, 0.335103, 0.0287147, 0.402722]
vectmp = similar(linsolve_tmp_vec)
pA = lufact!(W)
A_ldiv_B!(vectmp,pA,linsolve_tmp_vec)
println(Float64.(vectmp))
# vectmp == [0.323673, 0.369958, 0.0735182, 0.409571, 0.46994, 0.335103, 0.0287147, 0.402722]

Interestingly, in the bigfloat one we get vectmp == linsolve_tmp_vec (terrible variable names, don't hate) but not vectmp === linsolve_tmp_vec. So I checked:

println(@which A_ldiv_B!(vectmp,pA,linsolve_tmp_vec))

A_ldiv_B!(Y::Union{AbstractArray{T,1}, AbstractArray{T,2}} where T, A::Factorization, B::Union{AbstractArray{T,1}, AbstractArray{T,2}} where T) in Base.LinAlg at linalg\factorization.jl:55

which for me is the same as this spot:

I see a copy! there, and so maybe nothing is really being called?

#7

There is a real issue here and it is quite old. The reference to the input array is broken so although the array returned has the right values, the rhs is not getting updated. See https://github.com/JuliaLang/julia/issues/22683

#8

I have a good idea about what went wrong. The results of Abig\bbig after qrfact! and svdfact! must be wrong, because of wrong usage. You could have written Abig = qrfact!(Abig) etc. While the contents of Abig are overwritten, Abig\bbig works with the modified Abig (with an ordinary matrix inverse multiplication). In order to call the multiplication of typeof(qrfact!(Abig)), it is necessary to have an object of that type, not Matrix{BigFloat}. Of course it is better style to use a different variable name for the factorization.

#9

No, read the whole thread. Yes, the first example was botched, but not the later one. It's an issue in Julia.
http://clay6.com/qa/45904/a-potential-difference-of-250-volt-is-applied-across-the-plates-of-a-capaci
# A potential difference of $250\;\text{volts}$ is applied across the plates of a capacitor of $10\;\text{pF}$. Calculate the charge on the plates of the capacitor.

## 1 Answer

Here, $V = 250\ \text{V}$ and $C = 10\ \text{pF} = 10 \times 10^{-12}\ \text{F} = 10^{-11}\ \text{F}$.

$Q = CV = 10^{-11}\ \text{F} \times 250\ \text{V} = 2.5 \times 10^{-9}\ \text{C}$

Hence A is the correct answer.

answered Jun 13, 2014
https://packages.tesselle.org/kairos/articles/kairos.html
## Load packages
library(kairos)
library(folio) # Datasets

## Not All Dates Are Created Equal

This vignette presents different methods for dating archaeological assemblages using artifact count data. Here, dating refers to "the placement in time of events relative to one another or to any established scale of temporal measurement" (Dean 1978). This involves distinguishing between relative dating methods (that provide only a chronological sequence of events) and absolute dating methods (that yield a calendric indication and may provide the duration of an event) (O'Brien and Lyman 2002). Strictly speaking, there is no absolute dating given how dates are produced and given that any date refers to a scale. The distinction between absolute and relative time can be rephrased more clearly as quantifiable vs non-quantifiable (O'Brien and Lyman 2002): absolute dates "are expressed as points on standard scales of time measurement" (Dean 1978). We will keep here the distinction between a date and an age as formulated by Colman, Pierce, and Birkeland (1987): "a date is a specific point in time, whereas an age is an interval of time measured back from the present."

Dealing with dates in archaeology can be tricky if one does not take into account the sources of the chronological information. In most cases, a date represents a terminus for a given archaeological assemblage. That is, a date before (terminus ante-quem) or after (terminus post-quem) which the formation process of the assemblage took place. This in mind, one obvious question that should underlie any investigation is: what does the date represent?

First, let's be more formal:

• An archaeological event is determined by its unknown calendar date $$t$$ with associated error $$\delta t$$.
• $$t \pm \delta t$$ can be provided by different means and is assumed to be related to the event.

This implies that:

• There are no error-free dates in archaeology (although uncertainties cannot always be quantified).
• Errors are assumed here to be symmetrical. This is true for most physical dating methods, but may be false after some data processing (e.g. ¹⁴C calibration).

For a set of $$m$$ assemblages in which $$p$$ different types of artifact were recorded, let $$X = \left[ x_{ij} \right] ~\forall i \in \left[ 1,m \right], j \in \left[ 1,p \right]$$ be the $$m \times p$$ count matrix with row and column sums:

\begin{align} x_{i \cdot} = \sum_{j = 1}^{p} x_{ij} && x_{\cdot j} = \sum_{i = 1}^{m} x_{ij} && x_{\cdot \cdot} = \sum_{i = 1}^{m} x_{i \cdot} = \sum_{j = 1}^{p} x_{\cdot j} && \forall x_{ij} \in \mathbb{N} \end{align}

Note that all $$x_{ij}$$ are assumed to be error-free.

## Mean Ceramic Date

### Definition

The Mean Ceramic Date (MCD) is a point estimate of the occupation of an archaeological site (South 1977). The MCD is estimated as the weighted mean of the date midpoints of the ceramic types $$t_j$$ (based on absolute dates or the known production interval) found in a given assemblage. The weights are the conditional frequencies of the respective types in the assemblage. The MCD is defined as:

$t^{MCD}_i = \sum_{j = 1}^{p} t_j \times \frac{x_{ij}}{x_{i \cdot}}$

### Limitation

The MCD is a point estimate: it locates an assemblage in time but says nothing about the time span over which material accumulated. The MCD offers a rough indication of the chronological position of an assemblage, but does not tell if an assemblage represents ten or 100 years.
### Usage

## Coerce the zuni dataset to an abundance (count) matrix
data("zuni", package = "folio")
zuni_counts <- as_count(zuni)

## Set the start and end dates for each ceramic type
zuni_dates <- list(
  LINO = c(600, 875), KIAT = c(850, 950), RED = c(900, 1050),
  GALL = c(1025, 1125), ESC = c(1050, 1150), PUBW = c(1050, 1150),
  RES = c(1000, 1200), TULA = c(1175, 1300), PINE = c(1275, 1350),
  PUBR = c(1000, 1200), WING = c(1100, 1200), WIPO = c(1125, 1225),
  SJ = c(1200, 1300), LSJ = c(1250, 1300), SPR = c(1250, 1300),
  PINER = c(1275, 1325), HESH = c(1275, 1450), KWAK = c(1275, 1450)
)

## Calculate date midpoint
zuni_mid <- vapply(X = zuni_dates, FUN = mean, FUN.VALUE = numeric(1))

## Calculate MCD
zuni_mcd <- mcd(zuni_counts, dates = zuni_mid)
#> LZ1105 LZ1103 LZ1100 LZ1099 LZ1097 LZ1096
#>   1162   1138   1154   1091   1092    841

## Event & Accumulation Date

### Definition

Event and accumulation dates are density estimates of the occupation and duration of an archaeological site (Bellanger, Husi, and Tomassone 2006; Bellanger, Tomassone, and Husi 2008; Bellanger and Husi 2012). The event date is an estimation of the terminus post-quem of an archaeological assemblage. The accumulation date represents the "chronological profile" of the assemblage. According to Bellanger and Husi (2012), the accumulation date can be interpreted "at best […] as a formation process reflecting the duration or succession of events on the scale of archaeological time, and at worst, as imprecise dating due to contamination of the context by residual or intrusive material." In other words, accumulation dates estimate the occurrence of archaeological events and rhythms of the long term.

#### Event Date

Event dates are estimated by fitting a Gaussian multiple linear regression model on the factors resulting from a correspondence analysis - somewhat similar to the idea introduced by Poblome and Groenen (2003). This model results from the known dates of a selection of reliable contexts and makes it possible to predict the event dates of the remaining assemblages.

First, a correspondence analysis (CA) is carried out to summarize the information in the count matrix $$X$$. The correspondence analysis of $$X$$ provides the coordinates of the $$m$$ rows along the $$q$$ factorial components, denoted $$f_{ik} ~\forall i \in \left[ 1,m \right], k \in \left[ 1,q \right]$$.

Then, assuming that $$n$$ assemblages are reliably dated by another source, a Gaussian multiple linear regression model is fitted on the factorial components for the $$n$$ dated assemblages:

$t^E_i = \beta_{0} + \sum_{k = 1}^{q} \beta_{k} f_{ik} + \epsilon_i ~\forall i \in [1,n]$

where $$t^E_i$$ is the known date point estimate of the $$i$$th assemblage, $$\beta_k$$ are the regression coefficients and $$\epsilon_i$$ are normally, identically and independently distributed random variables, $$\epsilon_i \sim \mathcal{N}(0,\sigma^2)$$.
These $$n$$ equations are stacked together and written in matrix notation as

$t^E = F \beta + \epsilon$

where $$\epsilon \sim \mathcal{N}_{n}(0,\sigma^2 I_{n})$$, $$\beta = \left[ \beta_0 \cdots \beta_q \right]' \in \mathbb{R}^{q+1}$$ and

$F = \begin{bmatrix} 1 & f_{11} & \cdots & f_{1q} \\ 1 & f_{21} & \cdots & f_{2q} \\ \vdots & \vdots & \ddots & \vdots \\ 1 & f_{n1} & \cdots & f_{nq} \end{bmatrix}$

Assuming that $$F'F$$ is nonsingular, the ordinary least squares estimator of the unknown parameter vector $$\beta$$ is:

$\widehat{\beta} = \left( F'F \right)^{-1} F' t^E$

Finally, for a given vector of CA coordinates $$f_i$$, the predicted event date of an assemblage $$t^E_i$$ is:

$\widehat{t^E_i} = f_i \hat{\beta}$

The endpoints of the $$100(1 − \alpha)$$% associated prediction confidence interval are given as:

$\widehat{t^E_i} \pm t_{\alpha/2,n-q-1} \sqrt{\widehat{V_i}}$

where $$\widehat{V_i}$$ is an estimator of the variance of the prediction error:

$\widehat{V_i} = \widehat{\sigma}^2 \left( f_i^T \left( F'F \right)^{-1} f_i + 1 \right)$

where $$\widehat{\sigma}^2 = \frac{\sum_{i=1}^{n} \left( t_i - \widehat{t^E_i} \right)^2}{n - q - 1}$$.

The probability density of an event date $$t^E_i$$ can be described as a normal distribution:

$t^E_i \sim \mathcal{N}(\widehat{t^E_i},\widehat{V_i})$

#### Accumulation Date

As row (assemblages) and column (types) CA coordinates are linked together through the so-called transition formulae, event dates for each type $$t^E_j$$ can be predicted following the same procedure as above.

Then, the accumulation date $$t^A_i$$ is defined as the weighted mean of the event dates of the ceramic types found in a given assemblage. The weights are the conditional frequencies of the respective types in the assemblage (akin to the MCD). The accumulation date is estimated as:

$\widehat{t^A_i} = \sum_{j = 1}^{p} \widehat{t^E_j} \times \frac{x_{ij}}{x_{i \cdot}}$

The probability density of an accumulation date $$t^A_i$$ can be described as a Gaussian mixture:

$t^A_i \sim \sum_{j = 1}^{p} \frac{x_{ij}}{x_{i \cdot}} \mathcal{N}(\widehat{t^E_j},\widehat{V_j})$

Interestingly, the integral of the accumulation date offers an estimate of the cumulative occurrence of archaeological events, which is close enough to the definition of the tempo plot introduced by Dye (2016).

### Limitation

Event and accumulation date estimation relies on the same conditions and assumptions as the matrix seriation problem. Dunnell (1970) summarizes these conditions and assumptions as follows.

The homogeneity conditions state that all the groups included in a seriation must:

• Be of comparable duration.
• Belong to the same cultural tradition.
• Come from the same local area.

The mathematical assumptions state that the distribution of any historical or temporal class:

• Is continuous through time.
• Exhibits the form of a unimodal curve.

These assumptions create a distributional model and ordering is accomplished by arranging the matrix so that the class distributions approximate the required pattern. The resulting order is inferred to be chronological.

Predicted dates have to be interpreted with care: these dates are highly dependent on the range of the known dates and the fit of the regression.

### Usage

This package provides an implementation of the chronological modeling method developed by Bellanger and Husi (2012). This method is slightly modified here and allows the construction of different probability density curves of archaeological assemblage dates (event, activity and tempo).
## Bellanger et al. did not publish the data supporting their demonstration:
## no replication of their results is possible.
## Here is a pseudo-reproduction using the zuni dataset

## Assume that some assemblages are reliably dated (this is NOT a real example)
## The names of the vector entries must match the names of the assemblages
zuni_dates <- c(
  LZ0569 = 1097, LZ0279 = 1119, CS16 = 1328, LZ0066 = 1111,
  LZ0852 = 1216, LZ1209 = 1251, CS144 = 1262, LZ0563 = 1206,
  LZ0329 = 1076, LZ0005Q = 859, LZ0322 = 1109, LZ0067 = 863,
  LZ0578 = 1180, LZ0227 = 1104, LZ0610 = 1074
)

## Model the event and accumulation date for each assemblage
model <- event(zuni_counts, dates = zuni_dates, cutoff = 90)
summary(get_model(model))
#>
#> Call:
#> stats::lm(formula = date ~ ., data = contexts, na.action = stats::na.omit)
#>
#> Residuals:
#>         1         2         3         4         5         6         7         8
#>  0.517235 -4.017534 -0.279200  0.662137 -1.246499  0.576044  2.634482 -4.383683
#>         9        10        11        12        13        14        15
#> -1.093837 -0.005002  2.543773 -0.032706  3.480918 -0.759429  1.403301
#>
#> Coefficients:
#>              Estimate Std. Error  t value Pr(>|t|)
#> (Intercept) 1164.350       1.892  615.459 2.15e-13 ***
#> F1          -158.314       1.472 -107.582 1.32e-09 ***
#> F2            25.629       1.444   17.753 1.04e-05 ***
#> F3            -5.546       1.905   -2.912   0.0333 *
#> F4            11.416       3.407    3.351   0.0203 *
#> F5            -2.713       2.448   -1.108   0.3183
#> F6             2.697       1.181    2.285   0.0711 .
#> F7             3.966       3.001    1.322   0.2435
#> F8            11.132       2.941    3.785   0.0128 *
#> F9            -4.886       2.020   -2.418   0.0602 .
#> ---
#> Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
#>
#> Residual standard error: 3.669 on 5 degrees of freedom
#>   (405 observations deleted due to missingness)
#> Multiple R-squared: 0.9997, Adjusted R-squared: 0.9992
#> F-statistic: 1979 on 9 and 5 DF, p-value: 2.456e-08

## Estimate event dates
event <- predict_event(model, margin = 1, level = 0.95)
#>        date lower upper error
#> LZ1105 1168  1158  1178     4
#> LZ1103 1143  1139  1147     1
#> LZ1100 1156  1148  1164     3
#> LZ1099 1099  1092  1106     3
#> LZ1097 1088  1080  1097     3
#> LZ1096  839   829   849     4

## Estimate accumulation dates
acc <- predict_accumulation(model)
#> LZ1105 LZ1103 LZ1100 LZ1099 LZ1097 LZ1096
#>   1170   1140   1158   1087   1092    875

## Activity plot
plot(model, type = "activity", event = TRUE, select = "LZ1105")

## Tempo plot
plot(model, type = "tempo", select = "LZ1105")

Resampling methods can be used to check the stability of the resulting model. If jackknife() is used, one type/fabric is removed at a time and all statistics are recalculated. In this way, one can assess whether a certain type/fabric has a substantial influence on the date estimate. If bootstrap() is used, a large number of new bootstrap assemblages is created, with the same sample size, by resampling the original assemblage with replacement. Then, examination of the bootstrap statistics makes it possible to pinpoint assemblages that require further investigation.
## Check model variability
## Warning: this may take a few seconds

## Jackknife fabrics
jack <- jackknife(model)
#>        date lower upper error  bias
#> LZ1105 1457  1447  1466     4  4913
#> LZ1103  948   945   952     1 -3315
#> LZ1100 1094  1086  1102     3 -1054
#> LZ1099 1253  1246  1260     3  2618
#> LZ1097  917   908   925     3 -2907
#> LZ1096 1060  1050  1070     4  3757

## Bootstrap of assemblages
boot <- bootstrap(model, n = 30)
#>         min     mean  max      Q5     Q95
#> LZ1105 1122 1165.500 1206 1134.70 1190.65
#> LZ1103 1070 1146.433 1202 1091.35 1184.95
#> LZ1100 1021 1148.133 1221 1095.15 1193.80
#> LZ1099 1089 1099.767 1115 1089.00 1108.20
#> LZ1097  992 1085.833 1162 1006.05 1155.60
#> LZ1096  726  842.000  991  726.00  943.30

## References

Bellanger, L., Ph. Husi, and R. Tomassone. 2006. "Une approche statistique pour la datation de contextes archéologiques." Revue de statistique appliquée 54 (2): 65–81. https://doi.org/10.1111/j.1475-4754.2006.00249.x.

Bellanger, Lise, and Philippe Husi. 2012. "Statistical Tool for Dating and Interpreting Archaeological Contexts Using Pottery." Journal of Archaeological Science 39 (4): 777–90. https://doi.org/10.1016/j.jas.2011.06.031.

Bellanger, L., R. Tomassone, and P. Husi. 2008. "A Statistical Approach for Dating Archaeological Contexts." Journal of Data Science 6: 135–54.

Colman, Steven M., Kenneth L. Pierce, and Peter W. Birkeland. 1987. "Suggested Terminology for Quaternary Dating Methods." Quaternary Research 28 (2): 314–19. https://doi.org/10.1016/0033-5894(87)90070-6.

Dean, Jeffrey S. 1978. "Independent Dating in Archaeological Analysis." In Advances in Archaeological Method and Theory, 223–55. Elsevier. https://doi.org/10.1016/B978-0-12-003101-6.50013-5.

Dunnell, Robert C. 1970. "Seriation Method and Its Evaluation." American Antiquity 35 (03): 305–19. https://doi.org/10.2307/278341.

Dye, Thomas S. 2016. "Long-Term Rhythms in the Development of Hawaiian Social Stratification." Journal of Archaeological Science 71 (July): 1–9. https://doi.org/10.1016/j.jas.2016.05.006.

O'Brien, Michael J, and R. Lee Lyman. 2002. Seriation, Stratigraphy, and Index Fossils: The Backbone of Archaeological Dating. Dordrecht: Springer.

Poblome, J., and P. J. F. Groenen. 2003. "Constrained Correspondence Analysis for Seriation of Sagalassos Tablewares." In The Digital Heritage of Archaeology, edited by M. Doerr and A. Sarris. Hellenic Ministry of Culture.

South, S. A. 1977. Method and Theory in Historical Archaeology. Studies in Archeology. New York: Academic Press.
https://www.nature.com/articles/s41467-020-18841-7?error=cookies_not_supported&code=93e8985b-93c9-4723-bc66-8c0b32d53fb6
## Introduction

Every organ surface and body cavity is lined by a confluent collective of epithelial cells. In homeostatic circumstances the epithelial collective remains effectively solid-like and sedentary. But during morphogenesis, remodeling or repair, as well as during malignant invasion or metastasis, the epithelial collective becomes fluid-like and migratory1,2,3,4. This conversion from sedentary to migratory behavior has traditionally been understood as a manifestation of the epithelial-to-mesenchymal transition (EMT) or the partial EMT (pEMT)5,6,7,8. Since its discovery in 1982, EMT has been intensively studied and well-characterized6,9,10. EMT is marked by progressive loss of epithelial character, including disrupted apico-basal polarity, disassembled cell–cell junctions, and impaired epithelial layer integrity and barrier function. This loss of epithelial character is accompanied by progressive gain of mesenchymal character, including gain of front–back polarity, activation of EMT-inducing transcription factors, and expression of mesenchymal markers11,12. In this process each epithelial cell tends to free itself from adhesions to immediate neighbors, and thereby can acquire migratory capacity and invasiveness.

It has been suggested that the epithelial–mesenchymal axis is flanked at its extremes by unequivocal epithelial versus mesenchymal phenotypes separated by a continuous spectrum of hybrid epithelial/mesenchymal (E/M) or pEMT phenotypes5,8,13,14,15. Although such a one-dimensional spectrum of states has been regarded by some as being overly simplistic5,16, it is widely agreed that pEMT allows cell migration without full mesenchymal individualization17,18,19,20. During pEMT, cells coordinate with their neighbors through intermediate degrees of junctional integrity coupled with partial loss of apical–basal polarity and acquisition of graded degrees of front–back polarity and migratory capacity5,8. Moreover, EMT/pEMT is associated with the cells of highly aggressive tumors, endows cancer cells with stemness and resistance to cytotoxic anticancer drugs, and may be required in the fibrotic response21,22. In development23,24,25,26, wound healing27,28, fibrosis29, and cancer30,31,32,33,34, EMT/pEMT has provided a well-accepted framework for understanding collective migration, and in many contexts has been argued to be necessary2,17,22,28,35,36.

In certain contexts, however, this conversion from sedentary to migratory behavior has been attributed to the recently discovered unjamming transition (UJT), in which epithelial cells migrate collectively and cooperatively37,38,39,40. By contrast with EMT, UJT in epithelial tissues was discovered only recently and remains poorly understood37,38,39,40,41,42,43,44,45,46,47,48,49,50. During UJT the epithelial collective transitions from a jammed phase wherein cells remain virtually locked in place, as if the cellular collective were frozen and solid-like, toward an unjammed phase wherein cells often migrate in cooperative multicellular packs and swirls reminiscent of fluid flow. In both the solid-like jammed phase and the fluid-like unjammed phase, the epithelial collective retains an amorphous disordered structure. In the jammed phase, the motion of each individual cell tends to be caged by its nearest neighbors. But as the system progressively unjams and transitions to a fluid-like phase, local rearrangements amongst neighboring cells become increasingly possible and tend to be cooperative, intermittent, and heterogeneous46,51,52,53,54.
While poorly understood, cellular jamming and unjamming have been identified in epithelial systems in vitro37,39,43,46,48,49,54,55,56, in developmental systems in vivo39,42,50,57,58,59, and have been linked to the pathobiology of asthma37,38,39 and cancer45,47,60,61. Despite strong evidence implicating both pEMT and UJT in the solid–fluid transition of a cellular collective and the resulting collective migration of cells of epithelial origin35,37,38,39, the relationship between these transitions remains undefined62. For example, it is unclear if UJT necessarily entails elements of the pEMT program. The converse is also in question. As such, we do not yet know if the structural, dynamical, and molecular features of these solid–fluid transitions might be identical, overlapping, or entirely distinct.

To discriminate among these possibilities, we examine mature, well-differentiated primary human bronchial epithelial (HBE) cells grown in air–liquid interface (ALI) culture; this model system is known to recapitulate the cellular constituency and architecture of intact human airway epithelia63,64,65. Here we show that UJT in this system is sufficient to account for vigorous epithelial layer migration in the absence of pEMT. Using the confluent layer of HBE cells, we trigger UJT by exposing the sedentary layer to a mechanical stress that has been tightly linked to aberrant remodeling of the asthmatic airway37,38,39,66. Cells thereafter migrate cooperatively, align into packs locally, and elongate systematically. Nevertheless, cell–cell junctions, apico-basal polarity, and barrier function remain intact in response, and mesenchymal markers remain unapparent. As such, pEMT is not evident. When we trigger pEMT and associated cellular migration by exposing the sedentary layer to TGF-β1, which is known to induce pEMT22,67, metrics of UJT versus pEMT diverge. To account for these striking physical observations a new computational model attributes the effects of pEMT mainly to diminished junctional tension but attributes those of UJT mainly to augmented cellular propulsion. Together, these findings establish that UJT is sufficient to account for vigorous epithelial layer migration even in the absence of pEMT. Distinct gateways to cellular migration therefore become apparent—UJT as it might apply to migration of epithelial sheets on a collective basis, and EMT/pEMT as it might apply to migration of mesenchymal cells on either a solitary or a collective basis.

## Results

### Cellular dynamics and structure: UJT versus pEMT diverge

To induce UJT we exposed the cell layer to apical-to-basal mechanical compression of 2.9 kPa (30 cm H2O) for 3 h. This level of compression was chosen for three reasons. First, this level of compressive stress mimics that experienced by the epithelial layer during asthmatic bronchoconstriction66,68,69,70,71,72,73. Second, based upon simple physical arguments this level of compressive stress is readily generalizable to other situations. For example, the bulk compressive stiffness modulus of the cell is quite large—on the order of 10⁷–10⁸ Pa—whereas traction stresses are typically on the order of 100 Pa, intercellular stresses are on the order of 1000 Pa, cellular Young's moduli vary from 100 Pa to 10 kPa depending on cell type74, and the cortical shear stiffness is on the order of 1000 Pa37,75,76,77.
As such, in the vicinity of the lateral intercellular space a compressive stress in the kPa range has been shown to cause localized cellular strains on the order of 0.1, which in turn have been shown to trigger robust intracellular signaling66. More generally, forces applied locally or generated endogenously are known to initiate biochemical signaling, cellular deformation and migration78,79,80. Finally, we have established in previous reports that pathologic responses increase with increasing compressive stress but exhibit a maximum response at 20–30 cm H2O. These pathologic responses include asthmatic airway remodeling81,82,83,84,85,86,87,88,89,90 and a robust UJT37,38,39. To induce pEMT we exposed the cell layer to TGF-β1 (10 ng/ml), a well-known EMT-inducing agent22,67. Although pEMT has been defined inconsistently in the literature, this dose has been established to induce pEMT in HBE cells in ALI culture67, a result borne out by our experiments and in line with the criteria defined in a recent consensus statement on EMT12. That consensus emphasizes that EMT status cannot be assessed solely on the basis of a single or even a small number of molecular markers. Rather, it asserts that primary criteria for defining EMT status must include morphological and functional cellular properties in conjunction with molecular markers. Accordingly, to assess EMT status we have examined expression and localization of mesenchymal markers (N-cadherin, fibronectin-EDA, vimentin, ZEB-1, Snail1) and epithelial markers (E-cadherin and ZO-1) together with functional assays (cell migration and layer permeability) and cellular structure (F-actin organization and cell shape). As a time-matched positive control for pEMT we used the cell layer subjected to graded concentrations of TGF-β1 (see “Methods” section). Following exposure to either compressive stress or TGF-β1 in our experimental setup, signatures of UJT and pEMT were evaluated at 24, 48, and 72 h (Supplementary Fig. 1). In a sedentary confluent epithelial layer, initiation of either UJT or pEMT results in collective migration8,18,35,37,38,39. While the precise dynamic and structural characteristics of the HBE layer undergoing pEMT have not been previously explored, UJT is known to be marked by the onset of stochastic but cooperative migratory dynamics together with systematic elongation of cell shapes37,38,39,49,91,92. Dynamics: We quantified migratory dynamics using average cell speed and effective diffusivity (Deff)37,92. Control HBE cells were essentially stationary, as if frozen in place, exhibiting only occasional small local motions which were insufficient for cells to uncage or perform local rearrangements with their immediate neighbors (Fig. 1a–c). We refer to this as kinetic arrest or, equivalently, the jammed phase. Following exposure to mechanical compression, however, these cells underwent UJT and became migratory37,38,39, with both average speed and effective diffusivity increasing substantially over time and maintained to at least 72 h following compression (Fig. 1a–c, Supplementary Table 1). Following exposure to TGF-β1, jammed cells underwent pEMT22,67, as documented in greater detail below (Fig. 2, Supplementary Figs. 2 and 3). Up to 24 h later these cells migrated with average speeds comparable to cells following compression (Fig. 1a–c). However, as pEMT progressed beyond 24 h cellular motions slowed to the baseline levels, indicating return to kinetic arrest and a jammed phase (Fig. 1a–c, Supplementary Table 1). 
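For reference, the effective diffusivity used here is obtained, in two dimensions, from the long-time slope of the mean square displacement of cell trajectories (a standard relation, stated here for clarity; the exact windowing is given in "Methods"):

$$\mathrm{MSD}(\Delta t) = \left\langle \left| \mathbf{r}(t + \Delta t) - \mathbf{r}(t) \right|^2 \right\rangle \approx 4 \, D_{\mathrm{eff}} \, \Delta t$$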
Structure: We segmented cells from images of cell boundaries labeled for E-cadherin or ZO-1 (Supplementary Fig. 2) and quantified cell shapes using the cellular aspect ratio (AR), calculated as the ratio of the major axis to the minor axis of the cellular moment of inertia39. Control HBE cells exhibited a cobblestone, rounded, and relatively uniform appearance with $$\overline{\mathrm{AR}}$$ = 1.6 (Fig. 1d, e). Following exposure to compression, however, cell shapes became more elongated and variable, with progressive growth of $$\overline{\mathrm{AR}}$$ to 2.3 at 72 h (Fig. 1d, e, Supplementary Table 1). Following exposure to TGF-β1, cells elongated to $$\overline{\mathrm{AR}}$$ = 1.8 at 24 h but plateaued thereafter (Fig. 1d, e, Supplementary Table 1). As discussed below, the boundaries of cells treated with TGF-β1 also exhibited increased edge tortuosity (Fig. 2d).

After compression, both cell migration and elongation grew over time (Fig. 1). In agreement with previously published work, these data indicate that the control unperturbed cells exhibited dynamic and structural signatures of a jammed epithelium, while the compressed cells exhibited dynamics and structural signatures of an unjammed epithelium37,38,39. After TGF-β1 treatment, cell migration and elongation initially increased but migration thereafter tapered off and cell shapes remained unchanged. By these dynamic and structural metrics, UJT and pEMT were indistinguishable at 24 h but subsequently diverged. Overall, UJT and pEMT showed distinct profiles of cell migration and shape.

### During UJT, epithelial character persists

We next investigated the extent to which molecular signatures of pEMT and UJT were distinct or overlapping. Control cells exhibited prominent tight junctions marked by apical ZO-1 and adherens junctions marked by lateral E-cadherin (Fig. 2a–c, Supplementary Fig. 2a–c). ZO-1 and E-cadherin appeared as ring-like structures, demarcating continuous cell boundaries and forming cell–cell junctions (Fig. 2b, c). Cell boundaries were relatively straight (Fig. 2d), suggesting that cell–cell junctions were under the influence of mechanical line tension93,94. Further, cells exhibited cortical F-actin rings, which are a hallmark of mature epithelium (Fig. 2f, Supplementary Fig. 3a), and exhibited undetectable or very low expression of mesenchymal markers (Fig. 2g–i, Supplementary Fig. 3b, c). In well-differentiated, mature pseudostratified HBE cells which are jammed, these data serve as a positive control for fully epithelial character.

Exposure to TGF-β1 (10 ng/ml) disrupted epithelial architecture and led to acquisition of mesenchymal character (Fig. 2, Supplementary Figs. 2 and 3). Importantly, the transition through pEMT to full EMT strongly varies depending on both the degree and the duration of the EMT-initiating signal. Induction of full EMT of the well-differentiated HBE layer required extended exposure to TGF-β1 (Supplementary Fig. 4), but here we focus on pEMT. As expected, in response to TGF-β1 both apico-basal polarity and tight and adherens junctions, as marked by ZO-1 and E-cadherin, became progressively disrupted (Fig. 2a–c, Supplementary Fig. 2a–c), and the level of E-cadherin protein decreased (Fig. 2h, Supplementary Fig. 2d). Remaining cell–cell junctions stained for E-cadherin developed increased tortuosity (i.e., the ratio of edge contour length to edge end-to-end distance) suggestive of a reduction in line tension (Fig. 2d)93,94.
Furthermore, to confirm disruption of barrier function, we measured barrier permeability using dextran-FITC (40 kDa)81 and observed a substantial increase (Fig. 2e). Cells progressively lost their cortical actin rings while acquiring abundant apical and medial F-actin fibers, a phenotypical feature of mesenchymal cells95 (Fig. 2f, Supplementary Fig. 3a). Cells also acquired increased expression of EMT-inducing transcription factors including Zeb1 and Snail1, and mesenchymal markers including N-cadherin, vimentin, and fibronectin (EDA isoform) (Fig. 2g–i, Supplementary Fig. 3b, c). Increased expression of these mesenchymal markers occurred simultaneously with disruption of epithelial junctions, thus indicating a clear manifestation of a hybrid E/M phenotype and pEMT. In HBE cells undergoing pEMT, these data serve as a positive control for loss of epithelial character and gain of mesenchymal character.

Exposure to compression (30 cm H2O), by contrast, impacted neither apico-basal polarity nor junctional integrity, as indicated by the apical localization of ZO-1 and lateral localization of E-cadherin (Fig. 2a, Supplementary Fig. 3a). These junctions were continuous (Fig. 2b, c, Supplementary Fig. 2b, c) and nearly straight, thus indicating that during UJT the junctional tension was largely maintained (Fig. 2d). Unlike during pEMT, during UJT the overall level of E-cadherin protein remained unaffected (Fig. 2h, Supplementary Fig. 2d). During UJT cells maintained an apical cortical F-actin ring (Fig. 2f, Supplementary Fig. 3a). While barrier function was compromised during pEMT, it remained intact during UJT (Fig. 2e). By contrast to cells during pEMT, cells during UJT did not gain a detectable mesenchymal molecular signature (Fig. 2g–i, Supplementary Fig. 3). These data show that epithelial cells undergoing UJT, in contrast to pEMT, maintained fully epithelial character and did not gain mesenchymal character. UJT is therefore distinct from EMT.

Unlike those underlying EMT12, molecular mechanisms of UJT are largely unexplored. ERK signaling has recently been shown to be involved in UJT in a model of breast cancer60 and waves of ERK activation regulate collective epithelial migration during wound healing in MDCK monolayers96,97. During the compression-induced UJT, a variety of transcriptional and intracellular signaling pathways are activated, including EGFR, PKC, and ERK pathways66,81,83,87,88,98. Indeed, we found that blocking ERK activity using the pharmacological inhibitor U0126 (10 µM)81,99 attenuated compression-induced cellular motility, thus confirming that ERK signaling is required for compression-induced UJT (Supplementary Fig. 5).

### During UJT, cellular cooperativity emerges

To further discriminate between pEMT and UJT, we next focused on cooperativity of cell shape orientations and migratory dynamics. Because of immediate cell–cell contact in a confluent collective, changes of shape or position of one cell necessarily impact shapes and positions of neighboring cells; cooperativity amongst neighboring cells is therefore a hallmark of jamming44,45,52,100,101. We measured cooperativity in two ways. First, we used segmented cell images to measure cell shapes and shape cooperativity that defined structural packs (Fig. 3a). We identified those cells in the collective that shared similar shape-orientation and then used a community-finding algorithm to identify contiguous orientation clusters (see "Methods" section, Fig. 3a).
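As a concrete illustration of this idea, the sketch below groups cells into contiguous orientation clusters. It is a minimal stand-in and not the paper's own algorithm (which is specified in its Methods): it assumes a precomputed neighbor graph and per-cell orientation angles, treats orientations as nematic, and takes connected components via union-find. All names are ours.

import numpy as np

def orientation_packs(neighbors, theta, tol_deg=20.0):
    # neighbors: iterable of (i, j) index pairs of adjacent cells
    # theta:     per-cell orientation angles in degrees
    # Two neighboring cells join the same pack when their orientations
    # differ by less than tol_deg, with theta and theta + 180 equivalent.
    parent = list(range(len(theta)))

    def find(i):                                  # path-compressing find
        while parent[i] != i:
            parent[i] = parent[parent[i]]
            i = parent[i]
        return i

    for i, j in neighbors:
        d = abs(theta[i] - theta[j]) % 180.0
        if min(d, 180.0 - d) < tol_deg:           # nematic angle difference
            parent[find(i)] = find(j)             # union the two packs

    return np.array([find(i) for i in range(len(theta))])  # pack id per cell

# Example: four cells in a row; the first three share an orientation
# (175 degrees is equivalent to -5 degrees), so they form one pack.
packs = orientation_packs([(0, 1), (1, 2), (2, 3)],
                          np.array([10.0, 175.0, 8.0, 90.0]))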
In both jammed and pEMT layers, cellular collectives formed orientational packs that contained on the order of 5–10 cells and remained constant over time (Fig. 3c). After UJT, by contrast, collectives formed orientational packs that contained 45 ± 22 cells at 24 h and grew to 237 ± 45 cells by 72 h (mean ± SEM, Fig. 3c, Supplementary Table 1).

Second, we used cellular trajectories to measure dynamic cooperativity that defined migratory packs (Fig. 3b). Using optical flow over cell-sized neighborhoods102, cellular trajectories were constructed by integration. We then used the same community-finding algorithm as above, but here applied to trajectory orientations rather than cell shape orientations (see "Methods" section, Fig. 3b). As a measure of effective pack diameter we used (4a/π)^{1/2}, where a is pack area. In jammed layers, cellular collectives exhibited small dynamic packs spanning 76 ± 31 µm and containing ~11 ± 7 cells (see "Methods" section, Supplementary Table 1). Interestingly, during pEMT, cells initially moved in dynamic packs spanning 223 ± 67 µm containing ~71 ± 29 cells at 24 h, but these packs disappeared over a time-course matching the disruption of the tight and adherens junctions (Figs. 2b, c and 3b, d). By contrast, during UJT cellular collectives initially exhibited relatively smaller dynamic packs spanning 115 ± 36 µm containing ~19 ± 9 cells at 24 h, but grew to packs spanning 328 ± 74 µm containing ~139 ± 55 cells at 72 h (Fig. 3b, d, Supplementary Table 1).

To determine cellular cooperativity, we employed independent metrics for cellular structure and migratory dynamics. During UJT, structural orientation packs rose monotonically from 24 to 72 h, whereas dynamic orientation packs leveled off from 48 to 72 h (Fig. 3). Despite this unexplained discordance, our data indicate that after UJT, but not after pEMT, structure and dynamics became increasingly cooperative. These observations (Figs. 2, 3, Supplementary Figs. 2 and 3), taken together, indicate that coordinated cellular movement during UJT occurred in conjunction with maintenance of epithelial morphology and barrier function (Table 1). These data are consistent with an essential role for intact junctions in cellular cooperation103,104,105,106,107,108, but are the first to show emergence of coordinated cellular migration in a fully confluent epithelium with no evidence of mixed E/M characteristics or pEMT.

### pEMT versus UJT: discriminating among fluid-like phases

Results above identify two distinct migratory mechanisms, one arising from pEMT and the other from UJT. To better understand underlying mechanical factors that differentiate pEMT versus UJT, we extend previous computational analyses based on so-called vertex models37,39,91,92,109,110,111,112. This extended model is described in detail in the Supplementary Methods and is referred to here as the dynamic vertex model (DVM). In the DVM each cell within the confluent epithelial layer adjusts its position and its shape so as to minimize local mechanical energy. This energy, in turn, derives from three main contributions: deformability of the cytoplasm and associated changes of cell area; contractility of the apical actin ring and associated changes of its perimeter; and homotypic binding of cell–cell adhesion molecules, such as cadherins, together with extensibility of attendant contractile elements and associated changes in cell perimeter37,109,110.
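For orientation, in standard vertex-model notation these three contributions are commonly written as an energy of the form (a conventional form given here for context, not the DVM's exact definition, which appears in the Supplementary Methods):

$$E = \sum_i \left[ K_A \left( A_i - A_0 \right)^2 + K_P \left( P_i - P_0 \right)^2 \right]$$

where $$A_i$$ and $$P_i$$ are the area and perimeter of cell $$i$$, $$K_A$$ and $$K_P$$ are area and perimeter stiffnesses, and, with lengths measured in units of $$\sqrt{A_0}$$, the preferred perimeter becomes the dimensionless parameter $$p_0 = P_0 / \sqrt{A_0}$$.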
These structures and associated energies, taken together, lead to a preferred cell perimeter, p0, and determine the tension borne along the cell–cell junction, here called edge tension37,109,110. Importantly, contributions of cortical contraction and cell–cell adhesion to system energy are of opposite signs and are therefore seen to be in competition113; cortical contraction favors a shorter cell perimeter whereas cell–cell adhesion favors a longer cell perimeter. Equivalently, decreasing cortical contraction causes edge tension to decrease whereas decreasing cell–cell adhesion causes edge tension to increase. As elaborated in the Supplementary Methods, DVM departs from previous analyses by allowing cell–cell junctions to become curved and tortuous, much as is observed during pEMT. Edge tortuosity can arise in regions where the effects of edge tension become small compared with intracellular pressure differences between adjacent cells.

In the DVM, increasing p0 mimics well progressive disruption of the cell–cell junction and is thus seen to reflect the known physical effects of pEMT (Fig. 4a). For example, when p0 is small and propulsive forces are small the cell layer remains jammed (panel i). Cells on average assume disordered but compact polygonal shapes37,109 and cell–cell junctions are straight. But as p0 is progressively increased cell shapes become progressively more elongated and cell edges become increasingly curvilinear and tortuous, as if slackened (panels ii and iii). Indeed, edge tensions progressively decrease (as depicted by intensities of the lines) with a transition near p0 = 4.1, at which point edge tensions approach zero and edge tortuosity begins to rise (Fig. 4b). Loss of edge tension coincides with fluidization of the layer and a small increase in cell speed (inset), at which point the shear modulus114 and energy barriers vanish (Supplementary Fig. 6a). Importantly, for p0 to increase as cell–cell adhesion diminishes, as necessarily occurs as pEMT progresses, DVM suggests that cortical contraction must diminish even faster. Vanishing edge tension in the fluidized state is consistent with the notion that EMT weakens cell–cell contacts, and junctions therefore become unable to support mechanical forces.

When propulsive forces, v0, are increased while p0 is kept fixed, results mimic well the known physical effects of UJT (Fig. 4c). Cell shapes become progressively elongated but cell edges remain straight (panels iv–vi). Edge tension increases but without an increase in edge tortuosity (Fig. 4d). Simultaneously, the speed of the cell migration increases appreciably (inset). This increase in cell speed coincides with fluidization of the layer, at which point cellular propulsion has become sufficient to overcome energy barriers that impede cellular rearrangements (Supplementary Fig. 6b, c). Thus, DVM predicts a dominant role for propulsion during UJT. Indeed, previous experimental work has linked traction forces, propulsion, and collective epithelial migration115, which has been further shown to require ERK activation97,116. ERK activation, which is required for compression-induced UJT (Supplementary Fig. 5), thus provides a mechanistic link between theory and experiment.

During UJT versus pEMT, the DVM predicts, further, that two different metrics of cell shape diverge (Fig. 5a). The cellular AR emphasizes cellular elongation but deemphasizes tortuosity whereas the shape index, q (perimeter/area^{1/2}), also depends on elongation but emphasizes tortuosity.
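In symbols, and consistent with the definitions above (one common convention, added here for clarity):

$$q_i = \frac{P_i}{\sqrt{A_i}}, \qquad \mathrm{AR}_i = \sqrt{\lambda_{\max} / \lambda_{\min}}$$

where $$P_i$$ and $$A_i$$ are the perimeter and area of cell $$i$$, and $$\lambda_{\max}$$ and $$\lambda_{\min}$$ are the eigenvalues of the cell's second-moment (inertia) tensor. A tortuous boundary inflates $$q_i$$ at fixed $$\mathrm{AR}_i$$, which is why the two metrics are predicted to diverge under pEMT but not under UJT.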
Indeed, direct measurements of AR versus q from cells undergoing UJT versus pEMT are consistent with the predicted relationship between AR and q (Fig. 5a, Supplementary Fig. 6d). As regards cell shapes and their changes, UJT versus pEMT are therefore seen to follow divergent pathways. Together, these results attribute the effects of pEMT mainly to diminished edge tension but attribute those of UJT mainly to augmented cellular propulsion. As such, DVM provides a physical picture that helps to explain how the manifestations of pEMT versus UJT on cell shape and cell migration are distinct.

We then used DVM to better understand emergence of collective behavior during UJT (Fig. 3). In promoting collective behaviors, previous computational approaches have pointed toward the importance of cell motility, cell–cell interactions, persistence, confinement, and heterogeneity43,117,118,119,120,121,122. We wish to emphasize, however, that these approaches often impose on an ad hoc basis a local penalty when any given cell fails to align with its immediate neighbors49,121,122,123,124. Cooperativity and flocking are therefore built into such theories ab initio, and thus are virtually guaranteed to arise. DVM, by contrast, imposes no such penalty. When they arise in DVM simulations, cooperativity and flocking are therefore spontaneous and emergent.

Using DVM we assign to each cell a migratory persistence time, τp, which naturally gives rise to a single-cell migratory persistence length, in units of average cell diameter, $$l_0 = v_0\tau_{\mathrm{p}}/\Gamma$$ where $$\Gamma$$ is the viscous damping coefficient on each cell (Supplementary Methods)92. When persistence is small cooperative packs remain few and small. But as persistence progressively increases prominent cell packs are seen to emerge and grow (Fig. 4e, f). Such predicted dependence of the size of the collective emergent pack upon the persistence of individual cells is explained in terms of how the local energy barrier to cellular rearrangement is overcome. When persistence is small the local energy barrier to rearrangement can be overcome, and associated unjamming and migration can occur, only through the application of local cellular propulsive forces (panel vii). In the DVM these localized cellular rearrangements are stochastic and, therefore, result in random uncoordinated patterns of cellular migration and correspondingly small packs (panel vii). When persistence becomes larger, however, cell displacements become spontaneously coordinated across multiple cell diameters, and propulsive forces tend to become aligned and cooperative. As such, the cellular collective tends to unjam via cooperative pack-based migration rather than localized granular rearrangements (panels viii and ix). Equivalently, in the limit of more persistent yet uncoordinated cellular propulsive forces, low-frequency (i.e. low-energy) elastic mechanical modes become available to the system as opposed to the high-frequency modes which are associated with localized rearrangements. Since low-frequency elastic modes are spatially extended and collective in nature, larger migratory packs emerge92,125. This mode of collective migration has also been theoretically predicted recently by Henkes et al.126. As persistence is increased, DVM thus predicts that pack size and cell speed increase in concert (Figs. 4f and 5b). Direct measurements of cell speed versus mean pack size from cells undergoing UJT confirm the predicted relationship (Fig. 5b).
Emergence of coordinated cell movement facilitates more efficient migration (Figs. 4e, f and 5b). To explain how the confluent epithelial collective can transition from a solid-like to a fluid-like phase, experimental and computational results, taken together, thus point to a unified physical picture of two distinct mechanisms. In pEMT, fluidization arises mainly from reduction in junctional integrity and loss of edge tension. In UJT, by contrast, fluidization arises mainly from increased propulsion or persistence. Moreover, UJT is marked by emergent migratory packs together with a systematic tendency for cellular elongation.

## Discussion

Development, wound repair, and cancer metastasis are fundamental biological processes. In each process cells of epithelial origin are ordinarily sedentary but can become highly migratory. The mechanism by which an epithelial layer transitions from sedentary to migratory behavior had, in many contexts, been thought to require EMT or pEMT5,127,128,129,130. During EMT/pEMT cells lose apico-basal polarity and epithelial markers, while they concurrently gain front-to-back polarity and mesenchymal markers. Each cell thereby frees itself from the tethers that bind it to surrounding cells and matrix and assumes a migratory phenotype. In the process, epithelial barrier function becomes compromised. Here by contrast we establish the UJT as a distinct migratory process in which none of these events pertain. Collective epithelial migration can occur through UJT without EMT or pEMT.

EMT/pEMT refers not to a unique biological program but rather to any one of many programs, each with the capacity to confer on epithelial cells an increasingly mesenchymal character12,127. In doing so, EMT/pEMT tends to be a focal event wherein some cue stimulates a single cell—or some cell subpopulation—to delaminate from its tissue of origin and thereafter migrate to potentially great distances26,131. As such, EMT likely evolved as a mechanism that allows individual epithelial cells or cell clusters to separate from neighbors within the cell layer and thereafter invade and migrate through adjacent tissue132. Like EMT/pEMT, the UJT is observed in diverse contexts and may encompass a variety of programs37,38,39,41,42,43,44,45,46,47,48,49,87. But by contrast with EMT/pEMT, UJT comprises an event that is innately collective, wherein some cue stimulates cells constituting an integrated tissue to migrate collectively and cooperatively133. Due to the presence of a non-degradable basal transwell-insert, epithelial cells in our system cannot invade, and thus we could not compare invasion phenotypes between UJT and pEMT. Nonetheless, our data are consistent with the hypothesis that UJT might have evolved as a mechanism that allows epithelial rearrangements, migration, remodeling, plasticity, or development within a tissue under the physiological constraint of preserving tissue continuity, integrity, and barrier function.

We establish here in the mature layer of primary HBE cells that UJT does not require pEMT. That finding in turn motivates three new questions. First, UJT has now been observed across diverse biological systems37,38,39,41,42,43,44,45,46,47,48,49, but we do not yet know whether UJT is governed across these diverse systems by unifying biological processes or conserved signaling pathways. Second, although we now know that UJT can occur in the absence of pEMT, it remains unclear if pEMT can occur in the absence of UJT.
This question is illustrated, for example, by the case of ventral furrow formation during gastrulation in the embryo of Drosophila melanogaster, which requires the actions of EMT transcription factors134,135,136. Prior to full expression of EMT and dissolution of cell–cell junctions in Drosophila, embryonic epithelial cells have been shown to unjam; cell shapes elongate and become more variable as cells begin to rearrange and migrate39. Supporting that notion, our data in HBE cells point towards a role for UJT in the earliest phase of pEMT, when junctional disruption and expression of EMT transcription factors and mesenchymal markers are apparent but minimal (24 h; Supplementary Figs. 2 and 3), wherein cells are seen to unjam, elongate, and migrate in large dynamic packs (Figs. 1a, b and 3d). These observations argue neither for nor against the necessity of EMT for progression of metastatic disease127,129,137,138, but do suggest the possibility of an ancillary mechanism.

In many cases the striking distinction between EMT/pEMT versus UJT as observed here is unlikely to be so clear cut. It has been argued, for example, that EMT-induced intermediate cell states are sufficiently rich in their confounding diversity that they cannot be captured along a linear spectrum of phenotypes flanked at its extremes by purely epithelial versus mesenchymal states5,16. In connection with a cellular collective comprising an integrated tissue, observations reported here demonstrate, further, that fluidization and migration of the collective is an even richer process than had been previously appreciated. Mixed epithelial and mesenchymal characteristics, and the interactions between them, are thought to be essential for carcinoma cell invasion and dissemination13,14,16,20,120, but how UJT might fit into this physical picture remains unclear139,140. More broadly, the Human Lung Cell Atlas now points not only to dramatic heterogeneities of airway cells and cell states, but also to strong proximal-to-distal gradients along the airway tree141. But we do not yet know how these heterogeneities and their spatial gradients might impact UJT locally, or, conversely, how UJT might impact these gradients. In that light, the third and last question raised by this work is the extent to which EMT/pEMT and UJT might work independently, sequentially, or cooperatively to effect morphogenesis, wound repair, and tissue remodeling, as well as fibrosis, cancer invasion, and metastasis142,143.

## Methods

### Cell culture

Primary HBE cells at passage 2 were differentiated in ALI, described below37,81,83,84,85. Primary HBE cells were isolated at Passage 0 at the Marsico Lung Institute/Cystic Fibrosis Research Center at the University of North Carolina, Chapel Hill. Human lungs unsuitable for transplantation were obtained under protocol #03-1396 approved by the University of North Carolina at Chapel Hill Biomedical Institutional Review Board. Informed consent was obtained from authorized representatives of all organ donors. Lungs were from non-smokers with no history of chronic lung disease. Demographic information is available for all donors used in our study upon request. Cells were expanded to Passage 2 in our lab, and used for all experiments at Passage 2, as described in the manuscript. Passage 2 primary HBE cells were plated onto type I collagen (0.05 mg/ml) coated transwell inserts (Corning, 12 mm, 0.4 µm pore, polyester) and maintained in a submerged condition for 4–6 days.
Culture media consisted of a 1:1 mixture of DMEM (high glucose, 4.5 g/L) and bronchial epithelial basal medium (BEBM, Lonza) supplemented with bovine pituitary extract (BPE, 52 µg/ml), epidermal growth factor (EGF, 0.5 ng/ml), epinephrine (0.5 µg/ml), hydrocortisone (0.5 µg/ml), insulin (5 µg/ml), triiodothyronine (6.5 ng/ml), transferrin (10 µg/ml), gentamicin (50 µg/ml), amphotericin-B (50 ng/ml), bovine serum albumin (1.5 µg/ml), nystatin (20 units/ml), and retinoic acid (50 nM). Thus, for the entire culture period, HBE cells were maintained in defined, serum-free media81. Once the layer became confluent, medium was removed from the apical surface and the ALI condition was initiated. Over 14–17 days in ALI, the cells differentiated and formed a pseudostratified epithelium that recapitulated the cellular architecture and constituency of the intact human airway39,63,64,86,144. Prior to the experiments, cells were maintained for 20 h in minimal medium depleted of EGF, BPE, and hydrocortisone. For experiments with time points longer than 24 h, cells were fed with fresh minimal media at 48 h following the initial media change prior to exposure. Experiments were repeated independently with primary cells from at least n = 3–4 donors. HBE cells were derived from donors with no history of smoking or respiratory disease, as used in our previous studies37,81,83,84,85. Experimental quantifications are shown across all donors, and the reported n is the number of independent donors used.

To initiate pEMT, cells were treated with recombinant human TGF-β1 (10 ng/ml, Cell Signaling Technology)67. This dose of TGF-β1 was chosen based on a dose-response experiment at 1, 10, and 50 ng/ml, doses at which EMT is induced in a variety of systems145,146,147,148,149. In well-differentiated HBE cells, 10 ng/ml is an effective dose to induce hallmarks of complete EMT at 14 days (Supplementary Fig. 4). Our analysis was performed between 24 and 72 h, while cells exhibited widely accepted signatures of pEMT5,8. Across doses, we found slight variations in the exact levels of epithelial and mesenchymal markers, but our conclusions remain unchanged.

To initiate UJT, cells were exposed to mechanical compression with an apical-to-basal pressure differential of 30 cm H2O37,81,82,83,84,85. Briefly, silicon plugs with an access port were press-fit into the top of each transwell. Access ports were either open to room air for sham controls or connected to 5% CO2 (balanced room air) compressed to 30 cm H2O. Cells were exposed to compressed air for 3 h. Time-matched control cells were set up with vehicle treatment for TGF-β1 and a sham pressure for mechanical compression. For each donor and experiment, time-matched controls, TGF-β1-treated, and compressed conditions were all performed in parallel, with each experiment stopped at the indicated time (see Supplementary Fig. 1 for the experimental setup).

### Protein and mRNA expression analysis

We detected protein levels by western blot analysis as described previously81. Cell lysates were collected into 150 µl 2× Laemmli buffer with 1 M DTT at 24, 48, or 72 h after initial exposure to stimuli (vehicle/sham, TGF-β1 at 10 ng/ml, or compression at 30 cm H2O). The following antibodies and dilutions were used, with primary antibody diluted in 5% skim milk or 5% BSA according to the manufacturer's instructions: E-cadherin (1:10,000), N-cadherin (1:1000), Snail1 (1:1000), vimentin (1:1000), GAPDH (1:5000), all from Cell Signaling Technology; EDA-fibronectin (1:1000, Sigma).
We report fold-changes of normalized protein levels compared either to vehicle control (for E-cadherin) or to TGF-β1-treated cells at 72 h (for mesenchymal markers) across n = 3 donors. We detected mRNA expression as previously described84. Cells were collected from the conditions and donors described above at 3, 24, or 48 h after the initial exposure to stimuli, and RNA was isolated from cell lysates using the RNeasy Mini Kit (Qiagen) following the manufacturer's instructions. Real-time qRT-PCR was performed using primers listed in Supplementary Table 2, and fold-changes were calculated by the comparative ΔΔCt method150.

### Immunofluorescence staining

At 24, 48, or 72 h after initial exposure to stimuli, cells were fixed with either 4% paraformaldehyde in PBS with calcium and magnesium for 30 min at room temperature, or 100% methanol at −20 °C for 20 min. Cells were permeabilized with 0.2% Triton X-100 for 15 min and blocked with 1% bovine serum albumin and 10% normal goat serum for 1 h. Cells were stained for F-actin (Alexa Fluor 488-phalloidin, 1:40, 30 min) or for proteins of interest, as follows: E-cadherin (1:200, Cell Signaling Technology), ZO-1 (1:100, ThermoFisher), vimentin (1:100, Cell Signaling Technology), cellular fibronectin (Extra Domain A splice variant, denoted FN-EDA, 1:200, EMD Millipore). Cells were counterstained with Hoechst 33342 (1:5000) for nuclei. Following staining, transwell membranes were cut out from the plastic support and mounted on glass slides (Vectashield). Slides were imaged using Zen Blue 2.0 software on a Zeiss Axio Observer Z1 using an Apotome module. Maximum intensity images were generated in ImageJ (v 1.52n). Side-view images were reconstructed from a z-stack, while top-down images were maximum intensity projections generated from ~10 µm of the z-stack.

### Live imaging and dynamic analysis

To determine cellular dynamics, time-lapse movies were acquired and analyzed. Images were taken every 6 min for 6 h, ending at 24, 48, or 72 h after initial exposure to stimuli. Phase contrast images were acquired using Zen Blue 2.0 software on a Zeiss Axio Observer Z1 with stage incubator (37 °C, 5% CO2). Time-lapse movies were analyzed using custom software written in Matlab (R2019a). Cellular dynamics were determined using an optical flow algorithm. The movies were registered to sub-pixel resolution using a discrete Fourier transform method151. Flow fields were calculated from the registered movies using Matlab's opticalFlowFarneback function (R2019a). Trajectories were seeded from the movie's first frame using a square grid with spacing comparable to the cell size and obtained by forward integration of the flow fields; for our field of view there were about 4000 trajectories. The average speed was calculated from the displacement during a two-hour window, and the effective diffusivity was calculated from the slope of the mean square displacement.

### Permeability

Epithelial barrier function was determined by a dextran-FITC flux assay, as described previously81. Directly following time-lapse imaging of HBE cells, 1 mg/ml dextran-FITC (40 kDa; Invitrogen) was added to the apical surface of cells. After 3 h, medium was collected from the basal chamber and used for measuring fluorescence intensity of FITC. Fluorescence intensity measured in media from stimulated cells is expressed as fold-change relative to that in media from time-matched control cells.
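For reference, the comparative ΔΔCt quantification used in the qRT-PCR analysis above follows the standard Livak formulation; the formula below is stated for the reader's convenience and is not reproduced from the paper itself:

$$\Delta C_t = C_t^{\text{target}} - C_t^{\text{reference}}, \qquad \Delta\Delta C_t = \Delta C_t^{\text{treated}} - \Delta C_t^{\text{control}}, \qquad \text{fold change} = 2^{-\Delta\Delta C_t}$$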
### Cell shape analysis

To determine cell shape distributions, we marked cellular boundaries and measured shape characteristics as described below. To mark cellular boundaries, we segmented immunofluorescent cell images using SeedWater Segmenter (v0.5.7.1)13. Images used were maximum intensity projections of ZO-1 and E-cadherin at the apical region of the cell layer. Segmented images were used to determine cell boundaries and extract cell shape information, including apical cell area, perimeter, and AR from the major and minor axes of an equivalent ellipse. This fitted ellipse has the same eigenvalues of the second area moment as the polygon corresponding to the cell boundaries, as published previously39. In addition to cell AR, we computed the cell shape index $q = \text{perimeter}/\sqrt{\text{area}}$. We also extracted individual cell edges and computed the end-to-end distance and the contour distance along the edge to compute the edge tortuosity:

$$\text{Tortuosity} = \frac{\text{contour length}}{\text{end-to-end length}}$$ (1)

### Structural and dynamic cluster analysis

Orientation clusters, or packs, were determined from both cell shape orientation and from cell trajectory orientation, using a community-finding algorithm as described below. Cell shape orientation was determined from segmented immunofluorescent cell images, while cell trajectory orientation was determined from dynamic flow fields. Each cell or trajectory possessed an orientation θ_j with respect to a global axis of reference. The method below was developed for cell shape orientation clusters and was then applied to cell trajectories to determine dynamic orientation clusters.

The determination of orientation clusters started by initiating a neighbor-count on each cell in a given image. We counted the number of neighbors m_i of the ith cell that possessed similar orientations, within a cutoff δθ = ±10°. This increased the neighbor-count on each of these neighbor cells of cell i by the number m_i. We created the set of these neighbor cells for cell i and repeated this neighbor-finding for each of the other members in the set except cell i. We increased the neighbor-counts of all the members by the newly found number of neighbors and updated the set of connected cells. We continued to look for neighbors for all the new members of the set until we were unable to find a neighbor with similar orientation for any new member. This gave us a cluster of structurally connected cells in which each cell has at least one neighbor with orientation within δθ. We called this an orientation-based cluster or a structural pack. We determined the mean pack-size per cell by counting, for each cell, the number of cells in its pack, and averaging. This can be expressed mathematically as follows: if the jth cell belongs to a structural pack containing s_j cells, and there are N_c cells in an image, the mean pack-size per cell is $s = (1/N_c)\sum_{j=1}^{N_c} s_j$. A null test for our algorithm was to set δθ = ±90° and find that all cells in an image became part of the same connected cluster, giving a mean pack-size equal to the number of cells. We performed the same pack-size analysis on the cellular trajectories obtained from the velocity field determined using optical flow. We applied a uniform speed threshold equal to the mean speed in each image and then a cutoff on the orientations of velocity vectors given by δθ = ±10°. The rest of the calculation proceeded as above.
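Before moving on to how pack counts are converted to areas, here is a minimal Python sketch of the orientation-based pack finding described above. It assumes a precomputed neighbor list from the segmentation; function and variable names are illustrative, and the published analysis used custom Matlab code, not this script:

```python
import numpy as np

def find_orientation_packs(orientations, neighbors, cutoff_deg=10.0):
    """Group cells into packs in which every member has at least one
    neighbor whose orientation differs by no more than cutoff_deg.

    orientations: array of angles in degrees, one per cell
    neighbors: dict mapping cell index -> iterable of neighbor indices
    Returns (packs, mean_pack_size_per_cell).
    """
    n = len(orientations)
    unassigned = set(range(n))
    packs = []
    while unassigned:
        seed = unassigned.pop()
        pack, frontier = {seed}, [seed]
        while frontier:
            i = frontier.pop()
            for j in neighbors[i]:
                # nematic angular difference, wrapped to [0, 90] degrees
                # (cell shape orientation is defined modulo 180 degrees)
                d = abs(orientations[i] - orientations[j]) % 180.0
                d = min(d, 180.0 - d)
                if j not in pack and d <= cutoff_deg:
                    pack.add(j)
                    frontier.append(j)
                    unassigned.discard(j)
        packs.append(pack)
    # mean pack-size per cell: each pack of size s contributes s cells,
    # each with pack size s, so the average is sum(s^2) / N_c
    mean_size = sum(len(p) ** 2 for p in packs) / float(n)
    return packs, mean_size
```

With cutoff_deg = 90 this reproduces the null test quoted above: every cell joins one connected cluster and the mean pack-size equals the number of cells (for a connected neighbor graph).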
Once we obtained the number of velocity vectors in each dynamic pack, we converted this to a two-dimensional area corresponding to the size of the pack. We then expressed an effective pack size as the equivalent circular diameter $(4a/\pi)^{1/2}$, where a is the pack area. We also converted this areal pack size into an approximate number of cells by using the average cell size determined for control cells from four donors, from the shape analysis described above.

### Statistics and reproducibility

All of the data were analyzed in Matlab using custom scripts. To determine statistical significance, we ran an ANOVA for each data set, comparing across the multiple donors used. This was followed by post-hoc analysis using a Bonferroni correction, and p < 0.05 was considered significant. All experiments were repeated independently with HBE cells derived from at least three donors, with two biological replicates per condition and timepoint. Dynamic measurements (Figs. 1c, d and 3d; Supplementary Fig. 5b) and functional measurements (Fig. 1d) were repeated in n = 4 donors, while structural measurements (Figs. 1e, 2d, and 3c) were repeated in n = 3 donors. Individual data points for each donor are shown. For each biological replicate used to obtain dynamic measurements, corresponding to an individual transwell, time-lapse imaging movies were taken from 6 to 18 fields of view, from which ~4000 trajectories were obtained via optical flow, as described above. For each biological replicate used to obtain structural measurements, two fields of view were taken, from which ~2000 cells were evaluated. Protein measurements (Fig. 2h, Supplementary Figs. 2d and 3d) were repeated in n = 3 donors from independent experiments with two biological replicates per condition and timepoint. Western blots were loaded in parallel and both run and imaged under identical conditions, and each blot was normalized to its own internal loading control of GAPDH. The blots show representative data that were consistent across the n = 3 donors used. Immunofluorescent images comparing three experimental interventions across three timepoints (Fig. 2a–c, f, g and Supplementary Figs. 2a–c, 3a, b) were not quantified. Included images are representative and display morphology and localization of epithelial and mesenchymal markers that were consistent across n = 3 donors and two biological replicates per donor.

### Reporting summary

Further information on research design is available in the Nature Research Reporting Summary linked to this article.
2023-02-06 14:28:55
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 2, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.3674889802932739, "perplexity": 6607.991062840884}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-06/segments/1674764500339.37/warc/CC-MAIN-20230206113934-20230206143934-00205.warc.gz"}
http://link-springer-com-443.webvpn.fjmu.edu.cn/chapter/10.1007/3-540-54890-4_149
# Central peak signatures from vortices in 2D easy-plane antiferromagnets

F. G. Mertens, A. Völkel (University of Bayreuth, Germany); G. M. Wysin (Kansas State University, Manhattan, USA); A. R. Bishop (Los Alamos National Laboratory, USA)

Part I: Magnetic and Optical Systems. Part of the Lecture Notes in Physics book series (LNP, volume 393).

## Abstract

We investigate the dynamics of a classical, anisotropic Heisenberg model. Assuming a dilute gas of ballistically moving vortices above the Kosterlitz-Thouless transition temperature, we calculate the dynamic form factors $$S(\vec q,\omega )$$ and test them by combined Monte Carlo-molecular dynamics simulations. For both in-plane and out-of-plane correlations we predict and observe central peaks (CPs) which are, however, produced by quite different mechanisms, depending on whether the correlations are globally or locally sensitive to the presence of vortices. The positions of the peaks in q-space depend on the type of interaction and on the velocity dependence of the vortex structure. For a ferromagnet both CPs are centered at q = 0; for an antiferromagnet the static vortex structure is responsible for a CP at the Bragg points, while deviations from it due to the vortex motion produce a CP at q = 0. By fitting the CPs to the simulation data we obtain the correlation length and the mean vortex velocity.

## Keywords

Central peak, vortex solution, free vortex, graphite intercalation compound, vortex velocity
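For orientation, the dynamic form factor referred to in the abstract is conventionally the space-time Fourier transform of the spin-spin correlation function. This standard definition is added for context and is not quoted from the chapter; prefactor and normalization conventions vary between references:

$$S^{\alpha\alpha}(\vec q,\omega) = \frac{1}{2\pi N}\sum_{n,m} e^{i\vec q\cdot(\vec r_n-\vec r_m)} \int_{-\infty}^{\infty} e^{-i\omega t}\,\langle S_n^{\alpha}(t)\,S_m^{\alpha}(0)\rangle\,dt$$

with α = x for the in-plane and α = z for the out-of-plane correlations.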
2020-09-26 15:20:51
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5111175179481506, "perplexity": 10561.04560019386}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-40/segments/1600400244231.61/warc/CC-MAIN-20200926134026-20200926164026-00508.warc.gz"}
https://it.mathworks.com/help/symbolic/chebyshevu.html
# chebyshevU

Chebyshev polynomials of the second kind

## Description

chebyshevU(n,x) represents the nth degree Chebyshev polynomial of the second kind at the point x.

## Examples

### First Five Chebyshev Polynomials of the Second Kind

Find the first five Chebyshev polynomials of the second kind for the variable x.

syms x
chebyshevU([0, 1, 2, 3, 4], x)

ans =
[ 1, 2*x, 4*x^2 - 1, 8*x^3 - 4*x, 16*x^4 - 12*x^2 + 1]

### Chebyshev Polynomials for Numeric and Symbolic Arguments

Depending on its arguments, chebyshevU returns floating-point or exact symbolic results.

Find the value of the fifth-degree Chebyshev polynomial of the second kind at these points. Because these numbers are not symbolic objects, chebyshevU returns floating-point results.

chebyshevU(5, [1/6, 1/3, 1/2, 2/3, 4/5])

ans =
0.8560 0.9465 0.0000 -1.2675 -1.0982

Now evaluate at these points (plus the additional point 1/4) converted to symbolic objects. For symbolic numbers, chebyshevU returns exact symbolic results.

chebyshevU(5, sym([1/6, 1/4, 1/3, 1/2, 2/3, 4/5]))

ans =
[ 208/243, 33/32, 230/243, 0, -308/243, -3432/3125]

### Evaluate Chebyshev Polynomials with Floating-Point Numbers

Floating-point evaluation of Chebyshev polynomials by direct calls of chebyshevU is numerically stable. However, first computing the polynomial using a symbolic variable, and then substituting variable-precision values into this expression, can be numerically unstable.

Find the value of the 500th-degree Chebyshev polynomial of the second kind at 1/3 and vpa(1/3). Floating-point evaluation is numerically stable.

chebyshevU(500, 1/3)
chebyshevU(500, vpa(1/3))

ans =
0.8680
ans =
0.86797529488884242798157148968078

Now, find the symbolic polynomial U500 = chebyshevU(500, x), and substitute x = vpa(1/3) into the result. This approach is numerically unstable.

syms x
U500 = chebyshevU(500, x);
subs(U500, x, vpa(1/3))

ans =
63080680195950160912110845952.0

Approximate the polynomial coefficients by using vpa, and then substitute x = sym(1/3) into the result. This approach is also numerically unstable.

subs(vpa(U500), x, sym(1/3))

ans =
-1878009301399851172833781612544.0

### Plot Chebyshev Polynomials of the Second Kind

Plot the first five Chebyshev polynomials of the second kind.

syms x
fplot(chebyshevU(0:4, x))
axis([-1.5 1.5 -2 2])
grid on
ylabel('U_n(x)')
legend('U_0(x)', 'U_1(x)', 'U_2(x)', 'U_3(x)', 'U_4(x)', 'Location', 'Best')
title('Chebyshev polynomials of the second kind')

## Input Arguments

n — Degree of the polynomial, specified as a nonnegative integer, symbolic variable, expression, or function, or as a vector or matrix of numbers, symbolic numbers, variables, expressions, or functions.

x — Evaluation point, specified as a number, symbolic number, variable, expression, or function, or as a vector or matrix of numbers, symbolic numbers, variables, expressions, or functions.

### Chebyshev Polynomials of the Second Kind

• Chebyshev polynomials of the second kind are defined as follows:

$$U(n,x) = \frac{\sin((n+1)\arccos(x))}{\sin(\arccos(x))}$$

These polynomials satisfy the recursion formula

$$U(0,x) = 1, \quad U(1,x) = 2x, \quad U(n,x) = 2\,x\,U(n-1,x) - U(n-2,x)$$

• Chebyshev polynomials of the second kind are orthogonal on the interval −1 ≤ x ≤ 1 with respect to the weight function $w(x) = \sqrt{1-x^2}$.
• Chebyshev polynomials of the second kind are a special case of the Jacobi polynomials:

$$U(n,x) = \frac{2^{2n}\,n!\,(n+1)!}{(2n+1)!}\,P\!\left(n,\tfrac{1}{2},\tfrac{1}{2},x\right)$$

and of the Gegenbauer polynomials:

$$U(n,x) = G(n,1,x)$$

## Tips

• chebyshevU returns floating-point results for numeric arguments that are not symbolic objects.
• chebyshevU acts element-wise on nonscalar inputs.
• At least one input argument must be a scalar, or both arguments must be vectors or matrices of the same size. If one input argument is a scalar and the other one is a vector or a matrix, then chebyshevU expands the scalar into a vector or matrix of the same size as the other argument with all elements equal to that scalar.

## References

[1] Hochstrasser, U. W. "Orthogonal Polynomials." Handbook of Mathematical Functions with Formulas, Graphs, and Mathematical Tables. (M. Abramowitz and I. A. Stegun, eds.). New York: Dover, 1972.
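Outside MATLAB, the recursion above can be sanity-checked in Python; SciPy's scipy.special.eval_chebyu evaluates the same polynomials. This is a minimal cross-check sketch, not part of the MathWorks documentation:

```python
import numpy as np
from scipy.special import eval_chebyu

def chebyshev_u(n, x):
    """Evaluate U_n(x) via the three-term recurrence
    U_0 = 1, U_1 = 2x, U_n = 2x*U_{n-1} - U_{n-2}."""
    u_prev, u = 1.0, 2.0 * x
    if n == 0:
        return u_prev
    for _ in range(n - 1):
        u_prev, u = u, 2.0 * x * u - u_prev
    return u

x = np.linspace(-1.0, 1.0, 5)
for n in range(5):
    # recurrence agrees with SciPy's direct evaluator
    assert np.allclose([chebyshev_u(n, xi) for xi in x], eval_chebyu(n, x))
```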
2020-12-05 14:47:26
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 5, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8444034457206726, "perplexity": 1797.529323816826}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-50/segments/1606141747887.95/warc/CC-MAIN-20201205135106-20201205165106-00605.warc.gz"}
https://www.semanticscholar.org/paper/Positive-definite-metric-spaces-Meckes/2de3e6b6f0778cffdcd132e9315d93ceab1761ac
# Positive definite metric spaces

@article{Meckes2010PositiveDM,
  title={Positive definite metric spaces},
  author={Mark W. Meckes},
  journal={Positivity},
  year={2010},
  volume={17},
  pages={733-757}
}

M. Meckes. Published 29 December 2010. Mathematics. Positivity.

Magnitude is a numerical invariant of finite metric spaces, recently introduced by Leinster, which is analogous in precise senses to the cardinality of finite sets or the Euler characteristic of topological spaces. It has been extended to infinite metric spaces in several a priori distinct ways. This paper develops the theory of a class of metric spaces, positive definite metric spaces, for which magnitude is more tractable than in general. Positive definiteness is a generalization of the…
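For context, Leinster's magnitude of a finite metric space {x_1, ..., x_n} is computed from the similarity matrix Z with Z_ij = exp(−d(x_i, x_j)); when Z is invertible (in particular, for the positive definite spaces studied in this paper) the magnitude equals the sum of the entries of Z^{-1}. A minimal numerical sketch; the three-point example is illustrative, not drawn from the paper:

```python
import numpy as np

def magnitude(dist):
    """Magnitude of a finite metric space given its distance matrix.

    dist: (n, n) symmetric matrix of pairwise distances, zero diagonal.
    Returns the sum of all entries of Z^{-1}, where Z = exp(-dist).
    For a positive definite space, Z is positive definite, hence invertible.
    """
    Z = np.exp(-np.asarray(dist, dtype=float))
    ones = np.ones(len(Z))
    w = np.linalg.solve(Z, ones)  # weighting; magnitude = sum of weights
    return w.sum()

# three evenly spaced points on a line, spacing t
t = 1.0
d = t * np.array([[0, 1, 2], [1, 0, 1], [2, 1, 0]], dtype=float)
print(magnitude(d))  # grows from 1 (t -> 0) toward 3 (t -> infinity)
```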
2023-02-01 11:41:32
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9380430579185486, "perplexity": 837.9915814236363}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-06/segments/1674764499934.48/warc/CC-MAIN-20230201112816-20230201142816-00312.warc.gz"}
http://math.stackexchange.com/questions/198030/solving-an-integral-with-square-root-in-the-exponent
# Solving an integral with square root in the exponent

I'm trying to solve the following integral (related to a previous post)

$$\int_{-1}^{1}\exp{(Ax^2+Bx+C+(Dx+E)\sqrt{1-x^2})}dx$$

Any ideas on how to approach this?

- As someone noted on one of your previous questions, you should have a look at this (meta.math.stackexchange.com/questions/3286/…) if you don't know how to upvote and/or accept answers. By doing so, you will increase your accept rate (which, at 14%, is considered very low). A higher accept rate encourages people to put effort into answering your questions. – M Turgeon Sep 17 '12 at 15:26

There is no known closed formula for the reduced case $B=C=D=E=0$, so I highly doubt you can find any magic answer. If you are lucky enough you will potentially end up using special functions like the incomplete Gamma, which is not very easy to use.

I don't get it. In your reduced case $\int_{-1}^{1}\exp(Ax^2) dx=2\exp(Ax^2)$ – Wox Sep 17 '12 at 15:49

This is not correct. $\int_{-1}^1 \exp(Ax^2) dx$ cannot be solved explicitly because of the square. If an $x$ is put in front of the exponential then you can integrate, but there is no such thing here. – vanna Sep 17 '12 at 15:53

Sorry, copy & paste error. Maple gave this answer: $\int_{-1}^{1}\exp(Ax^2)\,dx=\sqrt{\frac{-\pi}{A}}\,\text{erf}(\sqrt{-A})$ – Wox Sep 17 '12 at 15:59

Both solutions you provided are not closed formulae. Look at the definition of $\rm erf$ ;) – vanna Sep 17 '12 at 16:25
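Given the consensus in this thread that no closed form is likely, a practical fallback is numerical quadrature. A minimal sketch with SciPy; the coefficient values are hypothetical, chosen only for illustration:

```python
import numpy as np
from scipy.integrate import quad

def integrand(x, A, B, C, D, E):
    # sqrt(1 - x^2) vanishes at the endpoints, so the integrand is finite on [-1, 1]
    return np.exp(A * x**2 + B * x + C + (D * x + E) * np.sqrt(1.0 - x**2))

# hypothetical coefficients, for illustration only
A, B, C, D, E = -1.0, 0.5, 0.0, 0.3, -0.2
value, abserr = quad(integrand, -1.0, 1.0, args=(A, B, C, D, E))
print(value, abserr)
```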
2014-07-30 13:23:47
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.787071168422699, "perplexity": 526.8653864677023}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-23/segments/1406510270528.34/warc/CC-MAIN-20140728011750-00268-ip-10-146-231-18.ec2.internal.warc.gz"}