https://codereview.stackexchange.com/questions/128426/largest-common-subsequence-algorithm-in-c

# Largest common subsequence algorithm in C++
I'm relatively new to C++ and am trying to learn good programming practices and habits from the beginning. The code I wrote finds the longest common subsequence between two strings (and outputs the resulting subsequence, not just its length).
I was wondering if there are any improvements I could make to this. I know that it runs in exponential time and that there is a more efficient approach via dynamic programming, but I am simply interested in improving this specific algorithm (i.e. the recursive approach). For instance, concatenating my strings via += may be expensive, or maybe there is a way to use references to avoid copying strings.
#include <string>
#include <iostream>

// Utility function returning the longer (by length) of two strings.
// Taking const references and returning by value lets it accept the
// temporaries produced by the recursive calls below.
std::string max(const std::string &s1, const std::string &s2)
{
    return s1.size() > s2.size() ? s1 : s2;
}

// Return the longest common subsequence of characters contained in the
// strings s1 and s2 (as a string)
// e.g. lcs("ACBEA", "ADCA") -> "ACA" (string objects, not const char *)
std::string lcs(std::string s1, std::string s2)
{
    // initialize the lcs to be an empty string
    std::string lcsStr;
    if (s1.empty() || s2.empty())
        return std::string(); // empty string constructor
    if (s1.at(s1.size() - 1) == s2.at(s2.size() - 1))
    {
        lcsStr.push_back(s1.at(s1.size() - 1));
        s1.pop_back();
        s2.pop_back();
        return lcs(s1, s2) += lcsStr;
    }
    else
        return max(
            lcs(s1.substr(0, s1.size() - 1), s2), lcs(s1, s2.substr(0, s2.size() - 1))
        ) += lcsStr;
}

int main()
{
    std::cout << "Enter two strings:" << std::endl;
    std::string s1, s2;
    std::cin >> s1 >> s2;
    std::cout << lcs(s1, s2) << std::endl;
    return 0;
}
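One concrete way to act on the pass-by-reference idea in the question is sketched below. This is only a sketch, not the poster's code: it recurses on index pairs into const references, so no substrings are ever copied, although the run-time remains exponential unless memoization is added.

#include <string>

// Recurse on prefix lengths (i, j) of two const references; no copies of
// either input string are made on the way down.
std::string lcsRef(const std::string &s1, const std::string &s2,
                   std::size_t i, std::size_t j)
{
    if (i == 0 || j == 0)
        return std::string();
    if (s1[i - 1] == s2[j - 1])
        return lcsRef(s1, s2, i - 1, j - 1) + s1[i - 1];
    std::string left = lcsRef(s1, s2, i - 1, j);
    std::string right = lcsRef(s1, s2, i, j - 1);
    return left.size() > right.size() ? left : right;
}

// usage: lcsRef(s1, s2, s1.size(), s2.size())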
https://math.stackexchange.com/questions/1095647/transfinite-fixed-points-of-a-function

# Transfinite fixed points of a function
Let the function $F\colon On \rightarrow On$ be defined by the following recursion:
$F(0) = \aleph_0$
$F(\alpha+1) = 2^{F(\alpha)}$ (cardinal exponentiation)
$F(\lambda) = \sup\{F(\alpha): \alpha \lt \lambda\}$ for $\lambda$ a limit ordinal
Prove that there is a fixed point for $F$, i.e. an ordinal $\kappa$ with $F(\kappa) = \kappa$.
Are such fixed points always cardinals?
Thoughts: I can see that such a fixed point is going to have to be at a limit ordinal, since the function is strictly increasing at successor ordinals. My thinking was that at a limit ordinal,
$F(\lambda) = \sup\{\aleph_{\alpha}: \alpha \lt \lambda\}$
I feel as if $\aleph_{\omega}$ might be a fixed point and suspect that any fixed points have to be cardinals, but I don't have a justification for either.
I'm not sure how to go about proving a fixed point exists and whether it has to always be a cardinal.
Fixed points are points such that $F(\alpha)=\alpha$. It is certainly not true that $F(\omega_\omega)=\omega_\omega$, and it depends on the continuum function (which can be changed quite wildly between models of $\sf ZFC$) whether or not $2^{\aleph_n}=\aleph_k$ for $k<\omega$ at all.
For the easy part, that a fixed point is always a cardinal, just show that $F(\alpha)$ is always a cardinal.
To show that there is a fixed point at all, you need to construct a sequence whose gaps increase further and further. What happens if $\lambda_{n+1}=F(\aleph_{\lambda_n})$? What is the limit of this sequence?
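To make the hint concrete (this spells out one standard version of the construction): set $\lambda_0 = 0$ and $\lambda_{n+1} = F(\aleph_{\lambda_n})$, and let $\kappa = \sup_{n<\omega} \lambda_n$. Note that $\lambda_n \le \aleph_{\lambda_n} \le \lambda_{n+1}$, so either the sequence stabilizes (in which case its final value is already a fixed point) or $\kappa$ is a limit of the $\lambda_n$. In the latter case, continuity of $F$ at limits gives $F(\kappa) = \sup_{n<\omega} F(\lambda_n) \le \sup_{n<\omega} F(\aleph_{\lambda_n}) = \sup_{n<\omega} \lambda_{n+1} = \kappa$, while $F(\kappa) \ge \kappa$ holds for every increasing, continuous $F$, so $F(\kappa) = \kappa$. And $\kappa$ is a supremum of cardinals, hence a cardinal.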
(As a side remark, this function is often denoted by $\beth$, and the numbers are called $\beth$ numbers.)
https://ftp.aimsciences.org/article/doi/10.3934/dcdsb.2009.12.23
Vibrations of a nonlinear dynamic beam between two stops
• This work extends the model developed by Gao (1996) for the vibrations of a nonlinear beam to the case when one of its ends is constrained to move between two reactive or rigid stops. Contact is modeled with the normal compliance condition for the deformable stops, and with the Signorini condition for the rigid stops. The existence of weak solutions to the problem with reactive stops is shown by using truncation and an abstract existence theorem involving pseudomonotone operators. The solution of the Signorini-type problem with rigid stops is obtained by passing to the limit when the normal compliance coefficient approaches infinity. This requires a continuity property for the beam operator similar to a continuity property for the wave operator that is a consequence of the so-called div-curl lemma of compensated compactness.
Mathematics Subject Classification: Primary: 74M15; Secondary: 74H20, 74H25, 74K10, 35L75.
https://jrpickeral.com/?m=201310

## Min-term and Max-term Don't-Cares
A recent homework assignment in my digital electronics course must have prompted questions to my professor about how don't-cares were represented in the functions, because he sent out an email explaining that $$+ d$$ represents don't-cares in a min-term list and $$\cdot D$$ represents don't-cares in a max-term list. I would have assumed that to be the case anyway. Still, it raised the question in my mind: why are the two represented differently at all?
The reason I question this is that don't-cares can be either 1s or 0s without altering the outcome of the function. In that case, does it really matter whether we express don't-cares with a lower-case $$d$$ or an upper-case $$D$$? The don't-cares have the same values whether we view the function as SOP or POS. I understand that a capitalized $$D$$ looks nicer alongside the capitalized $$M$$ of the max-term list, but the values of the don't-cares remain the same within a given function whether we are looking at its min-term or max-term list. Is that not true? So why bother switching between lower and upper case at all? It seems pointless to me. A lower-case $$d$$ isn't going to throw off the appearance of the expression just because it is ANDed with a max-term list rather than ORed with a min-term list.
$$f(A,B,C,D)=\prod M(1,2,3) \cdot d(0,4,5)$$ works just as well as $$f(A,B,C,D)=\prod M(1,2,3) \cdot D(0,4,5)$$
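Working out the complementary min-term list for the same function makes the point directly. The function is 1 exactly on the indices that are neither max-terms nor don't-cares, so $$f(A,B,C,D)=\sum m(6,7,8,9,10,11,12,13,14,15) + d(0,4,5)$$ and the don't-care list $$(0,4,5)$$ is literally identical in both forms; only the letter case changed.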
http://mathhelpforum.com/advanced-statistics/180388-pdf-minimum-x-c-print.html

# Pdf of minimum of (x,c)
• May 12th 2011, 09:05 PM
rpmatlab
Pdf of minimum of (x,c)
I have to find the pdf of $Y = \min(X, c)$, where $X$ is a random variable with an exponential distribution and $c$ is a constant. The detailed steps of my approach are given in the attachment. Is this approach correct? Please comment on the correctness of the solution.
• May 13th 2011, 04:44 AM
rargh
To be sure, I tried solving it myself, and I get a different result.
Check it on:
https://docs.google.com/document/pub...tpnmGR2yr9Osn0
• May 13th 2011, 06:59 AM
theodds
Strictly speaking, $Y$ doesn't admit a pdf. There is an atom at $c$: $P(c \le y) = 1$ or $0$ according as $c \le y$ or $c > y$ (note both are constants), so there is an error in this step of your work.
You can do this with the Dirac delta function if you're comfortable with that. It looks like rargh has the right answer.
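To sketch the computation being pointed at here (details filled in for concreteness, assuming $X \sim \operatorname{Exp}(\lambda)$): for $0 \le y < c$ we have $F_Y(y) = P(\min(X,c) \le y) = P(X \le y) = 1 - e^{-\lambda y}$, while $F_Y(y) = 1$ for $y \ge c$. The jump at $c$ is the atom, with mass $P(Y = c) = P(X \ge c) = e^{-\lambda c}$. Written with the Dirac delta, $f_Y(y) = \lambda e^{-\lambda y} \mathbf{1}_{[0,c)}(y) + e^{-\lambda c} \delta(y - c)$, which integrates to 1.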
https://bookdown.org/cdorm/lefko3gentle/ipms.html

# Chapter 7 Matrix Models IV: Integral Projection Models
“If people do not believe that mathematics is simple, it is only because they do not realize how complicated life is.”
— John von Neumann
As seen in Chapter 5, MPMs may be estimated using functions representing vital rates, where these vital rate functions may then be used to estimate each matrix element. We term this approach the function-based MPM, because of its use of functions to estimate matrix elements. One reason that this approach is very powerful is that it allows population ecologists to create simulations in which vital rates themselves are manipulated, for example via altered climate relationships or management regimes.
Easterling et al. (2000) proposed a special case of the function-based MPM called the integral projection model (IPM). In integral projection models, the familiar projection equation $$\mathbf{n_{t+1}}=\mathbf{A} \mathbf{n_t}$$ changes to
$$n(k, t+1) = \int_L^U K(k, j) n(j, t) dj \tag{7.1}$$
where an individual in state $$j$$ in time $$t$$ either transitions to state $$k$$ or produces offspring in state $$k$$ in time $$t+1$$, $$n(j, t) dj$$ refers to the number of individuals with their state in the range between $$j$$ and $$j+dj$$, $$L$$ and $$U$$ represent the lower and upper bounds of the state variable, and $$K(k, j)$$ is the projection kernel $$K(k, j) = P(k, j) + F(k, j)$$. In this projection kernel, $$P(k, j)$$ represents the survival-transition probability from state $$j$$ in time $$t$$ to state $$k$$ in time $$t+1$$, and $$F(k, j)$$ represents the production of offspring in state $$k$$ in time $$t+1$$ by an individual in state $$j$$ in time $$t$$. Because this equation is written as an integral as opposed to a discrete summation, it may appear to be quite different from the matrix approaches that we have seen so far. Indeed, there are those who have attempted to use these equations in their pure, integral forms, and when used analytically IPMs are not really MPMs. However, in practice, IPMs are typically discretized so that they may be parameterized as matrices. This discretization allows practitioners to use matrix approaches for analysis, because the projection kernel in equation 7.1 becomes perfectly analogous to the discrete projection equation $$\mathbf{n_{t+1}}=\mathbf{A} \mathbf{n_t}$$.
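To make the discretization concrete, here is a minimal sketch in base R. Everything in it is invented for illustration: the survival and growth functions, bounds, and bin count are placeholders rather than quantities fitted to any dataset.

m <- 100; L <- 0; U <- 10                   # bin count and size bounds
h <- (U - L) / m                            # bin width
z <- L + ((1:m) - 0.5) * h                  # bin midpoints (mesh points)
surv <- function(x) plogis(-2 + 0.8 * x)    # placeholder survival function
grow <- function(xk, x) dnorm(xk, 0.9 * x + 0.5, 0.8)  # placeholder growth density
# Discretized survival-transition kernel: P[k, j] = s(z_j) g(z_k | z_j) h
P <- h * outer(z, z, function(xk, x) surv(x) * grow(xk, x))
n0 <- rep(1, m)                             # arbitrary starting population vector
n1 <- P %*% n0                              # one projection step, exactly as in an MPM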
Just as in other function-based MPMs, the IPM generally assumes that survival probabilities follow a binomial distribution and so may be estimated via generalized linear models, generalized linear mixed models, generalized additive models, or related approaches assuming a binomial response. Likewise, probabilities of reproduction, observation, or maturity should also follow a binomial response. We see a difference in the estimation of the size probability, because IPMs require the use of a continuous size metric. In fact, the "traditional" IPM assumes a Gaussian distribution for size, and the underlying assumptions of the Gaussian distribution may fail in many circumstances and hence lead to biased results. A more flexible IPM approach allows the use of other distributions, including continuous distributions such as the gamma distribution, and discrete distributions such as the Poisson or negative binomial distributions when necessary. Package lefko3 allows all of these distributions.
Discretized IPMs assume vital rate models that parameterize the main kernel populating the matrix elements. Let’s review the fourteen vital rate models possible in lefko3:
1. Survival probability - This is the probability of surviving from occasion t to occasion t+1, given that the individual is in stage $$j$$ in occasion t (and, if historical, in stage $$l$$ in occasion t-1). In lefko3, this parameter may be modeled as a function of up to three size metrics, reproductive status, patch, year, age, and individual identity, and a number of individual or environmental covariates in occasions t and t-1. This parameter is required in all function-based matrices.
2. Observation probability - This is the probability of observation in occasion t+1 of an individual in stage $$k$$ given survival from occasion t to occasion t+1. This parameter is only used when at least one stage is technically not observable. For example, some plants are capable of vegetative dormancy, in which case they are alive but do not necessarily sprout in all years. In these cases, the probability of sprouting may be estimated as the observation probability. Note that this probability does not refer to observer effort, and so should only be used to differentiate completely unobservable stages where the observation status refers to an important biological phenomenon, such as when individuals may be alive but have a size of zero. In lefko3, this parameter may be modeled as a function of up to three size metrics, reproductive status, patch, year, age, and individual identity, and a number of individual or environmental covariates in occasions t and t-1.
3. Primary size transition probability - This is the probability of becoming size $$k$$ in occasion t+1 assuming survival from occasion t to occasion t+1 and observation in that time. If multiple size metrics are used, then this refers only to the first of these, which we may refer to as the primary size variable. In lefko3, this parameter may be modeled as a function of up to three size metrics, reproductive status, patch, year, age, and individual identity, and a number of individual or environmental covariates in occasions t and t-1. This parameter is required in all function-based size-classified matrices.
4. Secondary size transition probability - This is the probability of becoming size $$k$$ in occasion t+1 assuming survival from occasion t to occasion t+1 and observation in that time, within a second size metric used for classification in addition to the primary metric. In lefko3, this parameter may be modeled as a function of up to three size metrics, reproductive status, patch, year, age, and individual identity, and a number of individual or environmental covariates in occasions t and t-1.
5. Tertiary size transition probability - This is the probability of becoming size $$k$$ in occasion t+1 assuming survival from occasion t to occasion t+1 and observation in that time, within a third size metric used for classification in addition to the primary and secondary metrics. In lefko3, this parameter may be modeled as a function of up to three size metrics, reproductive status, patch, year, age, and individual identity, and a number of individual or environmental covariates in occasions t and t-1.
6. Reproduction probability - This is the probability of becoming reproductive in occasion t+1 given survival from occasion t to occasion t+1, and observation in that time. Note that this should be used only if the researcher wishes to separate breeding from non-breeding mature stages. If all adult stages are potentially reproductive and no separation of reproducing from non-reproducing adults is required by the life history model, then this parameter should not be estimated. In lefko3, this parameter may be modeled as a function of up to three size metrics, reproductive status, patch, year, age, and individual identity, and a number of individual or environmental covariates in occasions t and t-1.
7. Fecundity rate - This refers to the rate of production of new individuals into stages in time t+1 as offspring from reproduction events happening in time t. Under the default setting, this is the rate of successful production of offspring in occasion t by individuals alive, observable, and reproductive in that time, and, if assuming a pre-breeding model and sufficient information is provided in the dataset, the survival of those offspring into occasion t+1 in whatever juvenile class is possible. Thus, the fecundity rate of seed-producing plants might be split into seedlings, which are plants that germinated within a year of seed production, and dormant seeds. Alternatively, it may be given only as produced fruits or seeds, with the survival and germination of seeds provided elsewhere in the MPM development process, such as within a supplement table. An additional setting allows fecundity rate to be estimated using data provided for occasion t+1 instead of occasion t. In lefko3, this parameter may be modeled as a function of up to three size metrics, reproductive status, patch, year, age, and individual identity, and a number of individual or environmental covariates in occasions t and t-1.
8. Juvenile survival probability - This is the probability of surviving from juvenile stage $$j$$ in occasion t to occasion t+1. It is used only when the user wishes to model vital rates for a single size-unclassified juvenile period separately from adults. In lefko3, this parameter may be modeled as a function of up to three size metrics, patch, year, and individual identity, and a number of individual or environmental covariates in occasions t and t-1.
9. Juvenile observation probability - This is the probability of observation in occasion t+1 of an individual in juvenile stage $$j$$ in occasion t given survival from occasion t to occasion t+1. It is used only when the user wishes to model vital rates for a single size-unclassified juvenile period separately from adults. In lefko3, this parameter may be modeled as a function of up to three size metrics, patch, year, and individual identity, and a number of individual or environmental covariates in occasions t and t-1.
10. Juvenile primary size transition probability - This is the probability of becoming stage $$k$$ in occasion t+1 assuming survival from juvenile stage $$j$$ in occasion t to occasion t+1 and observation in that time. It is in terms of a single size metric. It is used only when the user wishes to model vital rates for a single size-unclassified juvenile period separately from adults. In lefko3, this parameter may be modeled as a function of up to three size metrics, patch, year, and individual identity, and a number of individual or environmental covariates in occasions t and t-1.
11. Juvenile secondary size transition probability - This is the probability of becoming stage $$k$$ in occasion t+1 assuming survival from juvenile stage $$j$$ in occasion t to occasion t+1 and observation in that time, in a secondary size metric in addition to the primary size metric. It is used only when the user wishes to model vital rates for a single size-unclassified juvenile period separately from adults. In lefko3, this parameter may be modeled as a function of up to three size metrics, patch, year, and individual identity, and a number of individual or environmental covariates in occasions t and t-1.
12. Juvenile tertiary size transition probability - This is the probability of becoming stage $$k$$ in occasion t+1 assuming survival from juvenile stage $$j$$ in occasion t to occasion t+1 and observation in that time, in a tertiary size metric in addition to the primary and secondary size metrics. It is used only when the user wishes to model vital rates for a single size-unclassified juvenile period separately from adults. In lefko3, this parameter may be modeled as a function of up to three size metrics, patch, year, and individual identity, and a number of individual or environmental covariates in occasions t and t-1.
13. Juvenile reproduction probability - This is the probability of reproducing in mature stage $$k$$ in occasion t+1 given survival from juvenile stage $$j$$ in occasion t to occasion t+1, and observation in that time. It is used only when the user wishes to model vital rates for a single size-unclassified juvenile period separately from adults. In lefko3, this parameter may be modeled as a function of up to three size metrics, patch, year, and individual identity, and a number of individual or environmental covariates in occasions t and t-1.
14. Juvenile maturity probability - This is the probability of becoming mature in occasion t+1 given survival from juvenile stage $$j$$ in occasion t to occasion t+1. It is used only when the user wishes to model vital rates for a single size-unclassified juvenile period separately from adults. In lefko3, this parameter may be modeled as a function of up to three size metrics, patch, year, and individual identity, and a number of individual or environmental covariates in occasions t and t-1. Note that this parameter denotes transition to maturity.
Of these fourteen vital rates, most users will estimate at least parameters (1) survival probability, (3) primary size transition probability, and (7) fecundity rate. These three are the default set for function modelsearch(). Parameters (2) observation probability and (6) reproduction probability may be used when some stages are included that are completely unobservable (and so do not have any size), or that are mature but non-reproductive, respectively. Parameters (4) secondary size transition and (5) tertiary size transition should only be used when size classification involves more than one size variable. Parameters (8) through (14) should only be added if the dataset contains juvenile individuals transitioning to maturity, and these juveniles live essentially as a single juvenile stage for some amount of time before transitioning to maturity, or before transitioning to a stage that is size-classified in the same manner as adult stages are. If juveniles can be classified by size similarly to adults (or at least on the same scale), then only vital rates (1) through (7) should be used and stage groups can be used with supplement tables to disallow transitions back to juvenile stages. If multiple juvenile stages exist on a different size classification system than adults, then stage groups may also be included as categorical variables in linear vital rate modeling in rates (1) through (7) to stratify vital rate models properly.
Let us assume that the state of the individual is represented by a continuous variable, such as a continuous size metric. This continuous size metric will be used to estimate parameter (3), the primary size transition probability. It may be Gaussian distributed, as is often assumed in IPMs, or may follow a different continuous distribution such as the gamma distribution. The survival-transition kernel $$P(k, j)$$ and the fecundity kernel $$F(k, j)$$ will be estimated as in the function-based MPM, as products of conditional rates or probabilities that are themselves estimated via linear models, additive models, or some other function-based approach. How, then, is the primary size transition probability estimated?
In practice, the continuous state variable is broken down into a series of continuous domains, each with its own midpoint and upper and lower bounds. Individual domains under the Gaussian and gamma distributions are shown in figures 7.1 a and d, respectively. The domain midpoints are sometimes referred to as mesh points, and together with their upper and lower bounds they compose a series of size bins of generally equal size. To approximate a continuous size, we choose a rather high number of size bins, $$m$$, perhaps on the order of 100 or even more.
## 7.1 Midpoint method vs. cumulative distribution function (CDF)
There are several methods to estimate the size transition probabilities associated with each size bin. The original method developed is referred to as the midpoint method. It is the default in packages such as IPMpack and in some published guides to IPM creation. The mesh points are then defined as $$j_i = L + (i - 0.5)h$$, where $$i$$ is the set of integers from 1 to $$m$$ ($$i=1,2,...,m$$), $$L$$ is the lower state bound as before, and $$h$$ is the width of the state bin or size bin, given as $$h=(U-L)/m$$. If each kernel is composed of a vital rate function assuming some sort of probability distribution, and size is distributed on a Gaussian, gamma, or other continuous distribution, then $$h$$ accounts for the area under the distribution density curve for the corresponding vital rate contributing to the kernel at each midpoint size value used in the model (figures 7.1 b and e). If we think of the integral as being approximated by a series of rectangles under the function being integrated, then $$h$$ accounts for the width of the rectangle in the area approximation. Thus, we have the following.
$$n(x_j, t+1) = h \sum_{i=1}^m K(x_j, x_i) n(x_i, t) \tag{7.2}$$
Doak et al. (2021) pointed out that the midpoint method yields biased results, often overestimating size transition probabilities. They proposed a second method based on the cumulative distribution function associated with the continuous distribution being used. We will call this the CDF method here. In this method, the cumulative probabilities associated with the lower and upper boundaries of the size bin are first calculated. Then, the cumulative probability associated with the lower boundary is subtracted from the cumulative probability associated with the upper boundary, yielding the exact probability associated with the size bin itself (figures 7.1 c and f). This method does not yield biased results, and so is the default method used in lefko3 (although the midpoint method is available as an option).
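The difference between the two methods is easy to see numerically. In the base R sketch below, the mean and standard deviation of the growth distribution are arbitrary illustrative values:

m <- 100; L <- 0; U <- 10
h <- (U - L) / m
mids  <- L + ((1:m) - 0.5) * h   # bin midpoints
lows  <- L + ((1:m) - 1) * h     # bin lower bounds
highs <- L + (1:m) * h           # bin upper bounds
mu <- 4; sigma <- 0.6            # illustrative growth parameters
p_mid <- dnorm(mids, mu, sigma) * h                       # midpoint method
p_cdf <- pnorm(highs, mu, sigma) - pnorm(lows, mu, sigma) # CDF method
max(abs(p_mid - p_cdf))          # per-bin error of the midpoint approximation

The CDF version returns the exact probability mass in each bin by construction, which is why it avoids the bias that Doak et al. (2021) documented for the midpoint rule.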
The practical impact is that this approach has us creating size bins, just as in the function-based approach. If we have created size bins, then we have essentially created size-classified life history stages. Thus, equation 7.2 can be treated as a matrix projection, perfectly analogous to $$\mathbf{n_{t+1}}=\mathbf{Kn_t}$$. From a practical standpoint, then, an integral projection model is simply a function-based matrix projection model in which a continuous size metric determines demography. Indeed, Ellner & Rees (2006) further proposed a generalization allowing IPMs to be developed with discrete stages in some portions of the life history, referring to this as a complex integral projection model. So there is now virtually no practical difference between IPMs and function-based MPMs, although there are theoretical differences due to the assumption of integrals over continuous size in the former.
## 7.2 Creating IPMs in lefko3
How do we create IPMs in package lefko3? This turns out to be quite easy. To illustrate the process, we will use the Lathyrus vernus dataset (see section 1.6.2). In this exercise, we will create both ahistorical and historical IPMs.
First, we will clear memory and take a look at the dataset.
rm(list=ls(all=TRUE))
library(lefko3)
data(lathyrus)
summary(lathyrus)
> SUBPLOT GENET Volume88 lnVol88
> Min. :1.000 Min. : 1.0 Min. : 3.4 Min. :1.200
> 1st Qu.:2.000 1st Qu.: 48.0 1st Qu.: 63.0 1st Qu.:4.100
> Median :3.000 Median : 97.0 Median : 732.5 Median :6.600
> Mean :3.223 Mean :110.2 Mean : 749.4 Mean :5.538
> 3rd Qu.:4.000 3rd Qu.:167.5 3rd Qu.:1025.5 3rd Qu.:6.900
> Max. :6.000 Max. :284.0 Max. :7032.0 Max. :8.900
> NA's :404 NA's :404
> FCODE88 Flow88 Intactseed88 Dead1988 Dormant1988
> Min. :0.0000 Min. : 1.00 Min. : 0 Mode:logical Mode:logical
> 1st Qu.:0.0000 1st Qu.: 4.00 1st Qu.: 0 NA's:1119 NA's:1119
> Median :0.0000 Median : 8.00 Median : 0
> Mean :0.3399 Mean :11.86 Mean : 3
> 3rd Qu.:1.0000 3rd Qu.:15.00 3rd Qu.: 4
> Max. :1.0000 Max. :66.00 Max. :34
> NA's :404 NA's :910 NA's :875
> Missing1988 Seedling1988 Volume89 lnVol89
> Mode:logical Min. :1.000 Min. : 1.8 Min. :0.600
> NA's:1119 1st Qu.:2.000 1st Qu.: 15.6 1st Qu.:2.700
> Median :2.000 Median : 118.8 Median :4.800
> Mean :2.144 Mean : 573.3 Mean :4.855
> 3rd Qu.:3.000 3rd Qu.: 968.8 3rd Qu.:6.900
> Max. :3.000 Max. :6539.4 Max. :8.800
> NA's :1022 NA's :294 NA's :294
> FCODE89 Flow89 Intactseed89 Dead1989
> Min. :0.0000 Min. : 1.00 Min. : 0.000 Min. :1
> 1st Qu.:0.0000 1st Qu.: 5.00 1st Qu.: 0.000 1st Qu.:1
> Median :0.0000 Median :11.00 Median : 5.000 Median :1
> Mean :0.2667 Mean :14.88 Mean : 8.273 Mean :1
> 3rd Qu.:1.0000 3rd Qu.:20.00 3rd Qu.:13.000 3rd Qu.:1
> Max. :1.0000 Max. :97.00 Max. :66.000 Max. :1
> NA's :294 NA's :906 NA's :899 NA's :1077
> Dormant1989 Missing1989 Seedling1989 Volume90 lnVol90
> Min. :1 Min. :1 Min. :1.000 Min. : 2.1 Min. :0.700
> 1st Qu.:1 1st Qu.:1 1st Qu.:2.000 1st Qu.: 12.6 1st Qu.:2.500
> Median :1 Median :1 Median :2.000 Median : 61.0 Median :4.100
> Mean :1 Mean :1 Mean :2.136 Mean : 244.1 Mean :4.207
> 3rd Qu.:1 3rd Qu.:1 3rd Qu.:2.000 3rd Qu.: 295.2 3rd Qu.:5.700
> Max. :1 Max. :1 Max. :3.000 Max. :4242.8 Max. :8.400
> NA's :1046 NA's :1112 NA's :1001 NA's :245 NA's :245
> FCODE90 Flow90 Intactseed90 Dead1990
> Min. :0.0000 Min. : 1.000 Min. : 0.000 Min. :1
> 1st Qu.:0.0000 1st Qu.: 3.000 1st Qu.: 0.000 1st Qu.:1
> Median :0.0000 Median : 6.000 Median : 0.000 Median :1
> Mean :0.1581 Mean : 8.104 Mean : 2.514 Mean :1
> 3rd Qu.:0.0000 3rd Qu.:10.750 3rd Qu.: 1.000 3rd Qu.:1
> Max. :1.0000 Max. :54.000 Max. :37.000 Max. :1
> NA's :246 NA's :985 NA's :981 NA's :1007
> Dormant1990 Missing1990 Seedling1990 Volume91 lnVol91
> Min. :1 Min. :1 Min. :1.000 Min. : 4.0 Min. :1.400
> 1st Qu.:1 1st Qu.:1 1st Qu.:2.000 1st Qu.: 12.0 1st Qu.:2.500
> Median :1 Median :1 Median :2.000 Median : 118.5 Median :4.800
> Mean :1 Mean :1 Mean :2.186 Mean : 418.7 Mean :4.642
> 3rd Qu.:1 3rd Qu.:1 3rd Qu.:2.000 3rd Qu.: 689.7 3rd Qu.:6.500
> Max. :1 Max. :1 Max. :3.000 Max. :6645.8 Max. :8.800
> NA's :1054 NA's :1105 NA's :1049 NA's :305 NA's :305
> FCODE91 Flow91 Intactseed91 Dead1991 Dormant1991
> Min. :0.0000 Min. : 1.00 Min. : 0.000 Min. :1 Min. :1
> 1st Qu.:0.0000 1st Qu.: 4.00 1st Qu.: 0.000 1st Qu.:1 1st Qu.:1
> Median :0.0000 Median : 8.00 Median : 3.500 Median :1 Median :1
> Mean :0.2525 Mean :11.12 Mean : 5.805 Mean :1 Mean :1
> 3rd Qu.:1.0000 3rd Qu.:15.00 3rd Qu.:10.000 3rd Qu.:1 3rd Qu.:1
> Max. :1.0000 Max. :48.00 Max. :48.000 Max. :1 Max. :1
> NA's :307 NA's :954 NA's :919 NA's :925 NA's :1034
> Missing1991 Seedling1991
> Min. :1 Min. :1.000
> 1st Qu.:1 1st Qu.:2.000
> Median :1 Median :2.000
> Mean :1 Mean :1.973
> 3rd Qu.:1 3rd Qu.:2.000
> Max. :1 Max. :3.000
> NA's :1095 NA's :1082
This dataset includes information on 1,119 individuals, so there are 1,119 rows with data. There are 38 columns. The first two columns give identifying information about each individual (SUBPLOT refers to the patch, and GENET refers to individual identity), with each individual’s data entirely restricted to one row. This is followed by four sets of nine columns, each named VolumeXX, lnVolXX, FCODEXX, FlowXX, IntactseedXX, Dead19XX, DormantXX, Missing19XX, and SeedlingXX, where XX corresponds to the year of observation and with years organized consecutively. Thus, columns 3-11 refer to year 1988, columns 12-20 refer to year 1989, etc.
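Because the blocksize-style arguments used below depend on this repeating layout, it can be worth confirming the column order before proceeding. A quick check (output omitted here):

names(lathyrus)[3:11]   # should list the nine 1988 columns, Volume88 through Seedling1988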
### 7.2.1 Developing stageframes for IPMs
First, we will create a stageframe for this dataset. We will base our stageframe on the life history model provided in Ehrlén (2000), but use a different size classification based on leaf volume to allow IPM construction and make all mature stages other than vegetative dormancy reproductive (figure 7.2).
In the stageframe code below, we show that we want an IPM by choosing two stages that serve as the size limits for the IPM’s discretized size bin classification. These two size classes should have exactly the same characteristics in the stageframe other than size. The sizes input into the sizes vector for these two stages should not be midpoints. Instead, the size for the lower limit should be the lower limit of the minimum size bin, while the size input for the upper limit should be the upper limit of the maximum size bin. By choosing these two size limits, we can skip adding and describing the many size classes that will fall between these limits - function sf_create() will create all of these for us. We mark these limits in the vector that we load into the stagenames option using the string "ipm". We then input all other characteristics for these size bins, such as observation status, maturity status, reproductive status, and these characteristics must be the same for both the minimum and maximum size bins. Package lefko3 will then create and name all IPM size classes according to its own conventions. The default number of size classes is 100 bins, and this can be altered using the ipmbins option. Note that this is essentially the same procedure described in section 2.4.
sizevector <- c(0, 100, 0, 1, 7100)
stagevector <- c("Sd", "Sdl", "Dorm", "ipm", "ipm")
repvector <- c(0, 0, 0, 1, 1)
obsvector <- c(0, 1, 0, 1, 1)
matvector <- c(0, 0, 1, 1, 1)
immvector <- c(1, 1, 0, 0, 0)
propvector <- c(1, 0, 0, 0, 0)
indataset <- c(0, 1, 1, 1, 1)
binvec <- c(0, 100, 0.5, 1, 1)
lathframeipm <- sf_create(sizes = sizevector, stagenames = stagevector,
repstatus = repvector, obsstatus = obsvector, propstatus = propvector,
indataset = indataset, binhalfwidth = binvec, ipmbins = 100, roundsize = 3)
dim(lathframeipm)
> [1] 103 29
This stageframe has 103 stages - dormant seed, seedling, vegetative dormancy, and 100 size-classified adult stages. Let’s look at just a few key columns.
lathframeipm[,c("stage", "size", "sizebin_min", "sizebin_max", "comments")]
> stage size sizebin_min sizebin_max comments
> 1 Sd 0.000 0.00 0.00 Dormant seed
> 2 Sdl 100.000 0.00 200.00 Seedling
> 3 Dorm 0.000 -0.50 0.50 Dormant
> 4 sza_36.495_0 36.495 1.00 71.99 ipm adult stage
> 5 sza_107.48_0 107.485 71.99 142.98 ipm adult stage
> 6 sza_178.47_0 178.475 142.98 213.97 ipm adult stage
> 7 sza_249.46_0 249.465 213.97 284.96 ipm adult stage
> 8 sza_320.45_0 320.455 284.96 355.95 ipm adult stage
> 9 sza_391.44_0 391.445 355.95 426.94 ipm adult stage
> 10 sza_462.43_0 462.435 426.94 497.93 ipm adult stage
> 11 sza_533.42_0 533.425 497.93 568.92 ipm adult stage
> 12 sza_604.41_0 604.415 568.92 639.91 ipm adult stage
> 13 sza_675.40_0 675.405 639.91 710.90 ipm adult stage
> 14 sza_746.39_0 746.395 710.90 781.89 ipm adult stage
> 15 sza_817.38_0 817.385 781.89 852.88 ipm adult stage
> 16 sza_888.37_0 888.375 852.88 923.87 ipm adult stage
> 17 sza_959.36_0 959.365 923.87 994.86 ipm adult stage
> 18 sza_1030.3_0 1030.355 994.86 1065.85 ipm adult stage
> 19 sza_1101.3_0 1101.345 1065.85 1136.84 ipm adult stage
> 20 sza_1172.3_0 1172.335 1136.84 1207.83 ipm adult stage
> 21 sza_1243.3_0 1243.325 1207.83 1278.82 ipm adult stage
> 22 sza_1314.3_0 1314.315 1278.82 1349.81 ipm adult stage
> 23 sza_1385.3_0 1385.305 1349.81 1420.80 ipm adult stage
> 24 sza_1456.2_0 1456.295 1420.80 1491.79 ipm adult stage
> 25 sza_1527.2_0 1527.285 1491.79 1562.78 ipm adult stage
> 26 sza_1598.2_0 1598.275 1562.78 1633.77 ipm adult stage
> 27 sza_1669.2_0 1669.265 1633.77 1704.76 ipm adult stage
> 28 sza_1740.2_0 1740.255 1704.76 1775.75 ipm adult stage
> 29 sza_1811.2_0 1811.245 1775.75 1846.74 ipm adult stage
> 30 sza_1882.2_0 1882.235 1846.74 1917.73 ipm adult stage
> 31 sza_1953.2_0 1953.225 1917.73 1988.72 ipm adult stage
> 32 sza_2024.2_0 2024.215 1988.72 2059.71 ipm adult stage
> 33 sza_2095.2_0 2095.205 2059.71 2130.70 ipm adult stage
> 34 sza_2166.1_0 2166.195 2130.70 2201.69 ipm adult stage
> 35 sza_2237.1_0 2237.185 2201.69 2272.68 ipm adult stage
> 36 sza_2308.1_0 2308.175 2272.68 2343.67 ipm adult stage
> 37 sza_2379.1_0 2379.165 2343.67 2414.66 ipm adult stage
> 38 sza_2450.1_0 2450.155 2414.66 2485.65 ipm adult stage
> 39 sza_2521.1_0 2521.145 2485.65 2556.64 ipm adult stage
> 40 sza_2592.1_0 2592.135 2556.64 2627.63 ipm adult stage
> 41 sza_2663.1_0 2663.125 2627.63 2698.62 ipm adult stage
> 42 sza_2734.1_0 2734.115 2698.62 2769.61 ipm adult stage
> 43 sza_2805.1_0 2805.105 2769.61 2840.60 ipm adult stage
> 44 sza_2876.0_0 2876.095 2840.60 2911.59 ipm adult stage
> 45 sza_2947.0_0 2947.085 2911.59 2982.58 ipm adult stage
> 46 sza_3018.0_0 3018.075 2982.58 3053.57 ipm adult stage
> 47 sza_3089.0_0 3089.065 3053.57 3124.56 ipm adult stage
> 48 sza_3160.0_0 3160.055 3124.56 3195.55 ipm adult stage
> 49 sza_3231.0_0 3231.045 3195.55 3266.54 ipm adult stage
> 50 sza_3302.0_0 3302.035 3266.54 3337.53 ipm adult stage
> 51 sza_3373.0_0 3373.025 3337.53 3408.52 ipm adult stage
> 52 sza_3444.0_0 3444.015 3408.52 3479.51 ipm adult stage
> 53 sza_3515.0_0 3515.005 3479.51 3550.50 ipm adult stage
> 54 sza_3585.9_0 3585.995 3550.50 3621.49 ipm adult stage
> 55 sza_3656.9_0 3656.985 3621.49 3692.48 ipm adult stage
> 56 sza_3727.9_0 3727.975 3692.48 3763.47 ipm adult stage
> 57 sza_3798.9_0 3798.965 3763.47 3834.46 ipm adult stage
> 58 sza_3869.9_0 3869.955 3834.46 3905.45 ipm adult stage
> 59 sza_3940.9_0 3940.945 3905.45 3976.44 ipm adult stage
> 60 sza_4011.9_0 4011.935 3976.44 4047.43 ipm adult stage
> 61 sza_4082.9_0 4082.925 4047.43 4118.42 ipm adult stage
> 62 sza_4153.9_0 4153.915 4118.42 4189.41 ipm adult stage
> 63 sza_4224.9_0 4224.905 4189.41 4260.40 ipm adult stage
> 64 sza_4295.8_0 4295.895 4260.40 4331.39 ipm adult stage
> 65 sza_4366.8_0 4366.885 4331.39 4402.38 ipm adult stage
> 66 sza_4437.8_0 4437.875 4402.38 4473.37 ipm adult stage
> 67 sza_4508.8_0 4508.865 4473.37 4544.36 ipm adult stage
> 68 sza_4579.8_0 4579.855 4544.36 4615.35 ipm adult stage
> 69 sza_4650.8_0 4650.845 4615.35 4686.34 ipm adult stage
> 70 sza_4721.8_0 4721.835 4686.34 4757.33 ipm adult stage
> 71 sza_4792.8_0 4792.825 4757.33 4828.32 ipm adult stage
> 72 sza_4863.8_0 4863.815 4828.32 4899.31 ipm adult stage
> 73 sza_4934.8_0 4934.805 4899.31 4970.30 ipm adult stage
> 74 sza_5005.7_0 5005.795 4970.30 5041.29 ipm adult stage
> 75 sza_5076.7_0 5076.785 5041.29 5112.28 ipm adult stage
> 76 sza_5147.7_0 5147.775 5112.28 5183.27 ipm adult stage
> 77 sza_5218.7_0 5218.765 5183.27 5254.26 ipm adult stage
> 78 sza_5289.7_0 5289.755 5254.26 5325.25 ipm adult stage
> 79 sza_5360.7_0 5360.745 5325.25 5396.24 ipm adult stage
> 80 sza_5431.7_0 5431.735 5396.24 5467.23 ipm adult stage
> 81 sza_5502.7_0 5502.725 5467.23 5538.22 ipm adult stage
> 82 sza_5573.7_0 5573.715 5538.22 5609.21 ipm adult stage
> 83 sza_5644.7_0 5644.705 5609.21 5680.20 ipm adult stage
> 84 sza_5715.6_0 5715.695 5680.20 5751.19 ipm adult stage
> 85 sza_5786.6_0 5786.685 5751.19 5822.18 ipm adult stage
> 86 sza_5857.6_0 5857.675 5822.18 5893.17 ipm adult stage
> 87 sza_5928.6_0 5928.665 5893.17 5964.16 ipm adult stage
> 88 sza_5999.6_0 5999.655 5964.16 6035.15 ipm adult stage
> 89 sza_6070.6_0 6070.645 6035.15 6106.14 ipm adult stage
> 90 sza_6141.6_0 6141.635 6106.14 6177.13 ipm adult stage
> 91 sza_6212.6_0 6212.625 6177.13 6248.12 ipm adult stage
> 92 sza_6283.6_0 6283.615 6248.12 6319.11 ipm adult stage
> 93 sza_6354.6_0 6354.605 6319.11 6390.10 ipm adult stage
> 94 sza_6425.5_0 6425.595 6390.10 6461.09 ipm adult stage
> 95 sza_6496.5_0 6496.585 6461.09 6532.08 ipm adult stage
> 96 sza_6567.5_0 6567.575 6532.08 6603.07 ipm adult stage
> 97 sza_6638.5_0 6638.565 6603.07 6674.06 ipm adult stage
> 98 sza_6709.5_0 6709.555 6674.06 6745.05 ipm adult stage
> 99 sza_6780.5_0 6780.545 6745.05 6816.04 ipm adult stage
> 100 sza_6851.5_0 6851.535 6816.04 6887.03 ipm adult stage
> 101 sza_6922.5_0 6922.525 6887.03 6958.02 ipm adult stage
> 102 sza_6993.5_0 6993.515 6958.02 7029.01 ipm adult stage
> 103 sza_7064.5_0 7064.505 7029.01 7100.00 ipm adult stage
The function sf_create() has created our mesh points and associated size bins. This is in addition to the discrete stages covering the dormant seed, seedling, and dormant adult stages. Of course, we could have made this even more complex. For example, we could have created two sets of stages to use as the upper and lower bounds of two sets of continuous size states that differ in some key characteristic, such as reproductive status. We also could have set up the IPM using two or three different size metrics and used the ipm option within each or only some of them. This function provides a great deal of flexibility and power to create exactly the life history model that you may want.
### 7.2.2 Formatting demographic data and testing distribution assumptions
Next, we will format the data into hfv format. Because this is an IPM, we need to estimate linear models of vital rates. This will require us either to fix or to remove NAs in size and fecundity, so we will set NAas0 = TRUE. We will also set NRasRep = TRUE because we will assume that all adult stages other than dormancy are reproductive, and there are mature individuals in the dataset that do not reproduce but need to be included in reproductive stages (setting this option to TRUE makes sure that the reproductive status of non-reproductive individuals in potentially reproductive stages is set to 1, although the actual fecundity is not altered). Finally, we will ignore patches marked in the dataset and estimate matrices only for the full population in order to preserve statistical power for vital rate modeling in historical IPM analysis.
In the input to verticalize3() below, we utilize a repeating pattern of variable names arranged in the same order for each monitoring occasion. This arrangement allows us to enter only the first variable in each set, as long as noyears and blocksize are set properly and no gaps or shuffles appear in the dataset. The data management functions that we have created for lefko3 do not require such repeating patterns, but they do make the required input in the function much shorter and more succinct. Note also that we will use a new individual identity variable that incorporates the patch identity (indiv_id), to prevent repeat individual identities across patches.
lathyrus$indiv_id <- paste(lathyrus$SUBPLOT, lathyrus$GENET)

lathvertipm <- verticalize3(lathyrus, noyears = 4, firstyear = 1988,
  individcol = "indiv_id", blocksize = 9, juvcol = "Seedling1988",
  sizeacol = "Volume88", repstracol = "FCODE88", fecacol = "Intactseed88",
  deadacol = "Dead1988", nonobsacol = "Dormant1988",
  stageassign = lathframeipm, stagesize = "sizea",
  censorcol = "Missing1988", censorkeep = NA, censorRepeat = TRUE,
  censor = TRUE, NAas0 = TRUE, NRasRep = TRUE)

summary_hfv(lathvertipm)
> 
> This hfv dataset contains 2527 rows, 42 variables, 1 population,
> 1 patch, 1053 individuals, and 3 time steps.
> rowid popid patchid individ year2
> Min. : 1.0 :2527 :2527 Length:2527 Min. :1988
> 1st Qu.: 237.0 Class :character 1st Qu.:1988
> Median : 522.0 Mode :character Median :1989
> Mean : 537.3 Mean :1989
> 3rd Qu.: 820.5 3rd Qu.:1990
> Max. :1118.0 Max. :1990
> firstseen lastseen obsage obslifespan sizea1
> Min. : 0 Min. : 0 Min. :1.000 Min. :0.000 Min. : 0.0
> 1st Qu.:1988 1st Qu.:1991 1st Qu.:1.000 1st Qu.:2.000 1st Qu.: 0.0
> Median :1988 Median :1991 Median :2.000 Median :3.000 Median : 9.0
> Mean :1979 Mean :1981 Mean :1.822 Mean :2.437 Mean : 387.3
> 3rd Qu.:1988 3rd Qu.:1991 3rd Qu.:2.000 3rd Qu.:3.000 3rd Qu.: 624.6
> Max. :1990 Max. :1991 Max. :3.000 Max. :3.000 Max. :7032.0
> repstra1 feca1 juvgiven1 obsstatus1
> Min. :0.0000 Min. : 0.0000 Min. :0.00000 Min. :0.0000
> 1st Qu.:0.0000 1st Qu.: 0.0000 1st Qu.:0.00000 1st Qu.:0.0000
> Median :0.0000 Median : 0.0000 Median :0.00000 Median :1.0000
> Mean :0.1805 Mean : 0.9889 Mean :0.06292 Mean :0.5548
> 3rd Qu.:0.0000 3rd Qu.: 0.0000 3rd Qu.:0.00000 3rd Qu.:1.0000
> Max. :1.0000 Max. :66.0000 Max. :1.00000 Max. :1.0000
> repstatus1 fecstatus1 matstatus1 alive1
> Min. :0.0000 Min. :0.00000 Min. :0.0000 Min. :0.0000
> 1st Qu.:0.0000 1st Qu.:0.00000 1st Qu.:0.0000 1st Qu.:0.0000
> Median :0.0000 Median :0.00000 Median :1.0000 Median :1.0000
> Mean :0.1805 Mean :0.08983 Mean :0.5204 Mean :0.5833
> 3rd Qu.:0.0000 3rd Qu.:0.00000 3rd Qu.:1.0000 3rd Qu.:1.0000
> Max. :1.0000 Max. :1.00000 Max. :1.0000 Max. :1.0000
> stage1 stage1index sizea2 repstra2
> Length:2527 Min. : 0.000 Min. : 0.0 Min. :0.000
> Class :character 1st Qu.: 0.000 1st Qu.: 12.6 1st Qu.:0.000
> Mode :character Median : 3.000 Median : 105.6 Median :0.000
> Mean : 7.405 Mean : 480.8 Mean :0.237
> 3rd Qu.: 12.000 3rd Qu.: 732.5 3rd Qu.:0.000
> Max. :103.000 Max. :7032.0 Max. :1.000
> feca2 juvgiven2 obsstatus2 repstatus2
> Min. : 0.00 Min. :0.0000 Min. :0.0000 Min. :0.000
> 1st Qu.: 0.00 1st Qu.:0.0000 1st Qu.:1.0000 1st Qu.:0.000
> Median : 0.00 Median :0.0000 Median :1.0000 Median :0.000
> Mean : 1.14 Mean :0.1112 Mean :0.9454 Mean :0.237
> 3rd Qu.: 0.00 3rd Qu.:0.0000 3rd Qu.:1.0000 3rd Qu.:0.000
> Max. :66.00 Max. :1.0000 Max. :1.0000 Max. :1.000
> fecstatus2 matstatus2 alive2 stage2
> Min. :0.0000 Min. :0.0000 Min. :1 Length:2527
> 1st Qu.:0.0000 1st Qu.:1.0000 1st Qu.:1 Class :character
> Median :0.0000 Median :1.0000 Median :1 Mode :character
> Mean :0.1053 Mean :0.8888 Mean :1
> 3rd Qu.:0.0000 3rd Qu.:1.0000 3rd Qu.:1
> Max. :1.0000 Max. :1.0000 Max. :1
> stage2index sizea3 repstra3 feca3 juvgiven3
> Min. : 2.00 Min. : 0.0 Min. :0.000 Min. : 0.00 Min. :0
> 1st Qu.: 4.00 1st Qu.: 10.0 1st Qu.:0.000 1st Qu.: 0.00 1st Qu.:0
> Median : 5.00 Median : 72.0 Median :0.000 Median : 0.00 Median :0
> Mean : 10.12 Mean : 389.7 Mean :0.218 Mean : 1.29 Mean :0
> 3rd Qu.: 14.00 3rd Qu.: 543.4 3rd Qu.:0.000 3rd Qu.: 0.00 3rd Qu.:0
> Max. :103.00 Max. :6645.8 Max. :1.000 Max. :66.00 Max. :0
> obsstatus3 repstatus3 fecstatus3 matstatus3
> Min. :0.0000 Min. :0.000 Min. :0.0000 Min. :1
> 1st Qu.:1.0000 1st Qu.:0.000 1st Qu.:0.0000 1st Qu.:1
> Median :1.0000 Median :0.000 Median :0.0000 Median :1
> Mean :0.8346 Mean :0.218 Mean :0.1116 Mean :1
> 3rd Qu.:1.0000 3rd Qu.:0.000 3rd Qu.:0.0000 3rd Qu.:1
> Max. :1.0000 Max. :1.000 Max. :1.0000 Max. :1
> alive3 stage3 stage3index
> Min. :0.0000 Length:2527 Min. : 0.000
> 1st Qu.:1.0000 Class :character 1st Qu.: 4.000
> Median :1.0000 Mode :character Median : 5.000
> Mean :0.9224 Mean : 8.735
> 3rd Qu.:1.0000 3rd Qu.:11.000
> Max. :1.0000 Max. :97.000

Before we move on to the next steps in analysis, let's take a closer look at fecundity. In this dataset, fecundity is mostly a count of intact seeds, and only differs in six cases where the seed output was estimated based on other models. To see this, try the following code.

writeLines(paste0("Length of fecundity in t+1: ", length(lathvertipm$feca3)))
> Length of fecundity in t+1: 2527
writeLines(paste0("Total non-integer entries in fecundity in occasion t+1: ",
length(which(lathvertipm$feca3 != round(lathvertipm$feca3)))))
> Total non-integer entries in fecundity in occasion t+1: 0
writeLines(paste0("\nLength of fecundity in t: ", length(lathvertipm$feca2))) > > Length of fecundity in t: 2527 writeLines(paste0("Total non-integer entries in fecundity in occasion t: ", length(which(lathvertipm$feca2 != round(lathvertipm$feca2))))) > Total non-integer entries in fecundity in occasion t: 6 writeLines(paste0("\nLength of fecundity in t-1: ", length(lathvertipm$feca1)))
>
> Length of fecundity in t-1: 2527
writeLines(paste0("Total non-integer entries in fecundity in occasion t-1: ",
length(which(lathvertipm$feca1 != round(lathvertipm$feca1)))))
> Total non-integer entries in fecundity in occasion t-1: 6
We see that we have quite a bit of fecundity data, and that it is overwhelmingly but not exclusively integer. So, we can either treat fecundity as a continuous variable, or round the values and treat fecundity as a count variable. We will choose the latter approach in this analysis.
lathvertipm$feca3 <- round(lathvertipm$feca3)
lathvertipm$feca2 <- round(lathvertipm$feca2)
lathvertipm$feca1 <- round(lathvertipm$feca1)
Let’s now look at size. Ideally, we would assume the Gaussian distribution for this continuous variable. However, strong skew might recommend the gamma distribution. Let’s view a density plot (figure 7.3).
plot(density(lathvertipm$sizea2))

This distribution appears to be arguably right-skewed, but the mean appears to be near the boundary value of zero. We have absorbed the size of zero into the Dormant stage, but there might be reason to avoid the gamma distribution because of issues with zeros. So, we will try the Gaussian distribution here.

Although we wish to treat fecundity as a count, it is still not clear what underlying distribution we should use. This package currently allows eight choices: Gaussian, gamma, Poisson, negative binomial, zero-inflated Poisson, zero-inflated negative binomial, zero-truncated Poisson, and zero-truncated negative binomial. To assess which to use, we should first assess whether the mean and variance of the count are equal using a dispersion test. The Poisson distribution assumes that the mean and variance are equal, and we can test this assumption using a chi-squared test. If the variance is not significantly different from the mean, then we may use some variant of the Poisson distribution. If the data are significantly over- or under-dispersed, then we should use the negative binomial distribution. If fecundity of zero is possible in reproductive stages, as in cases where reproductive status is defined by flowering rather than by offspring production, then we should also test whether the number of zeros is significantly greater than expected under these distributions, and use a zero-inflated distribution if so (if fecundity does not equal zero in any reproductive individuals at all, then we should use a zero-truncated distribution). Let's look at a plot of the distribution of fecundity (figure 7.4).

hist(subset(lathvertipm, repstatus2 == 1)$feca2, main = "Fecundity",
  xlab = "Intact seeds produced in occasion t")
We see that the distribution seems to conform to a classic count variable with a very low mean value. The first bar suggests that there may be too many zeros to use a standard Poisson or negative binomial distribution. But to make that decision, let’s formally test the assumptions that the mean equals the variance, and that there are not excess zeros. Both tests use chi-squared distribution-based approaches, with the zero-inflation test based on Broek (1995). This is done automatically via the hfv_qc() function.
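To get a feel for what these two checks involve, a hand-rolled approximation is sketched below. This is for intuition only; hfv_qc() is the supported interface, and its exact test statistics may differ from this simplification.

fec <- subset(lathvertipm, repstatus2 == 1)$feca2
lam <- mean(fec)                    # Poisson would force variance = mean = lam
var(fec) / lam                      # dispersion ratio; approximately 1 under Poisson
obs0 <- sum(fec == 0)               # observed zeros among reproductive individuals
exp0 <- length(fec) * dpois(0, lam) # zeros expected under Poisson(lam)
c(observed = obs0, expected = exp0)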
hfv_qc(lathvertipm, vitalrates = c("surv", "obs", "size", "fec"),
juvestimate = "Sdl", indiv = "individ", year = "year2")
> Survival:
>
> Data subset has 43 variables and 2246 transitions.
>
> Variable alive3 has 0 missing values.
> Variable alive3 is a binomial variable.
>
>
> Observation status:
>
> Data subset has 43 variables and 2121 transitions.
>
> Variable obsstatus3 has 0 missing values.
> Variable obsstatus3 is a binomial variable.
>
>
> Primary size:
>
> Data subset has 43 variables and 1916 transitions.
>
> Variable sizea3 has 0 missing values.
> Variable sizea3 appears to be a floating point variable.
> 1256 elements are not integers.
> The minimum value of sizea3 is 3.4 and the maximum is 6646.
> The mean value of sizea3 is 512.8 and the variance is 507200.
> The value of the Shapiro-Wilk test of normality is 0.7134 with P = 3.014e-49.
> Variable sizea3 differs significantly from a Gaussian distribution.
>
> Variable sizea3 is fully positive, lacking even 0s.
>
>
> Reproductive status:
>
> Data subset has 43 variables and 1916 transitions.
>
> Variable repstatus3 has 0 missing values.
> Variable repstatus3 is a binomial variable.
>
>
> Fecundity:
>
> Data subset has 43 variables and 2246 transitions.
>
> Variable feca2 has 0 missing values.
> Variable feca2 appears to be an integer variable.
>
> Variable feca2 is fully non-negative.
>
> Overdispersion test:
> Mean feca2 is 1.282
> The variance in feca2 is 23.21
> The probability of this dispersion level by chance assuming that
> the true mean feca2 = variance in feca2,
> and an alternative hypothesis of overdispersion, is 0
> Variable feca2 is significantly overdispersed.
>
> Zero-inflation and truncation tests:
> Mean lambda in feca2 is 0.2774
> The actual number of 0s in feca2 is 1980
> The expected number of 0s in feca2 under the null hypothesis is 623
> The probability of this deviation in 0s from expectation by chance is 0
> Variable feca2 is significantly zero-inflated.
>
>
> Juvenile survival:
>
> Data subset has 43 variables and 281 transitions.
>
> Variable alive3 has 0 missing values.
> Variable alive3 is a binomial variable.
>
>
> Juvenile observation status:
>
> Data subset has 43 variables and 210 transitions.
>
> Variable obsstatus3 has 0 missing values.
> Variable obsstatus3 is a binomial variable.
>
>
> Juvenile primary size:
>
> Data subset has 43 variables and 193 transitions.
>
> Variable sizea3 has 0 missing values.
> Variable sizea3 appears to be a floating point variable.
> 127 elements are not integers.
> The minimum value of sizea3 is 2.1 and the maximum is 61.
> The mean value of sizea3 is 11.23 and the variance is 50.81.
> The value of the Shapiro-Wilk test of normality is 0.5997 with P = 5.72e-21.
> Variable sizea3 differs significantly from a Gaussian distribution.
>
> Variable sizea3 is fully positive, lacking even 0s.
>
>
> Juvenile reproductive status:
>
> Data subset has 43 variables and 193 transitions.
>
> Variable repstatus3 has 0 missing values.
> Variable repstatus3 is a binomial variable.
>
>
> Juvenile maturity status:
>
> Data subset has 43 variables and 210 transitions.
>
> Variable matstatus3 has 0 missing values.
> Variable matstatus3 is a binomial variable.
Such significant results for both tests show us that we should use a zero-inflated negative binomial distribution for fecundity.
Now we will create supplement tables to provide extra data for matrix estimation that is not included in the main demographic dataset. Specifically, we will provide the seed dormancy probability and germination rate, which are given as transitions from the dormant seed stage to another year of seed dormancy or to the germinated seedling stage, respectively. We assume that the germination rate is the same regardless of whether seed was produced in the previous year or has been in the seedbank for longer. We will incorporate these terms both as fixed constants for specific transitions within the resulting matrices, and as multipliers for fecundity, since ultimately fecundity will be estimated as the production of seed multiplied by the seed germination rate or the seed dormancy rate. Because some individuals stay in the seedling stage for only one year, and the seed stage itself cannot be observed and so does not exist in the dataset, we will also set a proxy set of transitions so that R assumes that the transitions from seed in occasion t-1 to seedling in occasion t to all mature stages in occasion t+1 are equal to the equivalent transitions from seedling in both occasions t-1 and t.
We will start with the ahistorical case, and then move on to the historical case, where we also need to input the corresponding stages in occasion t-1 and transition types from occasion t-1 to t for each transition. Note the use of the "rep", "mat", and "npr" designations in stage1: these are abbreviations telling R to use all reproductive stages, all mature stages, or all non-propagule stages (mature stages plus the seedling stage), respectively.
lathsupp2 <- supplemental(stage3 = c("Sd", "Sdl", "Sd", "Sdl"),
stage2 = c("Sd", "Sd", "rep", "rep"),
givenrate = c(0.345, 0.054, NA, NA),
multiplier = c(NA, NA, 0.345, 0.054),
type = c(1, 1, 3, 3), stageframe = lathframeipm, historical = FALSE)
lathsupp3 <- supplemental(stage3 = c("Sd","Sd","Sdl","Sdl","npr","Sd","Sdl"),
stage2 = c("Sd", "Sd", "Sd", "Sd", "Sdl", "rep", "rep"),
stage1 = c("Sd", "rep", "Sd", "rep", "Sd", "mat", "mat"),
eststage3 = c(NA, NA, NA, NA, "npr", NA, NA),
eststage2 = c(NA, NA, NA, NA, "Sdl", NA, NA),
eststage1 = c(NA, NA, NA, NA, "Sdl", NA, NA),
givenrate = c(0.345, 0.345, 0.054, 0.054, NA, NA, NA),
multiplier = c(NA, NA, NA, NA, NA, 0.345, 0.054),
type = c(1, 1, 1, 1, 1, 3, 3), type_t12 = c(1, 2, 1, 2, 1, 1, 1),
stageframe = lathframeipm, historical = TRUE)
lathsupp2
> stage3 stage2 stage1 eststage3 eststage2 eststage1 givenrate multiplier
> 1 Sd Sd <NA> <NA> <NA> <NA> 0.345 1.000
> 2 Sdl Sd <NA> <NA> <NA> <NA> 0.054 1.000
> 3 Sd rep <NA> <NA> <NA> <NA> NA 0.345
> 4 Sdl rep <NA> <NA> <NA> <NA> NA 0.054
> convtype convtype_t12
> 1 1 1
> 2 1 1
> 3 3 1
> 4 3 1
lathsupp3
> stage3 stage2 stage1 eststage3 eststage2 eststage1 givenrate multiplier
> 1 Sd Sd Sd <NA> <NA> <NA> 0.345 1.000
> 2 Sd Sd rep <NA> <NA> <NA> 0.345 1.000
> 3 Sdl Sd Sd <NA> <NA> <NA> 0.054 1.000
> 4 Sdl Sd rep <NA> <NA> <NA> 0.054 1.000
> 5 npr Sdl Sd npr Sdl Sdl NA 1.000
> 6 Sd rep mat <NA> <NA> <NA> NA 0.345
> 7 Sdl rep mat <NA> <NA> <NA> NA 0.054
> convtype convtype_t12
> 1 1 1
> 2 1 2
> 3 1 1
> 4 1 2
> 5 1 1
> 6 3 1
> 7 3 1
### 7.2.3 Estimating vital rate models
Integral projection models require functions of vital rates to populate them. Here, we will develop these functions as linear models using modelsearch(). First, we will create the historical models to assess whether history is a significant influence on vital rates. Note that we have set the appropriate size and fecundity distributions through the settings sizedist = "Gaussian", fecdist = "negbin", and fec.zero = TRUE. We have also set suite = "size", because we have not stratified our size classes by reproductive status.
lathmodels3ipm <- modelsearch(lathvertipm, historical = TRUE, approach = "mixed",
suite = "size", vitalrates = c("surv", "obs", "size", "fec"),
juvestimate = "Sdl", bestfit = "AICc&k", sizedist = "Gaussian",
fecdist = "negbin", fec.zero = TRUE, indiv = "individ", year = "year2",
year.as.random = TRUE, juvsize = TRUE, show.model.tables = TRUE, quiet = TRUE)
Now let’s see a summary.
summary(lathmodels3ipm)
> This LefkoMod object includes 7 linear models.
> Best-fit model criterion used: aicc&k
>
>
>
> Survival model:
>
> Call: stats::glm(formula = alive3 ~ sizea1 + sizea2 + sizea1:sizea2 +
> 1, family = "binomial", data = subdata)
>
> Coefficients:
> (Intercept) sizea1 sizea2 sizea1:sizea2
> 2.161e+00 1.704e-03 1.145e-03 -4.636e-07
>
> Degrees of Freedom: 2245 Total (i.e. Null); 2242 Residual
> Null Deviance: 965.1
> Residual Deviance: 892.4 AIC: 900.4
>
>
>
> Observation model:
> Generalized linear mixed model fit by maximum likelihood (Laplace
> Approximation) [glmerMod]
> Family: binomial ( logit )
> Formula: obsstatus3 ~ (1 | year2)
> Data: subdata
> AIC BIC logLik deviance df.resid
> 1351.5346 1362.8539 -673.7673 1347.5346 2119
> Random effects:
> Groups Name Std.Dev.
> year2 (Intercept) 0
> Number of obs: 2121, groups: year2, 3
> Fixed Effects:
> (Intercept)
> 2.235
> optimizer (Nelder_Mead) convergence code: 0 (OK) ; 0 optimizer warnings; 1 lme4 warnings
>
>
>
> Size model:
> Linear mixed model fit by REML ['lmerMod']
> Formula: sizea3 ~ sizea1 + sizea2 + (1 | year2) + (1 | individ) + sizea1:sizea2
> Data: subdata
> REML criterion at convergence: 29132.25
> Random effects:
> Groups Name Std.Dev.
> individ (Intercept) 0.0
> year2 (Intercept) 247.2
> Residual 480.4
> Number of obs: 1916, groups: individ, 845; year2, 3
> Fixed Effects:
> (Intercept) sizea1 sizea2 sizea1:sizea2
> 8.998e+01 3.119e-01 5.954e-01 -9.417e-05
> fit warnings:
> Some predictor variables are on very different scales: consider rescaling
> optimizer (nloptwrap) convergence code: 0 (OK) ; 0 optimizer warnings; 1 lme4 warnings
>
>
>
> Secondary size model:
> [1] 1
>
>
>
> Tertiary size model:
> [1] 1
>
>
>
> Reproductive status model:
> [1] 1
>
>
>
> Fecundity model:
> Formula: feca2 ~ sizea2 + (1 | year2) + (1 | individ)
> Zero inflation: ~sizea1 + sizea2 + (1 | year2) + (1 | individ)
> Data: subdata
> AIC BIC logLik df.resid
> 2895.248 2952.417 -1437.624 2236
> Random-effects (co)variances:
>
> Conditional model:
> Groups Name Std.Dev.
> year2 (Intercept) 0.1241
> individ (Intercept) 0.3508
>
> Zero-inflation model:
> Groups Name Std.Dev.
> year2 (Intercept) 0.3979
> individ (Intercept) 1.0249
>
> Number of obs: 2246 / Conditional model: year2, 3; individ, 931 / Zero-inflation model: year2, 3; individ, 931
>
> Dispersion parameter for nbinom2 family (): 2.42
>
> Fixed Effects:
>
> Conditional model:
> (Intercept) sizea2
> 1.6982784 0.0003123
>
> Zero-inflation model:
> (Intercept) sizea1 sizea2
> 4.1155831 -0.0004907 -0.0017048
>
>
> Juvenile survival model:
> Generalized linear mixed model fit by maximum likelihood (Laplace
> Approximation) [glmerMod]
> Family: binomial ( logit )
> Formula: alive3 ~ (1 | year2) + (1 | individ)
> Data: subdata
> AIC BIC logLik deviance df.resid
> 323.6696 334.5847 -158.8348 317.6696 278
> Random effects:
> Groups Name Std.Dev.
> individ (Intercept) 0.0003658
> year2 (Intercept) 0.0000000
> Number of obs: 281, groups: individ, 281; year2, 3
> Fixed Effects:
> (Intercept)
> 1.084
> optimizer (Nelder_Mead) convergence code: 0 (OK) ; 0 optimizer warnings; 1 lme4 warnings
>
>
>
> Juvenile observation model:
> Generalized linear mixed model fit by maximum likelihood (Laplace
> Approximation) [glmerMod]
> Family: binomial ( logit )
> Formula: obsstatus3 ~ sizea2 + (1 | year2) + (1 | individ)
> Data: subdata
> AIC BIC logLik deviance df.resid
> 61.3733 74.7617 -26.6867 53.3733 206
> Random effects:
> Groups Name Std.Dev.
> individ (Intercept) 53.003
> year2 (Intercept) 1.206
> Number of obs: 210, groups: individ, 210; year2, 3
> Fixed Effects:
> (Intercept) sizea2
> 12.83279 0.03985
> optimizer (Nelder_Mead) convergence code: 0 (OK) ; 0 optimizer warnings; 1 lme4 warnings
>
>
>
> Juvenile size model:
> Linear mixed model fit by REML ['lmerMod']
> Formula: sizea3 ~ sizea2 + (1 | year2)
> Data: subdata
> REML criterion at convergence: 1243.682
> Random effects:
> Groups Name Std.Dev.
> year2 (Intercept) 1.955
> Residual 5.995
> Number of obs: 193, groups: year2, 3
> Fixed Effects:
> (Intercept) sizea2
> 3.0848 0.8465
>
>
>
> Juvenile secondary size model:
> [1] 1
>
>
>
> Juvenile tertiary size model:
> [1] 1
>
>
>
> Juvenile reproduction model:
> [1] 1
>
>
>
> Juvenile maturity model:
> [1] 1
>
>
>
>
>
> Number of models in survival table: 5
>
> Number of models in observation table: 5
>
> Number of models in size table: 5
>
> Number of models in secondary size table: 1
>
> Number of models in tertiary size table: 1
>
> Number of models in reproduction status table: 1
>
> Number of models in fecundity table: 25
>
> Number of models in juvenile survival table: 2
>
> Number of models in juvenile observation table: 2
>
> Number of models in juvenile size table: 2
>
> Number of models in juvenile secondary size table: 1
>
> Number of models in juvenile tertiary size table: 1
>
> Number of models in juvenile reproduction table: 1
>
> Number of models in juvenile maturity table: 1
>
>
>
>
>
> General model parameter names (column 1), and
> specific names used in these models (column 2):
> parameter_names mainparams
> 1 time t year2
> 2 individual individ
> 3 patch patch
> 4 alive in time t+1 surv3
> 5 observed in time t+1 obs3
> 6 sizea in time t+1 size3
> 7 sizeb in time t+1 sizeb3
> 8 sizec in time t+1 sizec3
> 9 reproductive status in time t+1 repst3
> 10 fecundity in time t+1 fec3
> 11 fecundity in time t fec2
> 12 sizea in time t size2
> 13 sizea in time t-1 size1
> 14 sizeb in time t sizeb2
> 15 sizeb in time t-1 sizeb1
> 16 sizec in time t sizec2
> 17 sizec in time t-1 sizec1
> 18 reproductive status in time t repst2
> 19 reproductive status in time t-1 repst1
> 20 maturity status in time t+1 matst3
> 21 maturity status in time t matst2
> 22 age in time t age
> 23 density in time t density
> 24 individual covariate a in time t indcova2
> 25 individual covariate a in time t-1 indcova1
> 26 individual covariate b in time t indcovb2
> 27 individual covariate b in time t-1 indcovb1
> 28 individual covariate c in time t indcovc2
> 29 individual covariate c in time t-1 indcovc1
> 30 stage group in time t group2
> 31 stage group in time t-1 group1
>
>
>
>
>
> Quality control:
>
> Survival model estimated with 931 individuals and 2246 individual transitions.
> Survival model accuracy is 0.944.
> Observation status model estimated with 858 individuals and 2121 individual transitions.
> Observation status model accuracy is 0.903.
> Primary size model estimated with 845 individuals and 1916 individual transitions.
> Primary size model R-squared is 0.546.
> Secondary size model not estimated.
> Tertiary size model not estimated.
> Reproductive status model not estimated.
> Fecundity model estimated with 931 individuals and 2246 individual transitions.
> Fecundity model R-squared is 0.443.
> Juvenile survival model estimated with 281 individuals and 281 individual transitions.
> Juvenile survival model accuracy is 0.747.
> Juvenile observation status model estimated with 210 individuals and 210 individual transitions.
> Juvenile observation status model accuracy is 1.
> Juvenile primary size model estimated with 193 individuals and 193 individual transitions.
> Juvenile primary size model R-squared is 0.303.
> Juvenile secondary size model not estimated.
> Juvenile tertiary size model not estimated.
> Juvenile reproductive status model not estimated.
> Juvenile maturity status model not estimated.
We see the influence of history on survival, size, and fecundity. So, the historical IPM is the correct choice here, although the R2 values for primary size and fecundity are both quite low. However, we will also create an ahistorical IPM for comparison. For that purpose, we will create the ahistorical vital rate model set.
lathmodels2ipm <- modelsearch(lathvertipm, historical = FALSE,
approach = "mixed", suite = "size", juvestimate = "Sdl",
vitalrates = c("surv", "obs", "size", "fec"), bestfit = "AICc&k",
sizedist = "gaussian", fecdist = "negbin", fec.zero = TRUE, indiv = "individ",
year = "year2", year.as.random = TRUE, juvsize = TRUE,
show.model.tables = TRUE, quiet = TRUE)
Let’s see a summary.
summary(lathmodels2ipm)
> This LefkoMod object includes 7 linear models.
> Best-fit model criterion used: aicc&k
>
>
>
> Survival model:
> Generalized linear mixed model fit by maximum likelihood (Laplace
> Approximation) [glmerMod]
> Family: binomial ( logit )
> Formula: alive3 ~ (1 | year2) + (1 | individ)
> Data: subdata
> AIC BIC logLik deviance df.resid
> 709.9383 727.0890 -351.9691 703.9383 2243
> Random effects:
> Groups Name Std.Dev.
> individ (Intercept) 14.6
> year2 (Intercept) 0.0
> Number of obs: 2246, groups: individ, 931; year2, 3
> Fixed Effects:
> (Intercept)
> 10.03
> optimizer (Nelder_Mead) convergence code: 0 (OK) ; 0 optimizer warnings; 1 lme4 warnings
>
>
>
> Observation model:
> Generalized linear mixed model fit by maximum likelihood (Laplace
> Approximation) [glmerMod]
> Family: binomial ( logit )
> Formula: obsstatus3 ~ (1 | year2) + (1 | individ)
> Data: subdata
> AIC BIC logLik deviance df.resid
> 1337.3193 1354.2982 -665.6596 1331.3193 2118
> Random effects:
> Groups Name Std.Dev.
> individ (Intercept) 1.216
> year2 (Intercept) 0.000
> Number of obs: 2121, groups: individ, 858; year2, 3
> Fixed Effects:
> (Intercept)
> 2.788
> optimizer (Nelder_Mead) convergence code: 0 (OK) ; 0 optimizer warnings; 1 lme4 warnings
>
>
>
> Size model:
> Linear mixed model fit by REML ['lmerMod']
> Formula: sizea3 ~ sizea2 + (1 | year2) + (1 | individ)
> Data: subdata
> REML criterion at convergence: 29294.15
> Random effects:
> Groups Name Std.Dev.
> individ (Intercept) 0.0
> year2 (Intercept) 210.9
> Residual 504.6
> Number of obs: 1916, groups: individ, 845; year2, 3
> Fixed Effects:
> (Intercept) sizea2
> 164.0695 0.6211
> optimizer (nloptwrap) convergence code: 0 (OK) ; 0 optimizer warnings; 1 lme4 warnings
>
>
>
> Secondary size model:
> [1] 1
>
>
>
> Tertiary size model:
> [1] 1
>
>
>
> Reproductive status model:
> [1] 1
>
>
>
> Fecundity model:
> Formula: feca2 ~ sizea2 + (1 | year2) + (1 | individ)
> Zero inflation: ~sizea2 + (1 | year2) + (1 | individ)
> Data: subdata
> AIC BIC logLik df.resid
> 2905.970 2957.422 -1443.985 2237
> Random-effects (co)variances:
>
> Conditional model:
> Groups Name Std.Dev.
> year2 (Intercept) 0.1432
> individ (Intercept) 0.3305
>
> Zero-inflation model:
> Groups Name Std.Dev.
> year2 (Intercept) 0.3898
> individ (Intercept) 1.0047
>
> Number of obs: 2246 / Conditional model: year2, 3; individ, 931 / Zero-inflation model: year2, 3; individ, 931
>
> Dispersion parameter for nbinom2 family (): 2.17
>
> Fixed Effects:
>
> Conditional model:
> (Intercept) sizea2
> 1.736911 0.000286
>
> Zero-inflation model:
> (Intercept) sizea2
> 4.014336 -0.001969
>
>
> Juvenile survival model:
> Generalized linear mixed model fit by maximum likelihood (Laplace
> Approximation) [glmerMod]
> Family: binomial ( logit )
> Formula: alive3 ~ (1 | year2) + (1 | individ)
> Data: subdata
> AIC BIC logLik deviance df.resid
> 323.6696 334.5847 -158.8348 317.6696 278
> Random effects:
> Groups Name Std.Dev.
> individ (Intercept) 0.0003658
> year2 (Intercept) 0.0000000
> Number of obs: 281, groups: individ, 281; year2, 3
> Fixed Effects:
> (Intercept)
> 1.084
> optimizer (Nelder_Mead) convergence code: 0 (OK) ; 0 optimizer warnings; 1 lme4 warnings
>
>
>
> Juvenile observation model:
> Generalized linear mixed model fit by maximum likelihood (Laplace
> Approximation) [glmerMod]
> Family: binomial ( logit )
> Formula: obsstatus3 ~ sizea2 + (1 | year2) + (1 | individ)
> Data: subdata
> AIC BIC logLik deviance df.resid
> 61.3733 74.7617 -26.6867 53.3733 206
> Random effects:
> Groups Name Std.Dev.
> individ (Intercept) 53.003
> year2 (Intercept) 1.206
> Number of obs: 210, groups: individ, 210; year2, 3
> Fixed Effects:
> (Intercept) sizea2
> 12.83279 0.03985
> optimizer (Nelder_Mead) convergence code: 0 (OK) ; 0 optimizer warnings; 1 lme4 warnings
>
>
>
> Juvenile size model:
> Linear mixed model fit by REML ['lmerMod']
> Formula: sizea3 ~ sizea2 + (1 | year2)
> Data: subdata
> REML criterion at convergence: 1243.682
> Random effects:
> Groups Name Std.Dev.
> year2 (Intercept) 1.955
> Residual 5.995
> Number of obs: 193, groups: year2, 3
> Fixed Effects:
> (Intercept) sizea2
> 3.0848 0.8465
>
>
>
> Juvenile secondary size model:
> [1] 1
>
>
>
> Juvenile tertiary size model:
> [1] 1
>
>
>
> Juvenile reproduction model:
> [1] 1
>
>
>
> Juvenile maturity model:
> [1] 1
>
>
>
>
>
> Number of models in survival table: 2
>
> Number of models in observation table: 2
>
> Number of models in size table: 2
>
> Number of models in secondary size table: 1
>
> Number of models in tertiary size table: 1
>
> Number of models in reproduction status table: 1
>
> Number of models in fecundity table: 4
>
> Number of models in juvenile survival table: 2
>
> Number of models in juvenile observation table: 2
>
> Number of models in juvenile size table: 2
>
> Number of models in juvenile secondary size table: 1
>
> Number of models in juvenile tertiary size table: 1
>
> Number of models in juvenile reproduction table: 1
>
> Number of models in juvenile maturity table: 1
>
>
>
>
>
> General model parameter names (column 1), and
> specific names used in these models (column 2):
> parameter_names mainparams
> 1 time t year2
> 2 individual individ
> 3 patch patch
> 4 alive in time t+1 surv3
> 5 observed in time t+1 obs3
> 6 sizea in time t+1 size3
> 7 sizeb in time t+1 sizeb3
> 8 sizec in time t+1 sizec3
> 9 reproductive status in time t+1 repst3
> 10 fecundity in time t+1 fec3
> 11 fecundity in time t fec2
> 12 sizea in time t size2
> 13 sizea in time t-1 size1
> 14 sizeb in time t sizeb2
> 15 sizeb in time t-1 sizeb1
> 16 sizec in time t sizec2
> 17 sizec in time t-1 sizec1
> 18 reproductive status in time t repst2
> 19 reproductive status in time t-1 repst1
> 20 maturity status in time t+1 matst3
> 21 maturity status in time t matst2
> 22 age in time t age
> 23 density in time t density
> 24 individual covariate a in time t indcova2
> 25 individual covariate a in time t-1 indcova1
> 26 individual covariate b in time t indcovb2
> 27 individual covariate b in time t-1 indcovb1
> 28 individual covariate c in time t indcovc2
> 29 individual covariate c in time t-1 indcovc1
> 30 stage group in time t group2
> 31 stage group in time t-1 group1
>
>
>
>
>
> Quality control:
>
> Survival model estimated with 931 individuals and 2246 individual transitions.
> Survival model accuracy is 0.977.
> Observation status model estimated with 858 individuals and 2121 individual transitions.
> Observation status model accuracy is 0.903.
> Primary size model estimated with 845 individuals and 1916 individual transitions.
> Primary size model R-squared is 0.499.
> Secondary size model not estimated.
> Tertiary size model not estimated.
> Reproductive status model not estimated.
> Fecundity model estimated with 931 individuals and 2246 individual transitions.
> Fecundity model R-squared is 0.43.
> Juvenile survival model estimated with 281 individuals and 281 individual transitions.
> Juvenile survival model accuracy is 0.747.
> Juvenile observation status model estimated with 210 individuals and 210 individual transitions.
> Juvenile observation status model accuracy is 1.
> Juvenile primary size model estimated with 193 individuals and 193 individual transitions.
> Juvenile primary size model R-squared is 0.303.
> Juvenile secondary size model not estimated.
> Juvenile tertiary size model not estimated.
> Juvenile reproductive status model not estimated.
> Juvenile maturity status model not estimated.
### 7.2.4 Bringing our discretized IPMs to life
The typical IPM is ahistorical and so will utilize only an ahistorical set of vital rate models to populate its matrices. Let’s do that and take a look at the result.
lathmat2ipm <- flefko2(stageframe = lathframeipm, modelsuite = lathmodels2ipm,
supplement = lathsupp2, data = lathvertipm, reduce = FALSE)
summary(lathmat2ipm)
>
> This ahistorical lefkoMat object contains 3 matrices.
>
> Each matrix is square with 103 rows and columns, and a total of 10609 elements.
> A total of 26947 survival transitions were estimated, with 8982.333 per matrix.
> A total of 600 fecundity transitions were estimated, with 200 per matrix.
> This lefkoMat object covers 1 population, 1 patch, and 3 time steps.
>
> Vital rate modeling quality control:
>
> Survival estimated with 931 individuals and 2246 individual transitions.
> Observation estimated with 858 individuals and 2121 individual transitions.
> Primary size estimated with 845 individuals and 1916 individual transitions.
> Secondary size transition not estimated.
> Tertiary size transition not estimated.
> Reproduction probability not estimated.
> Fecundity estimated with 931 individuals and 2246 individual transitions.
> Juvenile survival estimated with 281 individuals and 281 individual transitions.
> Juvenile observation estimated with 210 individuals and 210 individual transitions.
> Juvenile primary size estimated with 193 individuals and 193 individual transitions.
> Juvenile secondary size transition not estimated.
> Juvenile tertiary size transition not estimated.
> Juvenile reproduction probability not estimated.
> Juvenile maturity transition probability not estimated.
>
> Survival probability sum check (each matrix represented by column in order):
> [,1] [,2] [,3]
> Min. 0.000 0.000 0.000
> 1st Qu. 0.994 0.970 0.996
> Median 1.000 1.000 1.000
> Mean 0.961 0.929 0.965
> 3rd Qu. 1.000 1.000 1.000
> Max. 1.000 1.000 1.000
The ahistorical IPM is composed of three matrices, covering each of the time steps. These are large matrices - with 103 rows and columns, they include 10,609 elements each. Of these, on average 9182.33 elements in each matrix are non-zero, meaning that these matrices are not only large but also quite dense (86.6% of elements are estimated).
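We can verify this density directly. As a quick sketch (assuming the standard lefkoMat structure, in which element A holds the full projection matrices as a list), the proportion of non-zero elements in the first matrix is:
mean(lathmat2ipm$A[[1]] != 0) # proportion of non-zero elements in matrix 1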
We will now create the historical suite of matrices covering the years of study. These matrices will be extremely large - large enough that some computers might have difficulty with them. If you encounter an error message telling you that you have run out of memory, then please try this on a more powerful computer :).
lathmat3ipm <- flefko3(stageframe = lathframeipm, modelsuite = lathmodels3ipm,
supplement = lathsupp3, data = lathvertipm, reduce = FALSE)
summary(lathmat3ipm)
>
> This historical lefkoMat object contains 3 matrices.
>
> Each matrix is square with 10609 rows and columns, and a total of 112550881 elements.
> A total of 2684709 survival transitions were estimated, with 894903 per matrix.
> A total of 60600 fecundity transitions were estimated, with 20200 per matrix.
> This lefkoMat object covers 1 population, 1 patch, and 3 time steps.
>
> Vital rate modeling quality control:
>
> Survival estimated with 931 individuals and 2246 individual transitions.
> Observation estimated with 858 individuals and 2121 individual transitions.
> Primary size estimated with 845 individuals and 1916 individual transitions.
> Secondary size transition not estimated.
> Tertiary size transition not estimated.
> Reproduction probability not estimated.
> Fecundity estimated with 931 individuals and 2246 individual transitions.
> Juvenile survival estimated with 281 individuals and 281 individual transitions.
> Juvenile observation estimated with 210 individuals and 210 individual transitions.
> Juvenile primary size estimated with 193 individuals and 193 individual transitions.
> Juvenile secondary size transition not estimated.
> Juvenile tertiary size transition not estimated.
> Juvenile reproduction probability not estimated.
> Juvenile maturity transition probability not estimated.
>
> Survival probability sum check (each matrix represented by column in order):
> [,1] [,2] [,3]
> Min. 0.000 0.000 0.000
> 1st Qu. 0.993 0.986 0.992
> Median 0.998 0.998 0.998
> Mean 0.958 0.947 0.957
> 3rd Qu. 0.999 0.999 0.999
> Max. 1.000 1.000 1.000
These are giant matrices. With 10,609 rows and columns, there are a total of 112,550,881 elements per matrix. But they are also amazingly sparse - with 915,103 elements estimated, only 0.8% of elements per matrix are non-zero. The survival probability sums all look good, so we appear to have no problems with overly large given and proxy survival transitions provided through our supplemental tables.
At this stage, we have created our IPMs. Congratulations! We can also create arithmetic mean matrix versions of each, as below.
lath2ipmmean <- lmean(lathmat2ipm)
lath3ipmmean <- lmean(lathmat3ipm)
summary(lath2ipmmean)
>
> This ahistorical lefkoMat object contains 1 matrix.
>
> Each matrix is square with 103 rows and columns, and a total of 10609 elements.
> A total of 9111 survival transitions were estimated, with 9111 per matrix.
> A total of 200 fecundity transitions were estimated, with 200 per matrix.
> This lefkoMat object covers 1 population, 1 patch, and 0 time steps.
>
> Vital rate modeling quality control:
>
> Survival estimated with 931 individuals and 2246 individual transitions.
> Observation estimated with 858 individuals and 2121 individual transitions.
> Primary size estimated with 845 individuals and 1916 individual transitions.
> Secondary size transition not estimated.
> Tertiary size transition not estimated.
> Reproduction probability not estimated.
> Fecundity estimated with 931 individuals and 2246 individual transitions.
> Juvenile survival estimated with 281 individuals and 281 individual transitions.
> Juvenile observation estimated with 210 individuals and 210 individual transitions.
> Juvenile primary size estimated with 193 individuals and 193 individual transitions.
> Juvenile secondary size transition not estimated.
> Juvenile tertiary size transition not estimated.
> Juvenile reproduction probability not estimated.
> Juvenile maturity transition probability not estimated.
>
> Survival probability sum check (each matrix represented by column in order):
> [,1]
> Min. 0.000
> 1st Qu. 0.987
> Median 1.000
> Mean 0.952
> 3rd Qu. 1.000
> Max. 1.000
summary(lath3ipmmean)
>
> This historical lefkoMat object contains 1 matrix.
>
> Each matrix is square with 10609 rows and columns, and a total of 112550881 elements.
> A total of 920292 survival transitions were estimated, with 920292 per matrix.
> A total of 20200 fecundity transitions were estimated, with 20200 per matrix.
> This lefkoMat object covers 1 population, 1 patch, and 0 time steps.
>
> Vital rate modeling quality control:
>
> Survival estimated with 931 individuals and 2246 individual transitions.
> Observation estimated with 858 individuals and 2121 individual transitions.
> Primary size estimated with 845 individuals and 1916 individual transitions.
> Secondary size transition not estimated.
> Tertiary size transition not estimated.
> Reproduction probability not estimated.
> Fecundity estimated with 931 individuals and 2246 individual transitions.
> Juvenile survival estimated with 281 individuals and 281 individual transitions.
> Juvenile observation estimated with 210 individuals and 210 individual transitions.
> Juvenile primary size estimated with 193 individuals and 193 individual transitions.
> Juvenile secondary size transition not estimated.
> Juvenile tertiary size transition not estimated.
> Juvenile reproduction probability not estimated.
> Juvenile maturity transition probability not estimated.
>
> Survival probability sum check (each matrix represented by column in order):
> [,1]
> Min. 0.000
> 1st Qu. 0.989
> Median 0.998
> Mean 0.954
> 3rd Qu. 0.999
> Max. 1.000
## 7.3 Quality control
IPMs are difficult to inspect because of their size. Package lefko3 includes a number of ways to assess the overall quality of an IPM. Here, we will cover three main methods, each addressing a different aspect of the process. First, we may look at quality control information about our vital rate models. Let’s look at a summary of the ahistorical vital rate models.
summary(lathmodels2ipm)
> This LefkoMod object includes 7 linear models.
> Best-fit model criterion used: aicc&k
>
>
>
> Survival model:
> Generalized linear mixed model fit by maximum likelihood (Laplace
> Approximation) [glmerMod]
> Family: binomial ( logit )
> Formula: alive3 ~ (1 | year2) + (1 | individ)
> Data: subdata
> AIC BIC logLik deviance df.resid
> 709.9383 727.0890 -351.9691 703.9383 2243
> Random effects:
> Groups Name Std.Dev.
> individ (Intercept) 14.6
> year2 (Intercept) 0.0
> Number of obs: 2246, groups: individ, 931; year2, 3
> Fixed Effects:
> (Intercept)
> 10.03
> optimizer (Nelder_Mead) convergence code: 0 (OK) ; 0 optimizer warnings; 1 lme4 warnings
>
>
>
> Observation model:
> Generalized linear mixed model fit by maximum likelihood (Laplace
> Approximation) [glmerMod]
> Family: binomial ( logit )
> Formula: obsstatus3 ~ (1 | year2) + (1 | individ)
> Data: subdata
> AIC BIC logLik deviance df.resid
> 1337.3193 1354.2982 -665.6596 1331.3193 2118
> Random effects:
> Groups Name Std.Dev.
> individ (Intercept) 1.216
> year2 (Intercept) 0.000
> Number of obs: 2121, groups: individ, 858; year2, 3
> Fixed Effects:
> (Intercept)
> 2.788
> optimizer (Nelder_Mead) convergence code: 0 (OK) ; 0 optimizer warnings; 1 lme4 warnings
>
>
>
> Size model:
> Linear mixed model fit by REML ['lmerMod']
> Formula: sizea3 ~ sizea2 + (1 | year2) + (1 | individ)
> Data: subdata
> REML criterion at convergence: 29294.15
> Random effects:
> Groups Name Std.Dev.
> individ (Intercept) 0.0
> year2 (Intercept) 210.9
> Residual 504.6
> Number of obs: 1916, groups: individ, 845; year2, 3
> Fixed Effects:
> (Intercept) sizea2
> 164.0695 0.6211
> optimizer (nloptwrap) convergence code: 0 (OK) ; 0 optimizer warnings; 1 lme4 warnings
>
>
>
> Secondary size model:
> [1] 1
>
>
>
> Tertiary size model:
> [1] 1
>
>
>
> Reproductive status model:
> [1] 1
>
>
>
> Fecundity model:
> Formula: feca2 ~ sizea2 + (1 | year2) + (1 | individ)
> Zero inflation: ~sizea2 + (1 | year2) + (1 | individ)
> Data: subdata
> AIC BIC logLik df.resid
> 2905.970 2957.422 -1443.985 2237
> Random-effects (co)variances:
>
> Conditional model:
> Groups Name Std.Dev.
> year2 (Intercept) 0.1432
> individ (Intercept) 0.3305
>
> Zero-inflation model:
> Groups Name Std.Dev.
> year2 (Intercept) 0.3898
> individ (Intercept) 1.0047
>
> Number of obs: 2246 / Conditional model: year2, 3; individ, 931 / Zero-inflation model: year2, 3; individ, 931
>
> Dispersion parameter for nbinom2 family (): 2.17
>
> Fixed Effects:
>
> Conditional model:
> (Intercept) sizea2
> 1.736911 0.000286
>
> Zero-inflation model:
> (Intercept) sizea2
> 4.014336 -0.001969
>
>
> Juvenile survival model:
> Generalized linear mixed model fit by maximum likelihood (Laplace
> Approximation) [glmerMod]
> Family: binomial ( logit )
> Formula: alive3 ~ (1 | year2) + (1 | individ)
> Data: subdata
> AIC BIC logLik deviance df.resid
> 323.6696 334.5847 -158.8348 317.6696 278
> Random effects:
> Groups Name Std.Dev.
> individ (Intercept) 0.0003658
> year2 (Intercept) 0.0000000
> Number of obs: 281, groups: individ, 281; year2, 3
> Fixed Effects:
> (Intercept)
> 1.084
> optimizer (Nelder_Mead) convergence code: 0 (OK) ; 0 optimizer warnings; 1 lme4 warnings
>
>
>
> Juvenile observation model:
> Generalized linear mixed model fit by maximum likelihood (Laplace
> Approximation) [glmerMod]
> Family: binomial ( logit )
> Formula: obsstatus3 ~ sizea2 + (1 | year2) + (1 | individ)
> Data: subdata
> AIC BIC logLik deviance df.resid
> 61.3733 74.7617 -26.6867 53.3733 206
> Random effects:
> Groups Name Std.Dev.
> individ (Intercept) 53.003
> year2 (Intercept) 1.206
> Number of obs: 210, groups: individ, 210; year2, 3
> Fixed Effects:
> (Intercept) sizea2
> 12.83279 0.03985
> optimizer (Nelder_Mead) convergence code: 0 (OK) ; 0 optimizer warnings; 1 lme4 warnings
>
>
>
> Juvenile size model:
> Linear mixed model fit by REML ['lmerMod']
> Formula: sizea3 ~ sizea2 + (1 | year2)
> Data: subdata
> REML criterion at convergence: 1243.682
> Random effects:
> Groups Name Std.Dev.
> year2 (Intercept) 1.955
> Residual 5.995
> Number of obs: 193, groups: year2, 3
> Fixed Effects:
> (Intercept) sizea2
> 3.0848 0.8465
>
>
>
> Juvenile secondary size model:
> [1] 1
>
>
>
> Juvenile tertiary size model:
> [1] 1
>
>
>
> Juvenile reproduction model:
> [1] 1
>
>
>
> Juvenile maturity model:
> [1] 1
>
>
>
>
>
> Number of models in survival table: 2
>
> Number of models in observation table: 2
>
> Number of models in size table: 2
>
> Number of models in secondary size table: 1
>
> Number of models in tertiary size table: 1
>
> Number of models in reproduction status table: 1
>
> Number of models in fecundity table: 4
>
> Number of models in juvenile survival table: 2
>
> Number of models in juvenile observation table: 2
>
> Number of models in juvenile size table: 2
>
> Number of models in juvenile secondary size table: 1
>
> Number of models in juvenile tertiary size table: 1
>
> Number of models in juvenile reproduction table: 1
>
> Number of models in juvenile maturity table: 1
>
>
>
>
>
> General model parameter names (column 1), and
> specific names used in these models (column 2):
> parameter_names mainparams
> 1 time t year2
> 2 individual individ
> 3 patch patch
> 4 alive in time t+1 surv3
> 5 observed in time t+1 obs3
> 6 sizea in time t+1 size3
> 7 sizeb in time t+1 sizeb3
> 8 sizec in time t+1 sizec3
> 9 reproductive status in time t+1 repst3
> 10 fecundity in time t+1 fec3
> 11 fecundity in time t fec2
> 12 sizea in time t size2
> 13 sizea in time t-1 size1
> 14 sizeb in time t sizeb2
> 15 sizeb in time t-1 sizeb1
> 16 sizec in time t sizec2
> 17 sizec in time t-1 sizec1
> 18 reproductive status in time t repst2
> 19 reproductive status in time t-1 repst1
> 20 maturity status in time t+1 matst3
> 21 maturity status in time t matst2
> 22 age in time t age
> 23 density in time t density
> 24 individual covariate a in time t indcova2
> 25 individual covariate a in time t-1 indcova1
> 26 individual covariate b in time t indcovb2
> 27 individual covariate b in time t-1 indcovb1
> 28 individual covariate c in time t indcovc2
> 29 individual covariate c in time t-1 indcovc1
> 30 stage group in time t group2
> 31 stage group in time t-1 group1
>
>
>
>
>
> Quality control:
>
> Survival model estimated with 931 individuals and 2246 individual transitions.
> Survival model accuracy is 0.977.
> Observation status model estimated with 858 individuals and 2121 individual transitions.
> Observation status model accuracy is 0.903.
> Primary size model estimated with 845 individuals and 1916 individual transitions.
> Primary size model R-squared is 0.499.
> Secondary size model not estimated.
> Tertiary size model not estimated.
> Reproductive status model not estimated.
> Fecundity model estimated with 931 individuals and 2246 individual transitions.
> Fecundity model R-squared is 0.43.
> Juvenile survival model estimated with 281 individuals and 281 individual transitions.
> Juvenile survival model accuracy is 0.747.
> Juvenile observation status model estimated with 210 individuals and 210 individual transitions.
> Juvenile observation status model accuracy is 1.
> Juvenile primary size model estimated with 193 individuals and 193 individual transitions.
> Juvenile primary size model R-squared is 0.303.
> Juvenile secondary size model not estimated.
> Juvenile tertiary size model not estimated.
> Juvenile reproductive status model not estimated.
> Juvenile maturity status model not estimated.
At the very bottom of our output is a section labelled Quality control. We see, first of all, a statement of which of our fourteen possible vital rate models were estimated. For each estimated model, we see the number of individuals and actual transitions used to estimate the respective model. In general, the higher the number of individuals and transitions used to estimate the model, the better the quality of the model and the higher the statistical power. The former number, the number of individuals, particularly gives us a sense of the overall level of pseudoreplication that might be inherent in our analysis, since transitions from the same individual are obviously related and not statistically independent of one another.
Our output also includes information on the accuracy of binomial models and the simple R2 of size and fecundity models. Accuracy is calculated as the proportion of predicted responses from a binomial model equal to the actual responses given each data point. Simple R2 is calculated as $$1-\frac{\sum_i(y_i-E(y_i))^2}{\sum_i(y_i-\bar{y})^2}$$. Accuracy and simple R2 both vary from 0.0 to 1.0, and the higher the number, the better the quality of the model. What constitutes a “good” model is difficult to say, but prediction should probably not be attempted with vital rate models whose accuracy or simple R2 falls below 0.90. Naturally, such values may be difficult to achieve in most analyses.
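To make the simple R2 metric concrete, here is a minimal sketch of how it could be computed by hand, where y is a hypothetical vector of observed responses and y_hat is the corresponding vector of model predictions (neither name is a lefko3 output):
simple_r2 <- function(y, y_hat) {
  1 - sum((y - y_hat)^2) / sum((y - mean(y))^2) # matches the formula above
}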
The next method of assessing quality control focuses on the IPMs themselves. Let’s take a look at a summary of the ahistorical IPM.
summary(lathmat2ipm)
>
> This ahistorical lefkoMat object contains 3 matrices.
>
> Each matrix is square with 103 rows and columns, and a total of 10609 elements.
> A total of 26947 survival transitions were estimated, with 8982.333 per matrix.
> A total of 600 fecundity transitions were estimated, with 200 per matrix.
> This lefkoMat object covers 1 population, 1 patch, and 3 time steps.
>
> Vital rate modeling quality control:
>
> Survival estimated with 931 individuals and 2246 individual transitions.
> Observation estimated with 858 individuals and 2121 individual transitions.
> Primary size estimated with 845 individuals and 1916 individual transitions.
> Secondary size transition not estimated.
> Tertiary size transition not estimated.
> Reproduction probability not estimated.
> Fecundity estimated with 931 individuals and 2246 individual transitions.
> Juvenile survival estimated with 281 individuals and 281 individual transitions.
> Juvenile observation estimated with 210 individuals and 210 individual transitions.
> Juvenile primary size estimated with 193 individuals and 193 individual transitions.
> Juvenile secondary size transition not estimated.
> Juvenile tertiary size transition not estimated.
> Juvenile reproduction probability not estimated.
> Juvenile maturity transition probability not estimated.
>
> Survival probability sum check (each matrix represented by column in order):
> [,1] [,2] [,3]
> Min. 0.000 0.000 0.000
> 1st Qu. 0.994 0.970 0.996
> Median 1.000 1.000 1.000
> Mean 0.961 0.929 0.965
> 3rd Qu. 1.000 1.000 1.000
> Max. 1.000 1.000 1.000
Some of the output should be familiar, particularly the output related to the vital rate models. The key output for us to look at here is at the bottom, under Survival probability sum check. The columns in our U matrices should always sum to values between 0 and 1, because the sum of each column should equal the probability of survival from whatever stage the column is associated with in time t to time t+1, regardless of what stage the organism is in at the latter time. In at least three circumstances, these sums may be greater than 1.0, and the user would need to correct their IPMs in these cases to prevent odd analytical results and erroneous inferences.
1. The incorporation via supplement tables of fixed transition probabilities or proxy values that are too high. Fixing an IPM in this case would mean reducing these fixed or proxy values in the supplement table.
2. The use of the midpoint method in size transition probability estimation. The correct way to fix this is to use the cumulative density function (CDF) method instead of the midpoint method. Fortunately, lefko3 uses the CDF method by default.
3. The incorporation of sizes not observed, or representing strong outlier sizes. In these cases, it is possible for at least some of the resulting probabilities to be estimated at unnatural levels. The way to correct the problem is to remove the outlier size from the classification in the stageframe.
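We can also run this check by hand. A minimal sketch, assuming the standard lefkoMat structure in which element U holds the survival-transition matrices as a list:
u_colsums <- colSums(lathmat2ipm$U[[1]]) # stage-specific survival probabilities
summary(u_colsums) # sums should lie between 0 and 1
which(u_colsums > 1) # columns needing correction, if any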
Finally, there is at least one more method that we can use to assess the overall quality of an IPM: assessing its overall structure. The best way to do this is to inspect the elements themselves, perhaps by opening the IPM matrix in RStudio, or exporting it to Microsoft Excel or another spreadsheet program for assessment. A more visual approach, assessing just the structure itself, is to use the image3() function, which provides users with a means of assessing whether the overall structure of the model “looks right”. For example, here we look at an image of the 1st matrix in our ahistorical IPM (figure 7.5).
image3(lathmat2ipm, used = 1)
> [[1]]
> NULL
We can also focus in on just the survival or fecundity transitions, as below. Note that in an ahistorical IPM, we expect all fecundity values to be located toward the top of the matrix (figure 7.6).
image3(lathmat2ipm, used = 1, type = "F")
> [[1]]
> NULL
Other approaches to quality control are provided for other aspects of matrix analysis in lefko3.
At this point, we may move ahead and analyze the IPMs in the same ways that we might analyze other kinds of MPMs.
## 7.4 Points to remember
1. IPMs assume a continuous distribution for size and integrate vital rates across this size metric, typically assuming a Gaussian size distribution. While theoretically different from MPMs, IPMs are ultimately discretized in ways that allow them to be created and analyzed exactly like function-based MPMs.
2. Package lefko3 allows even extremely large numbers of size bins to be developed across the range of a continuous size metric, using the "ipm" shorthand in the sf_create() function. With only a few lines of code, a stageframe with hundreds of discretized size bins can be created and used to build an IPM for analysis.
3. Quality control tools include the linear model accuracy and pseudo-R2 output from modelsearch(), the output from summary() calls for IPMs created with flefko2() or flefko3(), and visualization with image3().
### References
Broek, J. van den. (1995). A score test for zero inflation in a Poisson distribution. Biometrics, 51, 738–743.
Doak, D.F., Waddle, E., Langendorf, R.E., Louthan, A.M., Isabelle Chardon, N., Dibner, R.R., et al. (2021). A critical comparison of integral projection and matrix projection models for demographic analysis. Ecological Monographs, 91, e01447.
Easterling, M.R., Ellner, S.P. & Dixon, P.M. (2000). Size-Specific Sensitivity: Applying a New Structured Population Model. Ecology, 81, 694–708.
Ehrlén, J. (2000). The dynamics of plant populations: Does the history of individuals matter? Ecology, 81, 1675–1684.
Ellner, S.P. & Rees, M. (2006). Integral projection models for species with complex demography. American Naturalist, 167, 410–428.
Merow, C., Dahlgren, J.P., Metcalf, C.J.E., Childs, D.Z., Evans, M.E.K., Jongejans, E., et al. (2014). Advancing population ecology with integral projection models: A practical guide. Methods in Ecology and Evolution, 5, 99–110.
Metcalf, C.J.E., McMahon, S.M., Salguero-Gómez, R. & Jongejans, E. (2013). IPMpack: An R package for integral projection models. Methods in Ecology and Evolution, 4, 195–200.
http://search.howtofixtheelection.com/bqpr.html | # Searching for Parliament
(the computer science kind of search)
Selecting a group of people to represent a nation is an example of what computer science calls an assignment search problem. Which reps (candidates) are the most representative of the voters?
We'll construct a mathematical model of the assignment search problem, implement this model in the Python programming language with an optimization package called Gurobi, and compute and visualize an optimal solution.
The implementation of the model and the structure of this explanation are forked from the work of Emilien Dupont (on github and hosted as a featured example on Gurobi's website). The code for this website is on github.
## Problem Description
Suppose Northern England gets rid of district boundaries because there was too much debate about where the boundaries should go. It wants to pick 5 reps, and it wants to space them out instead of having them all come from the middle. It wants regional interests to be represented, and it doesn't want all the seats taken by people from the same neighborhood in the middle of Northern England.
Several good reps (let's say "reps" instead of "potential representatives" or "candidates") have decided to run, but it remains to decide which of the reps should win.
Selecting reps from the middle of Northern England would be advantageous as they would have the most votes overall. However, voters from the middle of Northern England would be overrepresented and voters from the coast would be very far from their reps.
We will find the optimal tradeoff between reps being near all the voters and voters having reps nearby.
### Search Trees
Our problem is related to a very famous problem in computer science called the knapsack problem. It differs in many ways, but it is similar in how it can be solved: both are optimization problems over binary variables. A search problem tries to find the combination of variables that is, by some measure, the best one. One way to approach such a problem is the branch and bound method. Here is a video on the branch and bound method from a fantastic Coursera course in the field of discrete optimization. From this video, just understand that there is a large number of possible choices to make and that they can all be organized in a tree structure. (The actual algorithm for searching through the tree will be taken care of by Gurobi, and we don't talk about it below.) We still haven't said what the score is for each choice (or path) in the tree, so let's do that; most of the writing below is about that. Each choice in the tree represents whether a rep wins: the first split is for the first rep, the second split is for the second rep, etc.
This tree is not for the problem at hand, but our problem is similar because it has the same binary variables $x_i$.
## Mathematical Model
Let us now formulate a mathematical model for our problem.
We need to say when we choose a rep. Let's list all the reps as rep #1, rep #2, rep #3, etc., and let's make a variable for each one. If we choose rep #1, then $x_1=1$, and if we don't choose rep #2, then $x_2=0$. Basically, $x_j$ is binary. $x_j = \left\{\begin{array}{ll} 1 & \text{if we choose rep #j,}\\ 0 & \text{otherwise.} \end{array}\right.$ We set a constraint on the $x_j$ variables because there are only a limited number of seats in the legislature: the sum of all $x_j$ must equal the number of seats. $\sum_{j \in Reps} x_j = \text{number of seats}$
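In gurobipy, the binary choice variables and the seat constraint might look like the following sketch (the names num_reps and num_seats are illustrative, not taken from the original code):

```python
from gurobipy import Model, GRB, quicksum

num_reps = 20   # hypothetical number of candidates
num_seats = 5   # seats to fill

m = Model("parliament")
# x[j] = 1 if rep j wins a seat, 0 otherwise
x = m.addVars(num_reps, vtype=GRB.BINARY, name="x")
# exactly num_seats reps may win
m.addConstr(quicksum(x[j] for j in range(num_reps)) == num_seats)
```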
We need to say how close a voter is to a rep. Let's list all the voters as voter #1, voter #2, etc, and let's make measurements for each voter and rep combination. If the distance between voter #1 and rep #3 is 100 km, then $d_{13}=100$. $d_{ij} = \text{distance between voter i and rep j}$
What does a ballot look like? We want to make this problem easy to show on a map, so we're going to make a simplification of the problem. We're going to let the voters give the reps scores. This is actually nicer for the voters because they get to say what they think of every rep. Also, we're going to make every voter use the same scoring scale, and they will base their score entirely on how far away the rep is. Basically, the inverse of distance. $\begin{array}{lll} a_{ij} & = {\displaystyle \frac{1}{d_{ij}/10 + 1} } & \text{Inverting distance} \\ b_{ij} & = {\displaystyle a_{ij} * \frac{1}{\max_{k \in Reps}{a_{ik}}}} & \text{Ballot normalized} \end{array}$
(The 10 is here for scaling distance. The extra 1 is here to avoid dividing by 0. Really, there are many choices for this function. One more modification we make is the normalization: it lets voters who don't have a very close rep still cast their full vote for their closest rep, so that each voter gives the rep closest to them the highest score possible even if that rep isn't actually all that close.)
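As a sketch, the scoring above can be computed from a distance matrix with NumPy; here d is a hypothetical voters-by-reps array of distances in km:

```python
import numpy as np

def ballots_from_distances(d):
    """Score reps by inverse distance, then normalize each voter's
    scores so their closest rep receives the maximum score of 1."""
    a = 1.0 / (d / 10.0 + 1.0)            # inverting distance
    b = a / a.max(axis=1, keepdims=True)  # ballot normalized per voter
    return b
```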
We need a way to count the ballots. The easiest way is to add them together to get the reps' tallies. $tally_j = \sum_{i \in Voters} b_{ij}$
If we stop here then we just end up picking the reps in the middle because they have the highest tallies.
We want to avoid overrepresenting the middle voters. Our solution is to let the winning reps keep a fraction of their voters' ballots. Basically, these voters have the reps, so the reps get to keep the voters. Each winning rep keeps the same amount of support. $keep = \left[ \begin{array}{ll} & \text{an amount of score,} \\ & \text{same units as ballots} \\ & \text{same for each pair of reps} \\ \end{array} \right]$
The idea of the rep keeping some ballots is not new. The election method called Single Transferable Vote (STV) does the same thing. It is different because it uses rankings rather than scores, but it is still very similar. STV calls the kept amount a quota. When a rep is declared a winner, he keeps a quota of votes, and the extra votes go to other, similar reps if the voters have both on their ballot. $\text{STV quota} = \frac{\text{voters}}{\text{seats} + 1}$ The quota is set as the number of voters divided by the number of seats to fill with reps. An additional 1 is added to the number of seats because of the iterative nature of STV (a longer explanation is needed).
Let's try something similar: $keep = \frac{\text{voters}}{\text{seats}}$ STV subtracts this for each pair of reps when one of the pair wins and they are supported by the same group of voters. We want to subtract this for each similar pair, and we want to show why it selects the correct number of reps for a group of voters.
### Mini example to show proportionality
Ok, we'll show an example. We can actually see the method at work. \begin{align} 3& \text{ seats} \nonumber \\ 6& \text{ voters} \nonumber \\ 2& \text{ voters for party A: A1,A2,A3,...} \nonumber \\ 4& \text{ voters for party B: B1,B2,B3,...} \nonumber \\ \end{align} Here are the ballots for just the parties: $\begin{bmatrix} A: & 1 & 1 & 0 & 0 & 0 & 0 \\ B: & 0 & 0 & 1 & 1 & 1 & 1 \end{bmatrix}$ Here are the tallies for each of the reps: $\begin{bmatrix} A_1: & 2 \\ A_2: & 2 \\ B_1: & 4 \\ B_2: & 4 \\ \end{bmatrix}$ We can also write the tallies along the diagonal of this table $\begin{bmatrix} &A_1 &A_2 &B_1 &B_2 \\ A_1: & 2 & & & \\ A_2: & & 2 & & \\ B_1: & & & 4 & \\ B_2: & & & & 4 \\ \end{bmatrix}$ Keeps happen when two reps from the same party get elected. For every pair, there is a table entry, so let's fill in those entries with keeps. $keep = \frac{\text{voters}}{\text{seats}} = \frac{\text{6}}{\text{3}} = 2$ There are two table entries for each pair, so we divide this $keep$ between them. $\left[ \begin{array}{rr} &A_1 &A_2 &B_1 &B_2 \\ A_1: & 2 & -1 & & \\ A_2: & -1 & 2 & & \\ B_1: & & & 4 & -1 \\ B_2: & & & -1 & 4 \\ \end{array} \right]$ We want to choose the 3 reps with the most votes. So we try each combination, and for each we add the tallies and subtract the keeps.
1 from A and 2 from B is the best combination: $\left[ \begin{array}{rr} &A_1 & \text{ } &B_1 &B_2 \\ A_1: & 2 & & & \\ & & & & \\ B_1: & & & 4 & -1 \\ B_2: & & & -1 & 4 \\ \end{array} \right] = 2+4+4-1-1=8$ The keeps can be drawn on a Venn diagram. There are 4 voters here. First take 1 from the shared voter pool for one rep. Then take 1 for the other.
This animation shows two reps that share the same group of voters. On the right, it shows how setting a keep level takes voters from supporting both to supporting just one.
2 from A is not as good because a disproportionate number of winners come from A: $\left[ \begin{array}{rr} &A_1 &A_2 &B_1 & \text{ } \\ A_1: & 2 & -1 & & \\ A_2: & -1 & 2 & & \\ B_1: & & & 4 & \\ & & & & \\ \end{array} \right] =2+2+4-1-1=6$ If there were 3 winners from B, the total would again be only 6. $\left[ \begin{array}{rr} & & \text{ } &B_1 &B_2& B_3 \\ & & & & & \\ & & & & & \\ B_1: & & & 4 & -1 & -1 \\ B_2: & & & -1 & 4 & -1 \\ B_3: & & & -1 & -1 & 4 \\ \end{array} \right] = 4+4+4-1-1-1-1-1-1=6$ The best combination was the one where the proportion of votes matched the proportion of seats. That means the way we count votes is proportional. Also notice we lost 2 points whether we went forward or back, so we're right on the maximum. We can get proportionality by just maximizing the sum of this table. We also gained a nice table graphic for counting votes.
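We can check these totals by brute force in Python. A small sketch enumerating every possible set of 3 winners and scoring it as tallies minus keeps (B3 is given the same tally of 4, as in the three-winner table above):

```python
from itertools import combinations

tally = {"A1": 2, "A2": 2, "B1": 4, "B2": 4, "B3": 4}
party = {rep: rep[0] for rep in tally}
keep = 6 / 3  # voters / seats

def score(winners):
    # tallies of the winners, minus one full keep per same-party pair
    total = sum(tally[r] for r in winners)
    for r, s in combinations(winners, 2):
        if party[r] == party[s]:
            total -= keep
    return total

best = max(combinations(tally, 3), key=score)
print(best, score(best))  # ('A1', 'B1', 'B2') 8.0
```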
### Proportionality
Let's do some maximization to show that this is the right number of keeps. It might seem that each rep should hold on to the keeps rather than splitting them, so we will find out whether the amount of keeps should be some other number. \begin{align} C_t& \text{ seats} \nonumber \\ N_t& \text{ voters} \nonumber \\ N_A& \text{ voters for party A: A1,A2,A3,...} \nonumber \\ N_B& \text{ voters for party B: B1,B2,B3,...} \nonumber \\ C_A& \text{ number of winners for party A} \nonumber \\ C_B& \text{ number of winners for party B} \nonumber \\ \end{align} For each party, let's look at the winners, add the tallies, and subtract the keeps: $\left[ \begin{array}{rr} & A_1 & A_2 & \text{...} & A_{C_A} \\ A_1: & N_A & -K/2 & & -K/2 \\ A_2: & -K/2 & N_A & & -K/2 \\ \text{...} & & & & \\ A_{C_A}: & -K/2 & -K/2 & & N_A \\ \end{array} \right] = N_A*C_A-\frac{1}{2}K*C_A*(C_A-1)$ For both parties, $Obj = N_A*C_A-\frac{1}{2}K*C_A*(C_A-1) + N_B*C_B-\frac{1}{2}K*C_B*(C_B-1)$ Because the total number of seats is fixed, $C_B = C_t - C_A$, so $\frac{dC_B}{dC_A} = -1$. Let's maximize this objective function on $C_A$ by setting its derivative to $0$ and solving for $K$: \begin{align} \frac{d \ Obj}{d \ C_A} &= 0 \nonumber \\ \nonumber \\ 0 &= N_A-N_B-K*C_A+K*C_B \nonumber \\ \nonumber \\ K &= \frac{N_A-N_B}{C_A-C_B} \nonumber \end{align} Now let's enforce proportionality. Proportionality means every party has the same number of voters per rep: $\frac{N_A}{C_A}=\frac{N_B}{C_B}=\frac{N_t}{C_t}$ So we substitute and do algebra to get a confirmation of what we already guessed. $K =\frac{N_A-N_B}{C_A-C_B}=\frac{N_t}{C_t}=\frac{\text{voters}}{\text{seats}} \nonumber \\$ $keep =\frac{\text{voters}}{\text{seats}} \nonumber$ So we know we have a proportional system.
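The calculus above can be verified symbolically. A sketch using SymPy, with $C_B$ substituted as $C_t - C_A$ so the seat constraint is respected:

```python
import sympy as sp

NA, NB, K, CA, Ct = sp.symbols("N_A N_B K C_A C_t")
CB = Ct - CA  # seat constraint: C_A + C_B = C_t
obj = (NA*CA - K/2*CA*(CA - 1)) + (NB*CB - K/2*CB*(CB - 1))
# set the derivative with respect to C_A to zero and solve for K
print(sp.solve(sp.diff(obj, CA), K))  # [(N_A - N_B)/(2*C_A - C_t)]
```

Since $2C_A - C_t = C_A - C_B$, this matches the result above.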
### Finishing the Model: Similarity
What if the voters don't fall into parties?
A party is just a group of people who vote the same. Take an example: say rep A and rep B claim to be from different parties, but a voter votes for both. How do we account for proportionality?
Take an example of 3 voters. One for A, one for B, and one for both: $\text{A partisan voter for A: } \ \begin{bmatrix} A: & 1 \\ B: & 0 \end{bmatrix}$ $\text{A partisan voter for B: } \ \begin{bmatrix} A: & 0 \\ B: & 1 \end{bmatrix}$ $\text{A voter for A and B: } \ \begin{bmatrix} A: & 1 \\ B: & 1 \end{bmatrix}$ Let's look at all the ballots together. We have 3 kinds of voter. 1 out of 3 treats A and B as a single party. It is like A and B are part of the same party, the AB party: $\text{Three voter pools: } \ \begin{bmatrix} A: & 1 & 1 & 0 \\ B: & 0 & 1 & 1 \end{bmatrix} \quad \rightarrow \begin{bmatrix} AB: & 1 \\ A: & 1 \\ B: & 1 \\ \end{bmatrix}$ How do we account for this in our bookkeeping about kept votes? 1/3 of the votes for A or B were for both A and B. So we reduce the number of kept votes to 1/3 of the usual. $\frac{1}{3} = \frac{0 + 1 + 0}{1 + 1 + 1}$ This measure is actually common. It is called the Jaccard similarity. Stated mathematically, \begin{align} \nonumber \\ s_{jk} & = \frac{\sum_{i \in Voters} min(b_{ij}, b_{ik})}{\sum_{i \in Voters} max(b_{ij}, b_{ik})} \text{ } \text{ } \text{ } \text{ (Similarity between rep j and rep k)}\nonumber \\ \end{align}
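A tiny numeric sketch of that similarity on the three ballots above (our own illustration):

```python
import numpy as np

A = np.array([1, 1, 0])  # columns are the three voters
B = np.array([0, 1, 1])

def jaccard(a, b):
    return np.minimum(a, b).sum() / np.maximum(a, b).sum()

print(jaccard(A, B))  # 0.333...: 1 of the 3 A-or-B voters supports both
```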
#### Mixture Example
Let's try to justify this more and build an intuition by working through an example: $\text{mixture example: } \ \begin{bmatrix} A: & 1 & 1 & 1 & 1 & 0 & 0 \\ B: & 0 & 0 & 1 & 1 & 1 & 1 \end{bmatrix} \quad \rightarrow \begin{bmatrix} AB: & 2 \\ A: & 2 \\ B: & 2 \\ \end{bmatrix}$ This is actually a mixture of two extremes. A 1-party extreme where A and B are actually from the same party, and a 2 party extreme where A and B are from different parties: $\text{1-party extreme: } \ \begin{bmatrix} A: & 1 & 1 & 1 & 1 & 1 & 1 \\ B: & 1 & 1 & 1 & 1 & 1 & 1 \end{bmatrix} \quad \rightarrow \begin{bmatrix} AB: & 6 \\ A: & 0 \\ B: & 0 \\ \end{bmatrix}$ $\text{2-party extreme: } \ \begin{bmatrix} A: & 1 & 1 & 1 & 0 & 0 & 0 \\ B: & 0 & 0 & 0 & 1 & 1 & 1 \end{bmatrix} \quad \rightarrow \begin{bmatrix} AB: & 0 \\ A: & 3 \\ B: & 3 \\ \end{bmatrix}$ Across both extreme examples, the same number of candidates win from each party. So the middle example should have the same result. $\text{different examples, same result: } \ \begin{bmatrix} & \text{2P:} & \text{M:} & \text{1P:}\\ AB: & 0 & 2 & 6 \\ A: & 3 & 2 & 0 \\ B: & 3 & 2 & 0 \\ \end{bmatrix}$ To visualize what is happening, we draw a Venn diagram. Area is proportional to votes. Even if we change the proportion of shared votes, we are still representing the same voters and they should get the same number of reps:
When we use Jaccard similarity to figure the number of keeps, we get the same composition of winners no matter how much the voters split their support. We can include this similarity in our venn diagram from before, which only applied to identical reps (clones).
(Venn diagrams: Identical Reps on the left, Similar Reps on the right)
Next we will see how to include Jaccard similarity into our calculation.
Let's include this similarity measure in our model. We need to adjust the number of votes that are kept when a pair of candidates share the same voter support:

$keeps_{jk} = keep * s_{jk} \nonumber$

We only do this when both reps win, so we want an indicator for when both reps win. Remember that $x$ is binary and it indicates the winners.

$\begin{array}{l} x_j * x_k & \text{Indicates when both reps have won} \end{array}$

We can now refine our model to maximize the following, called an objective function:

$\text{Maximize} \sum_{j \in Reps} tally_j * x_j - \sum_{j \in Reps,k \neq j} \frac{1}{2} keeps_{jk} * x_j * x_k$

This expression is just the sum of the table, so it's basically just letting us add up the tally and subtract the keeps for the winning reps. The 1/2 factor is in there because there are 2 entries, $keeps_{jk}$ and $keeps_{kj}$, for each pair. So! We have a new and better table to add up and maximize! It is better because now we don't need parties! It's the voters that matter, not the parties.
### The model is actually like a sudoku
There is a really easy way to visualize what the model is trying to do. We reduced the problem to crossing out the rows and columns in a table. Cross out the row and column for each rep that lost. Add the remaining tallies and keeps. Try to find the 4 winners by crossing out the losing rep's row and column. $\begin{bmatrix} 10 & -4 & -3 & 0 & -1 \\ -4 & 10 & -3 & 0 & -1 \\ -3 & -3 & 8 & -2 & 0 \\ 0 & 0 & -2 & 11 & 0 \\ -1 & -1 & 0 & 0 & 17 \\ \end{bmatrix}$ The solution is to cross out the 3rd rep's row and column. He had the lowest tally overall and he also overlapped in similarity with most of the other reps, so basically his territory was covered better by the other reps.
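A quick way to verify that choice is to score every possible cross-out; a short sketch (ours):

```python
import numpy as np
from itertools import combinations

T = np.array([
    [10, -4, -3,  0, -1],
    [-4, 10, -3,  0, -1],
    [-3, -3,  8, -2,  0],
    [ 0,  0, -2, 11,  0],
    [-1, -1,  0,  0, 17],
])

best = max(combinations(range(5), 4), key=lambda w: T[np.ix_(w, w)].sum())
print(best, T[np.ix_(best, best)].sum())
# (0, 1, 3, 4) 36, i.e. the 3rd rep (index 2) is the one crossed out
```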
### Formal Model
To sum up, the model maximizes a sum of linear and quadratic terms involving binary variables $x_j$ and a table of measurements. There is a constraint that there are a specified number of winners. Gurobi takes care of the search algorithm to find the maximum among a really large number of combinations (think factorial!).
The problem is defined by the following model in the variable $x_j$ : $\begin{array}{ll} \text{Maximize} & {\displaystyle \sum_{j \in Reps} tally_j * x_j - \sum_{j \in Reps,k \neq j} \frac{1}{2} keeps_{jk} * x_j * x_k } \\ \\ \text{Subject to} & {\displaystyle \sum_{j \in Reps} x_j} = \text{number of seats} \\ & x_j \in \{ 0, 1 \} \\ \\ \end{array} \\ \begin{array}{lrll} \text{Constants } & keeps_{jk} & = keep * s_{jk}& \text{Rep j keeps these votes from rep k}\\ \\ & s_{jk} & = {\displaystyle \frac{\sum_{i \in Voters} min(b_{ij}, b_{ik})}{\sum_{i \in Voters} max(b_{ij}, b_{ik})} }& \text{Similarity between reps j and k} \nonumber \\ \\ & keep & = {\displaystyle \frac{\text{voters}}{\text{seats}} } & \text{Amount of ballots kept (if s=1, e.g. for clones)}\\ \\ & tally_j & = {\displaystyle \sum_{i \in Voters} b_{ij} } & \text{Add up the ballots}\\ \\ & b_{ij} & = {\displaystyle a_{ij} * \frac{1}{\max_{m \in Voters}{a_{mj}}}} &\text{Ballot} \\ \\ & a_{ij} & = {\displaystyle \frac{1}{d_{ij}/10 + 1} } & \text{Inverting distance}\\ \\ & d_{ij} & & \text{Distance between voter i and rep j on the map} \end{array}$
## Implementation
Below is an example implementation of the model with example data in Gurobi's Python interface:
For the full implementation, see https://github.com/paretoman/searchingforparliament.
I forked this project from Emilien Dupont; it is on GitHub and hosted as a featured example on Gurobi's website.
I suggest using conda to install it. And then there is a license to download, so you have to make an account on the gurobi website. I used Python 2.
```python
from gurobipy import *
import math
import numpy

# Problem data
voters = [[c1, c2] for c1 in range(10) for c2 in range(10)]
reps = [[f1*3 + 1.5, f2*3 + 1.7] for f1 in range(3) for f2 in range(3)]
numReps = len(reps)
numVoters = len(voters)

# Add variables
m = Model()
x = {}
for j in range(numReps):
    x[j] = m.addVar(vtype=GRB.BINARY, name="x%d" % j)

# Add constants
numWinners = 5
d = numpy.zeros((numVoters, numReps))
a = numpy.zeros((numVoters, numReps))
b = numpy.zeros((numVoters, numReps))
s = numpy.zeros((numReps, numReps))
t = numpy.zeros(numReps)

def distance(a, b):
    dx = a[0] - b[0]
    dy = a[1] - b[1]
    return math.sqrt(dx*dx + dy*dy)

for i in range(numVoters):
    for j in range(numReps):
        d[i, j] = distance(voters[i], reps[j])

a = 1 / (d/10 + 1)
for i in range(numVoters):
    b[i, :] = a[i, :] / max(a[i, :])

keep = numVoters / numWinners

def jaccard_similarity(a, b):
    return numpy.sum(numpy.minimum(a, b)) / numpy.sum(numpy.maximum(a, b))

for j in range(numReps):
    t[j] = sum(b[:, j])
    for k in range(numReps):
        s[j, k] = jaccard_similarity(b[:, j], b[:, k])

m.update()

# Add constraints
m.addConstr(quicksum(x[j] for j in range(numReps)) == numWinners)

d_obj = LinExpr()
for j in range(numReps):
    d_obj += t[j]*x[j]
    for k in range(numReps):
        if k != j:
            d_obj += -.5*keep*s[j, k]*x[j]*x[k]

m.setObjective(d_obj, GRB.MAXIMIZE)
m.optimize()

# Output
print(["%d" % x[j1].X for j1 in range(9)])
```
## Live Demo
Below is a visualization of our example. We are using the location data from GeoLytix for a large supermarket chain in the UK, and visualizing its outlets in Northern England. (This is an approximation to population distribution.)
The voters are drawn on the map with one marker. By clicking the map you can add rep locations, which are drawn with a second marker. Click "Compute Winners" to find the winners, which are drawn with a third marker.
A few rep locations have already been set up, but you can add more by clicking the screen.
https://quant.stackexchange.com/questions/37306/log-returns-volatility-outperformance-sharpe-information-ratios | # Log returns: volatility, outperformance, Sharpe/information ratios
I have developed the habit of simply stating that a 21% return compared to a 10% benchmark return means that the outperformance was 10% (not 11%). So, treating the whole thing in a multiplicative way, as opposed to taking the differences.
Also, when I use standard deviations, I take them from the log returns. And then just have the result of that be the volatility of the investment/portfolio (without clarifying that it is the standard deviation of the log returns).
Together this gives me (variants of) Sharpe/information ratios. However, this is not how, say, Wikipedia defines such ratios (going off regular returns).
Is this an unconventional habit and/or am I doing it right?
The Sharpe ratio is typically computed on relative returns, not log returns. The reason is that you get a bigger number! (This is salesmanship for fund managers.) Consider the equation linking relative returns to log returns: $$l = \log(1 + r).$$ For valid values of relative return ($r > -1$) it is simple to prove that $l \le r$. Thus log returns have a lower mean than relative returns.
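As a quick illustration with simulated data (our sketch; the parameters are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(0)
r = rng.normal(0.01, 0.05, 100_000)  # simulated monthly relative returns
l = np.log1p(r)                      # log returns: l = log(1 + r) <= r

sharpe = lambda x: x.mean() / x.std()
print(sharpe(r), sharpe(l))  # the relative-return ratio comes out higher here
```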
https://leanprover-community.github.io/archive/stream/116395-maths/topic/subring.html | ## Stream: maths
### Topic: subring
#### Kevin Buzzard (Jul 20 2020 at 16:24):
So for better or worse, we have subgroup and is_subgroup, and the same for monoids. One advantage of doing things the way we did them (didn't remove is_X but added X anyway) was that is_X became available immediately. Another advantage is that in both cases I got an Imperial student to write is_X and in both cases I believe the student learnt a lot. I am proposing doing the same with subrings now. We don't have subring, we do have is_subring, so I'm proposing making a new file subring.lean, moving the current subring.lean to is_subring.lean and getting a student to bundle subrings so they can have access to bundled subrings for another project.
The refactoring is happening. Scott and I (mostly Scott so far) are working on removing is_subgroup from a bunch of files, in WIP PR #3321 . The advantage of the procedure I'm outlining above is that students of mine can access subring immediately.
Is there anyone who thinks this is a terrible idea, or knows of someone who has started on this already?
#### Anne Baanen (Jul 20 2020 at 20:57):
I saw that @Yury G. Kudryashov recently defined bundled subsemirings.
#### Kevin Buzzard (Jul 20 2020 at 22:33):
Yury, would it be a good first PR to make subrings on top of this?
#### Yury G. Kudryashov (Jul 22 2020 at 05:57):
I think yes. It's mostly about replacing add_submonoid by add_subgroup everywhere.
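For readers who haven't seen the bundled style under discussion, here is a rough Lean 3 sketch of what a bundled subring looks like (illustrative only: the field names are hypothetical and mathlib's actual definition differs in detail):

```lean
-- Illustrative sketch, not mathlib's exact definition.
structure subring (R : Type*) [ring R] :=
(carrier   : set R)
(one_mem'  : (1 : R) ∈ carrier)
(mul_mem'  : ∀ {a b : R}, a ∈ carrier → b ∈ carrier → a * b ∈ carrier)
(zero_mem' : (0 : R) ∈ carrier)
(add_mem'  : ∀ {a b : R}, a ∈ carrier → b ∈ carrier → a + b ∈ carrier)
(neg_mem'  : ∀ {a : R}, a ∈ carrier → -a ∈ carrier)
```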
Last updated: May 19 2021 at 02:10 UTC
https://www.hpmuseum.org/forum/thread-8209-post-74180.html | Programming puzzles: processing lists!
05-28-2017, 07:10 PM
Post: #101
DavidM Senior Member Posts: 873 Joined: Dec 2013
RE: Programming puzzles: processing lists!
(05-28-2017 06:31 PM)John Keith Wrote: DOH!
I realized what the problem was: I had a program in my HOME directory called LREPL, a similar program I had been working on a few weeks ago. I renamed that program and everything is working fine now. ;-)
Homer Simpson (aka John)
Mystery solved! Thanks for keeping me from beating my head against the wall trying a bunch more tests in an effort to track it down. No time was wasted.
Here's some brief descriptions of a few more commands I've created:
S→NL (String to Numeric List)
Converts string to a list of numbers. The numbers are the same as if you executed "NUM" on each character of the string in sequence.
Ex.: "ABCDEFG" => { 65. 66. 67. 68. 69. 70. 71. }
NL→S (Numeric List to String)
Reciprocal command to the above.
LCLLT (List Collate)
Given a list of lists, returns a single list with the contents of each sublist extracted one item at a time in sequence. A picture is easier to understand than the description:
{ { 1 1 1 } { 2 2 2 } { 3 3 3 } } => { 1 2 3 1 2 3 1 2 3 }
LDST (List Distribute)
Reciprocal of the above; needs a "groups" argument. For the above example, the group count is 3, so:
{ 1 2 3 1 2 3 1 2 3 } 3 => { { 1 1 1 } { 2 2 2 } { 3 3 3 } }
(you can also think of LDST as "List Deal", because it is analogous to dealing a deck of cards into "groups" hands)
LSPLT (List Split)
Splits a list into two lists where the length of the first sublist is given as a numeric argument. The result is a list of the two lists. The number given must be less than or equal to the length of the list, and also non-negative. If equal, the list and an empty list are returned. If 0, an empty list and the list are returned.
LSDIV (List Subdivide)
Given a list and a number, divides the list into the number of sublists indicated. The number must evenly divide the length of the list.
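To pin down the collate/deal semantics, here is a small Python model of LCLLT and LDST (our own illustration of the behavior described above, not the actual library code):

Code:
from itertools import chain

def lcllt(lists):
    # LCLLT: interleave equal-length sublists one element at a time
    return list(chain.from_iterable(zip(*lists)))

def ldst(lst, groups):
    # LDST: deal a flat list back into the given number of sublists
    return [lst[i::groups] for i in range(groups)]

print(lcllt([[1, 1, 1], [2, 2, 2], [3, 3, 3]]))  # [1, 2, 3, 1, 2, 3, 1, 2, 3]
print(ldst([1, 2, 3, 1, 2, 3, 1, 2, 3], 3))      # [[1, 1, 1], [2, 2, 2], [3, 3, 3]]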
05-28-2017, 08:03 PM
Post: #102
pier4r Senior Member Posts: 2,111 Joined: Nov 2014
RE: Programming puzzles: processing lists!
The last commands seems pretty interesting, surely a possible new challenge is to compare your library with existing ones. Nice work!
Wikis are great, Contribute :)
05-28-2017, 08:42 PM
Post: #103
DavidM Senior Member Posts: 873 Joined: Dec 2013
RE: Programming puzzles: processing lists!
(05-28-2017 08:03 PM)pier4r Wrote: The last commands seems pretty interesting, surely a possible new challenge is to compare your library with existing ones. Nice work!
I actually started looking at the GoferList library (finally) after completing these last few commands. It's got a very nice set of commands! I was intrigued to find the "Zip" commands there, which are somewhat similar to my LCLLT and LDST. There are some differences, but I think I could still compare them by combining some commands together.
I think I'll take a break from creating new commands for a bit and do some performance testing to compare similar GoferList commands with the ones I've already done.
05-28-2017, 09:59 PM
Post: #104
pier4r Senior Member Posts: 2,111 Joined: Nov 2014
RE: Programming puzzles: processing lists!
(05-28-2017 08:42 PM)DavidM Wrote: I think I'll take a break from creating new commands for a bit and do some performance testing to compare similar GoferList commands with the ones I've already done.
If you do it, please post the results, otherwise I'll do eventually the same.
Wikis are great, Contribute :)
05-31-2017, 04:02 PM
Post: #105
pier4r Senior Member Posts: 2,111 Joined: Nov 2014
RE: Programming puzzles: processing lists!
Inspired by the commands of DavidM (which I still have to compare to GoferList or use for solutions; they are meant for that), I added some more challenges in the first post.
If you find errors of whatever kind, please report them.
Wikis are great, Contribute :)
05-31-2017, 05:44 PM (This post was last modified: 05-31-2017 05:46 PM by DavidM.)
Post: #106
DavidM Senior Member Posts: 873 Joined: Dec 2013
RE: Programming puzzles: processing lists!
(05-31-2017 04:02 PM)pier4r Wrote: Inspired by the commands of DavidM (that still I have to compare to GoFerList or to use for solutions, they are meant for it) I added some more challenges in the first post.
I'm still in the process of doing some performance testing on GoferList commands as compared to similar commands from my in-progress library, so it may take a few days before I can start to dig into your newest challenges.
I can share some preliminary findings of the performance testing, with the promise of more details to come:
Commands that are functionally equivalent are mostly on par with each other, though almost all of my library commands have a very slight speed advantage over their GoferList counterparts. Commands in this category include Copy/LMRPT, Repeat/LNDUP, Split/LSPLT, Chars/S->SL, and Strcat/LSUM.
There are a couple of exceptions to that rule, though:
GoferList's Nub performed better on larger lists than LDDUP, at least on my test data (which was intentionally chosen to have 50% duplicates). The reverse was true for lists of 0-100 elements, where LDDUP had the advantage. At 1000 elements, Nub averaged 78.4 seconds vs. LDDUP at 135.7 seconds. Since LDDUP is essentially a wrapper around a ROM routine, I'm thinking it might be worth coming up with my own version for comparison.
GoferList's Copy is orders of magnitude slower than LMRPT, and the performance degradation is stark as the quantity of element duplication grows. When duplicating a list of 1 item 1000 times (the worst case in my tests), Copy required 528.8 seconds. LMRPT completed the same task in 0.11 seconds.
I'm still in the process of comparing some others, but it's taking a bit more time because the commands aren't exactly the same. To get a more complete picture, I do two separate test suites for each of these: For the first I use a sequence of UserRPL commands (based on my library) to compare to a single GoferList command, and then I reverse the perspective and use a single command from my library to compare with a sequence of GoferList/UserRPL commands. There's been some interesting results with these tests, and a couple of surprises. Commands in this category are Chars/S->NL, Strcat/NL->S, Zip/LCLLT, and Unzip/LDST.
Then there's all the remaining commands that have no counterparts. That's a large list from GoferList, but only a few from my library (LCNT, LEQ, LGRP, LREPL, LROT, LRPCT, LSDIV, LSHF, LSWP). That may grow over time, though. The results of testing so far have convinced me that there's enough of a value proposition to warrant further development effort for "yet another list library".
05-31-2017, 06:30 PM (This post was last modified: 05-31-2017 06:36 PM by pier4r.)
Post: #107
pier4r Senior Member Posts: 2,111 Joined: Nov 2014
RE: Programming puzzles: processing lists!
thanks for the info David!
(05-31-2017 05:44 PM)DavidM Wrote: GoferList's Copy is orders of magnitude slower than LMRPT, and the performance degradation is stark as the quantity of element duplication grows. When duplicating a list of 1 item 1000 times (the worst case in my tests), Copy required 528.8 seconds. LMRPT completed the same task in 0.11 seconds.
...
Then there's all the remaining commands that have no counterparts. That's a large list from GoferList, but only a few from my library (LCNT, LEQ, LGRP, LREPL, LROT, LRPCT, LSDIV, LSHF, LSWP). That may grow over time, though. The results of testing so far have convinced me that there's enough of a value proposition to warrant further development effort for "yet another list library".
That is exactly the point that I see over and over, in contrast with https://xkcd.com/927/ . One starts a project for his personal experience, looking at existing known projects like "oh, it will surely be dismal compared to that result" (existing known because we never know how many others are out there that are not shared, what a pity).
But instead the "yet another attempt" is great for many reasons. First and foremost for personal experience. Second, one never knows where a new development can lead over time. Third, it is yet another example for someone that wants to learn. If the conditions around the attempt are not the same (i.e. a product that is commercial or closed source vs a product that is open source), the new attempt surely fits one condition that the older one did not. Maybe the new attempt is easier to expand since it is cleaner, better documented, whatever. And I could continue mentioning many other factors.
In other words, if one can do a "yet another attempt", then he has all my support.
Already the library from David can be a complement to goferlist (even in the case it is restricted to a certain input type, it does not matter) and has inspired some more challenges that can help someone (at least me) getting better with RPL and processing lists.
Also, I'm pretty sure I will go through some challenges with the newRPL.
Wikis are great, Contribute :)
06-03-2017, 05:14 PM
Post: #108
DavidM Senior Member Posts: 873 Joined: Dec 2013
RE: Programming puzzles: processing lists!
As promised, I've put together some timing data for the commands in the library I posted recently. The process of measuring and comparing their performance also helped to show where a couple of new commands would be very useful, so I added them to the mix.
For each test, the respective libraries were installed in Port 0 to minimize the impact of code relocation on the final results. The timing process included a forced garbage collection just prior to command invocation in order to "level the playing field" for all commands.
In all of the following tables, the entries in the "50-100-500-1000" columns represent the time required for a single invocation of the command with a list containing the quantity of elements in the column header. The times listed are in seconds. Most commands were tested multiple times (the count designated as "Cycles") and the reported time is an average of all of the tests.
Several of my commands are very similar to GoferList commands, and not surprisingly have similar performance characteristics:
\begin{array}{|lrrrrc|}
\hline
\textbf{Command} & \textbf{50} & \textbf{100} & \textbf{500} & \textbf{1000} & \textbf{Cycles} \\
\hline
\textbf{Repeat} & 0.0188 & 0.0245 & 0.0670 & 0.1195 & 50 \\
\textbf{LNDUP} & 0.0161 & 0.0197 & 0.0477 & 0.0816 & 50 \\
\hline
\textbf{Split} & 0.0183 & 0.0239 & 0.0705 & 0.1257 & 50 \\
\textbf{LSPLT} & 0.0179 & 0.0208 & 0.0451 & 0.0755 & 50 \\
\hline
\textbf{Chars} & 0.0530 & 0.0911 & 0.5576 & 1.1568 & 50 \\
\textbf{S→SL} & 0.0488 & 0.0831 & 0.5237 & 1.0705 & 50 \\
\hline
\textbf{Strcat} & 0.1816 & 0.3429 & 1.6582 & 3.6404 & 50 \\
\textbf{LSUM} & 0.1635 & 0.3137 & 1.5390 & 3.3770 & 50 \\
\hline
\end{array}
A couple more are functionally equivalent, but there were significant differences in performance, especially with larger lists:
\begin{array}{|lrrrrc|}
\hline
\textbf{Command} & \textbf{50} & \textbf{100} & \textbf{500} & \textbf{1000} & \textbf{Cycles} \\
\hline
\textbf{Copy} & 0.1954 & 0.6454 & 41.9425 & 528.8188 & 1 \\
\textbf{LMRPT} & 0.0274 & 0.0319 & 0.0663 & 0.1075 & 50 \\
\hline
\textbf{Nub} & 0.1848 & 0.5596 & 10.9966 & 78.4019 & 3 \\
\textbf{LDDUP} & 0.1368 & 0.4644 & 12.4826 & 135.7304 & 3 \\
\hline
\end{array}
LMRPT uses a Saturn code routine at its core to replicate the needed objects, so I wasn't surprised to see it perform well. LDDUP is simply a wrapper around a built-in ROM command (COMPRIMext). I will spend some time later in an effort to see if I can come up with a faster alternative.
I initially had created commands to convert strings to numeric lists and back (S→NL and NL→S), but saw that the GoferList library approached that concept differently (strings are converted to lists of characters instead). It seemed like both approaches had merit, so I created a similar string list command for my library (S→SL). I didn't see a need for the reverse function, as ΣLIST (which is used by LSUM) handles that job nicely. It's an easy thing to convert a list of characters to numbers by simply passing the list to the built-in NUM command, so I thought I'd see how the combination of GoferList's Chars command followed by NUM compared with S→NL:
\begin{array}{|lrrrrc|}
\hline
\textbf{Command} & \textbf{50} & \textbf{100} & \textbf{500} & \textbf{1000} & \textbf{Cycles} \\
\hline
\textbf{Chars NUM} & 0.2399 & 0.4569 & 2.7059 & 6.1926 & 50 \\
\textbf{S→NL} & 0.0551 & 0.0951 & 0.5922 & 1.2080 & 50 \\
\hline
\end{array}
The reverse of the above functions can easily be tested with another combination (CHR Strcat), and the results took a surprising turn:
\begin{array}{|lrrrrc|}
\hline
\textbf{Command} & \textbf{50} & \textbf{100} & \textbf{500} & \textbf{1000} & \textbf{Cycles} \\
\hline
\textbf{CHR Strcat} & 0.3747 & 0.7166 & 3.8709 & 9.1431 & 10 \\
\textbf{NL→S} & 0.0879 & 0.1654 & 0.9203 & 9.4550 & 10 \\
\hline
\end{array}
Notice the timing of NL→S for a list of 1000 numbers. My suspicion is that a garbage collection is getting triggered during the processing of the large list, as the other list sizes showed much better performance. I will add that command to my list of things to check later for improvement.
The next category of commands could be described as "similar but not functionally equivalent", and is essentially based on GoferList's Zip/Unzip commands. I actually came up with the idea for my similar functions (LCLLT/LDST) before I ever saw the GoferList commands, so they aren't designed to do quite the same things. It's not too difficult to compare them, though, as long as you understand the following:
Zip is designed to handle a specific instance of the type of data that LCLLT can process, and will return results even when the sublists involved aren't the same size. LCLLT requires all sublists to be the same size, but can handle any number of lists. Zip returns its results as a list of sublists, but LCLLT returns all of the data in one single list. So conversions are required in both directions when comparing these complementary functions:
\begin{array}{|lrrrrc|}
\hline
\textbf{Command} & \textbf{50} & \textbf{100} & \textbf{500} & \textbf{1000} & \textbf{Cycles} \\
\hline
\textbf{Zip} & 0.1134 & 0.1863 & 0.8451 & 1.8365 & 50 \\
\textbf{LCLLT DUP SIZE 2 / LSDIV} & 0.0705 & 0.1128 & 0.7510 & 2.0134 & 50 \\
\hline
\textbf{Zip ΣLIST} & 0.2651 & 0.6426 & 21.6099 & 319.9845 & 1 \\
\textbf{LCLLT} & 0.0322 & 0.0426 & 0.1271 & 0.2285 & 50 \\
\hline
\textbf{Unzip} & 0.1468 & 0.2423 & 1.1414 & 2.6013 & 10 \\
\textbf{ΣLIST 2 LDST} & 0.1780 & 0.4396 & 6.5892 & 24.4145 & 10 \\
\hline
\textbf{DROP DUP SIZE 2 / LSDIV Unzip 2 →LIST} & 0.1814 & 0.3041 & 1.7102 & 4.2611 & 5 \\
\textbf{LDST} & 0.0272 & 0.0346 & 0.0948 & 0.1667 & 5 \\
\hline
\end{array}
The above tests had two glaring standouts, and both of them involved the ΣLIST command. This caused me to look more closely at what was happening, and it should come as no surprise that I discovered that using ΣLIST to "explode" the contents of a list of lists was not a good idea. Whereas ΣLIST is good at many things, that particular purpose is not its strongest. And that's putting it nicely. So I opted to create my own version of a command for that purpose: LXIL (List eXplode Inner Lists). It won't win any awards for speed, but compared to ΣLIST it is much faster for its stated purpose:
\begin{array}{|lrrrrc|}
\hline
\textbf{Command} & \textbf{50} & \textbf{100} & \textbf{500} & \textbf{1000} & \textbf{Cycles} \\
\hline
\textbf{ΣLIST} & 0.3333 & 0.9277 & 53.9323 & 658.0647 & 1 \\
\textbf{LXIL} & 0.0527 & 0.1102 & 1.4303 & 4.8115 & 20 \\
\hline
\end{array}
Using LXIL instead of ΣLIST in the above tests gave much better results:
\begin{array}{|lrrrrc|}
\hline
\textbf{Command} & \textbf{50} & \textbf{100} & \textbf{500} & \textbf{1000} & \textbf{Cycles} \\
\hline
\textbf{Zip ΣLIST} & 0.2651 & 0.6426 & 21.6099 & 319.9845 & 1 \\
\textbf{Zip LXIL} & 0.1388 & 0.2382 & 1.4557 & 3.7946 & 10 \\
\hline
\textbf{ΣLIST 2 LDST} & 0.1780 & 0.4396 & 6.5892 & 24.4145 & 10 \\
\textbf{LXIL 2 LDST} & 0.0530 & 0.0866 & 0.7026 & 2.1245 & 10 \\
\hline
\end{array}
...so the final results for the Zip/LDST comparisons became:
\begin{array}{|lrrrrc|}
\hline
\textbf{Command} & \textbf{50} & \textbf{100} & \textbf{500} & \textbf{1000} & \textbf{Cycles} \\
\hline
\textbf{Zip LXIL} & 0.1388 & 0.2382 & 1.4557 & 3.7946 & 10 \\
\textbf{LCLLT} & 0.0328 & 0.0422 & 0.1264 & 0.2272 & 10 \\
\hline
\textbf{Unzip} & 0.1467 & 0.2401 & 1.1347 & 2.5791 & 10 \\
\textbf{LXIL 2 LDST} & 0.0530 & 0.0866 & 0.7026 & 2.1245 & 10 \\
\hline
\end{array}
The final category of testing was simply the commands in my library that had no GoferList counterparts. I thought it would be nice just to see how each command performed with those same list sizes:
\begin{array}{|lrrrrc|}
\hline
\textbf{Command} & \textbf{50} & \textbf{100} & \textbf{500} & \textbf{1000} & \textbf{Cycles} \\
\hline
\textbf{LCNT - none matched} & 0.0245 & 0.0347 & 0.1113 & 0.2086 & 20 \\
\textbf{LCNT - 100\% matched} & 0.0334 & 0.0515 & 0.2954 & 0.6088 & 20 \\
\hline
\textbf{LEQ} & 0.0240 & 0.0346 & 0.1189 & 0.2249 & 50 \\
\hline
\textbf{LGRP - Sorted Data} & 0.0304 & 0.0568 & 0.3541 & 1.0933 & 10 \\
\textbf{LGRP - Shuffled Data} & 0.0456 & 0.1030 & 1.2533 & 3.6518 & 10 \\
\hline
\textbf{LRPCT - Sorted Data} & 0.0500 & 0.1068 & 0.7310 & 2.3372 & 10 \\
\textbf{LRPCT - Shuffled Data} & 0.0907 & 0.2513 & 3.1239 & 9.4632 & 10 \\
\hline
\textbf{LREPL - nothing replaced - 1 target} & 0.0716 & 0.1479 & 1.4741 & 5.0333 & 50 \\
\textbf{LREPL - 100\% replaced - 1 target} & 0.0724 & 0.1507 & 1.4747 & 5.0292 & 50 \\
\textbf{LREPL - nothing replaced - 20 targets} & 0.1283 & 0.3022 & 2.2166 & 6.4884 & 50 \\
\textbf{LREPL - 100\% replaced - 20 targets} & 0.1486 & 0.2664 & 2.0512 & 6.1830 & 50 \\
\hline
\textbf{LROT} & 0.0257 & 0.0338 & 0.1008 & 0.1819 & 50 \\
\hline
\textbf{LSDIV} & 0.0611 & 0.1145 & 1.1616 & 3.3086 & 20 \\
\hline
\textbf{LSHF} & 0.2523 & 0.5041 & 2.9164 & 5.9838 & 20 \\
\hline
\textbf{LSWP} & 0.0245 & 0.0309 & 0.0823 & 0.1436 & 50 \\
\hline
\end{array}
The process of going through all of this testing showed me that there was definitely some value in pursuing the development of these additional commands, and even the similar ones to GoferList's have varying degrees of performance improvement in most instances. As a result, I'll continue to refine my contributions and probably add a few more commands before I'm done with it.
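As a footnote for readers following along in another language, the "explode inner lists" behavior of LXIL amounts to flattening one level of nesting; a tiny Python model (ours):

Code:
def lxil(list_of_lists):
    # Model of LXIL: explode one level of inner lists
    return [x for inner in list_of_lists for x in inner]

print(lxil([[1, 2], [3], [4, 5, 6]]))  # [1, 2, 3, 4, 5, 6]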
06-03-2017, 08:00 PM
Post: #109
pier4r Senior Member Posts: 2,111 Joined: Nov 2014
RE: Programming puzzles: processing lists!
Awesome!
I hope you'll put this also in the general software library!
Wikis are great, Contribute :)
06-04-2017, 02:31 PM
Post: #110
DavidM Senior Member Posts: 873 Joined: Dec 2013
RE: Programming puzzles: processing lists!
(06-03-2017 08:00 PM)pier4r Wrote: I hope you'll put this also in the general software library!
It's still "in progress", and not quite ready for a GSL post.
Yesterday I took a look at how STREAM was implemented, and realized that it was utilizing an unusual method for biting off list elements one piece at a time for feeding to some process. Under the right conditions, this method can not only save loop time, but it also has benefits regarding memory fragmentation that can improve performance both immediately and later on.
I'm able to make use of this technique in several places in my code, so I've been going through the library and re-coding segments where appropriate. From the testing I've done, it's definitely worth the trouble.
06-05-2017, 11:31 AM (This post was last modified: 06-05-2017 12:05 PM by Gilles59.)
Post: #111
Gilles59 Member Posts: 136 Joined: Jan 2017
RE: Programming puzzles: processing lists!
#8 is easy with GoferList
Code:
Spoiler:
IF DUPDUP SIZE 2 / Split ≠ THEN DROP {} END
You can replace {} by "Invalid", but I don't like it when a program doesn't always return the same type of object.
06-05-2017, 11:38 AM (This post was last modified: 06-05-2017 12:54 PM by pier4r.)
Post: #112
pier4r Senior Member Posts: 2,111 Joined: Nov 2014
RE: Programming puzzles: processing lists!
(06-04-2017 02:31 PM)DavidM Wrote: It's still "in progress", and not quite ready for a GSL post.
Yesterday I took a look at how STREAM was implemented, and realized that it was utilizing an unusual method for biting off list elements one piece at a time for feeding to some process. Under the right conditions, this method can not only save loop time, but it also has benefits regarding memory fragmentation that can improve performance both immediately and later on.
I'm able to make use of this technique in several places in my code, so I've been going through the library and re-coding segments where appropriate. From the testing I've done, it's definitely worth the trouble.
Ok, at least you post it here, better than nothing.
Anyway added some challenge to the first post. Now there are 30.
The 30th looks particularly evil. I need to go through it for some statistics returned by a program of mine.
edit:
@DavidM. One thing I noted on hpcalc.org is that people put great work there (even if still in progress) but with little documentation. You have your list package in a post above and that is great, but you missed the documentation in the package. That is, the user has to copy your post to provide a "txt" on the 50g describing how the commands work, because one does not always have a link to your post.
So my request/tip is: could you put your nice documentation together in the zip package, so one knows there is a help guide when needed? For the rest, kudos! I hope you will share your most updated version when you can.
Wikis are great, Contribute :)
06-05-2017, 11:48 AM (This post was last modified: 06-05-2017 11:54 AM by Gilles59.)
Post: #113
Gilles59 Member Posts: 136 Joined: Jan 2017
RE: Programming puzzles: processing lists!
(06-03-2017 08:00 PM)pier4r Wrote: Awesome!
I hope you'll put this also in the general software library!
I agree ;D
What about a DOPERM command to handle list permutations.
Something like :
Code:
INPUT:
{ 1 2 3 4 }        @ A list
2                  @ Number of elements for permutation
<< My program >>   @ to handle each permutation
DOPERM
OUTPUT (if MyProgram does nothing):
{ {1, 2}, {1, 3}, {1, 4}, {2, 1}, {2, 3}, {2, 4}, {3, 1}, {3, 2}, {3, 4}, {4, 1}, {4, 2}, {4, 3} }
06-05-2017, 12:01 PM (This post was last modified: 06-05-2017 12:06 PM by Gilles59.)
Post: #114
Gilles59 Member Posts: 136 Joined: Jan 2017
RE: Programming puzzles: processing lists!
#30 with GoferList
Code:
Spoiler:
SWAP Zip SORT Unzip SWAP
06-05-2017, 12:20 PM
Post: #115
Gilles59 Member Posts: 136 Joined: Jan 2017
RE: Programming puzzles: processing lists!
#11
Code:
Spoiler:
DUPDUP IF REVLIST ≠ THEN DROP "Inv." END
06-05-2017, 12:32 PM (This post was last modified: 06-05-2017 12:33 PM by Gilles59.)
Post: #116
Gilles59 Member Posts: 136 Joined: Jan 2017
RE: Programming puzzles: processing lists!
#22
Code:
Spoiler:
DUP2 « ≠ » DOLIST * ADD
06-05-2017, 12:55 PM
Post: #117
pier4r Senior Member Posts: 2,111 Joined: Nov 2014
RE: Programming puzzles: processing lists!
(06-05-2017 12:01 PM)Gilles59 Wrote: #30 with GoferList
Uh that is short. Clever!
Thanks for contributing, Gilles59, but could it be that you are Gilles Carpentier? I remember that that user was an enthusiast of the GoferList. (I discovered it also through his posts.)
Wikis are great, Contribute :)
06-05-2017, 01:05 PM
Post: #118
Gilles59 Member Posts: 136 Joined: Jan 2017
RE: Programming puzzles: processing lists!
(06-05-2017 12:55 PM)pier4r Wrote:
(06-05-2017 12:01 PM)Gilles59 Wrote: #30 with GoferList
Uh that is short. Clever!
Thanks for contributing Gilles59, but could be that you are Gilles Carpentier ? I remember that that user was an enthusiast of the goferlist. (I discovered it also through his posts)
Hi, Yes I'm ;D
06-05-2017, 01:07 PM
Post: #119
pier4r Senior Member Posts: 2,111 Joined: Nov 2014
RE: Programming puzzles: processing lists!
(06-05-2017 11:48 AM)Gilles59 Wrote: I agree ;D
What about a DOPERM command to handle list permutations.
Something like :
Code:
INPUT:
{ 1 2 3 4 }        @ A list
2                  @ Number of elements for permutation
<< My program >>   @ to handle each permutation
DOPERM
OUTPUT (if MyProgram does nothing):
{ {1, 2}, {1, 3}, {1, 4}, {2, 1}, {2, 3}, {2, 4}, {3, 1}, {3, 2}, {3, 4}, {4, 1}, {4, 2}, {4, 3} }
Actually that is one of the challenges: to write a program that gives you all the permutations of a list (sure, when the list is large it may hog the system).
But I did not find any DOPERM on the AUR. Where do you get this command from?
Wikis are great, Contribute :)
06-05-2017, 01:42 PM
Post: #120
Gilles59 Member Posts: 136 Joined: Jan 2017
RE: Programming puzzles: processing lists!
(06-05-2017 01:07 PM)pier4r Wrote:
(06-05-2017 11:48 AM)Gilles59 Wrote: I agree ;D
What about a DOPERM command to handle list permutations.
Something like :
Code:
INPUT:
{ 1 2 3 4 }        @ A list
2                  @ Number of elements for permutation
<< My program >>   @ to handle each permutation
DOPERM
OUTPUT (if MyProgram does nothing):
{ {1, 2}, {1, 3}, {1, 4}, {2, 1}, {2, 3}, {2, 4}, {3, 1}, {3, 2}, {3, 4}, {4, 1}, {4, 2}, {4, 3} }
actually that is one of the challenges. To write a program that gives you all the permutations of a list (sure, when the list is large it may hog the system).
But I did not find any DOPERM on the AUR. Where do you get this command from?
As far as I know this command exists neither in the AUR nor in a library :/
I wrote such a thing long ago in UserRPL but it was awfully slow and very badly coded! It would be great if somebody could create this in SysRPL.
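In the meantime, the requested behavior is easy to model on a PC; a short Python sketch (ours, just to pin down the spec):

Code:
from itertools import permutations

def doperm(lst, r, prog=lambda p: p):
    # Model of the proposed DOPERM: apply prog to each r-permutation
    return [prog(list(p)) for p in permutations(lst, r)]

print(doperm([1, 2, 3, 4], 2))
# [[1, 2], [1, 3], [1, 4], [2, 1], [2, 3], [2, 4], ..., [4, 3]]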
https://forum.zkoss.org/question/74518/menu-links-on-left-side-and-page-content-on-right-side/ | # Menu Links on Left Side and Page Content on Right Side
raviteja
90
Hi all,
I want to show my page with menu links on the left side and the related content on the right side of the same page. How can I achieve this using ZK?
I want to use only div tags, not an iframe.
Can anybody help me out with this by posting some example code?
## 17 Replies
twiegand
1807 3
raviteja,
I would suggest that the easiest way to accomplish this is through the use of ZK's borderlayout component - like this:
<zk>
<window height="100%">
<borderlayout>
<west>
<vlayout>
<!-- menu links go here -->
</vlayout>
</west>
<center>
<div align="center">
Content goes here
</div>
</center>
</borderlayout>
</window>
</zk>
However, if you really want to do this with only <div> tags, then you could do something like this:
<zk>
<window height="100%">
<div width="30%" style="float:left;">
<vlayout>
<!-- menu links go here -->
</vlayout>
</div>
<div width="70%" style="float:left;">
<div align="center">
Content goes here
</div>
</div>
<div style="clear:both;"/>
</window>
</zk>
Hopefully that will give you a couple of ideas.
Regards,
Todd
zknewbie1
370 4
I'd like to see the code to dynamically update the content of the right div area when clicking the menu item on the left div area, for example, if I click "Menu Item 1", then the right div area will display the text "This is Menu Item 1 contents....", if clicking "Menu Item 2", then right div will display "This is Menu Item 2 contents", etc....
twiegand
1807 3
zknewbie1,
Which version of the example do you want your modifications applied to?
Todd
zknewbie1
370 4
I'd like to see both if it's not too much trouble. Otherwise, I'd like to see the easiest way to do it. Thanks very much Twiegand..
twiegand
1807 3
zknewbie1,
Let's start with the <borderlayout> version. This first example just uses event handling to set the label in the center section.
<zk>
<window height="100%">
<borderlayout>
<west>
<vlayout>
<label value="Section 1" onClick='lbl.setValue("Section 1 Content Here")'/>
<label value="Section 2" onClick='lbl.setValue("Section 2 Content Here")'/>
<label value="Section 3" onClick='lbl.setValue("Section 3 Content Here")'/>
</vlayout>
</west>
<center>
<div align="center">
<label id="lbl"/>
</div>
</center>
</borderlayout>
</window>
</zk>
Next, let's add a controller to the above example:
Composer
import org.zkoss.zk.ui.event.ForwardEvent;
import org.zkoss.zk.ui.util.GenericForwardComposer;
import org.zkoss.zul.Label;

public class MyComposer extends GenericForwardComposer {
    protected Label lbl;

    // Invoked via forward="onClick=onClickMenu" on each menu label
    public void onClickMenu(ForwardEvent event) {
        lbl.setValue(((Label) event.getOrigin().getTarget()).getValue() + " goes here");
    }
}
zul
<zk>
<window apply="MyComposer" height="100%">
<borderlayout>
<west>
<vlayout>
<label value="Section 1" forward="onClick=onClickMenu"/>
<label value="Section 2" forward="onClick=onClickMenu"/>
<label value="Section 3" forward="onClick=onClickMenu"/>
</vlayout>
</west>
<center>
<div align="center">
<label id="lbl"/>
</div>
</center>
</borderlayout>
</window>
</zk>
Using the forward event in this manner lets you have a single method that determines which menu item the user clicked and then sets the content label accordingly.
Note that if you wanted to set an ID on the menu label, you could interrogate that instead of the label's value by changing the statement in the onClickMenu() method to look like this instead:
lbl.setValue(event.getOrigin().getTarget().getId() + " goes here");
I hope that helps,
Todd
zknewbie1
370 4
Thanks Todd. I was thinking about the contents of each menu item as a separate Zul page, for example, MenuItem1 is tied to Content1.zul, MenuItem2 is to Content2.zul, .... So, from your example, which method should I use to dynamically associate the <center> area with the appropriate content zul file?
twiegand
1807 3
zknewbie1,
Try this:
Composer
import org.zkoss.zk.ui.event.ForwardEvent;
import org.zkoss.zk.ui.util.GenericForwardComposer;
import org.zkoss.zul.Include;

public class MyComposer extends GenericForwardComposer {
    protected Include centerSection;

    // Invoked via forward="onClick=onClickMenu" on each menu label
    public void onClickMenu(ForwardEvent event) {
        centerSection.setSrc(event.getOrigin().getTarget().getId() + ".zul");
    }
}
main.zul
<zk>
<window apply="MyComposer" height="100%">
<borderlayout>
<west>
<vlayout>
<label id="section1" value="Section 1" forward="onClick=onClickMenu"/>
<label id="section2" value="Section 2" forward="onClick=onClickMenu"/>
<label id="section3" value="Section 3" forward="onClick=onClickMenu"/>
</vlayout>
</west>
<center>
<div align="center">
<include id="centerSection" />
</div>
</center>
</borderlayout>
</window>
</zk>
section1.zul
<zk>
<separator height="50px"/>
Section 1 Content
</zk>
section2.zul
<zk>
<separator height="50px"/>
Section 2 Content
</zk>
section3.zul
<zk>
<separator height="50px"/>
Section 3 Content
</zk>
Hope that helps,
Todd
zknewbie1
370 4
Hi Todd, that's perfect. Thanks so much for all your help..
boloi
105
Hi all,
I have a layout consisting of: north, west, center.
West: has a tree menu.
It is the same as the http://www.zkoss.org/zksandbox/#l1 demo. When a menu item on the left is clicked, the content should be loaded into the center.
http://www.oxitec.de/
You must adapt the attached sample code to the component names declared in your zul file.
/* get an instance of the borderlayout defined in the zul-file */
Borderlayout bl = (Borderlayout) Path.getComponent("/outerIndexWindow/borderlayoutMain");
/* get an instance of the searched CENTER layout area */
Center center = bl.getCenter();
/* clear the center child comps */
center.getChildren().clear();
/* create the page and put it in the center layout area */
Executions.createComponents(zulFilePathName, center, null);
best
Stephan
http://vm.udsu.ru/issues/archive/issue/2019-2-1
Archive of Issues (Izhevsk, Russia)
Year: 2019, Volume: 29, Issue: 2, Pages: 135-152
Section: Mathematics
Title: On the extension of a Riemann-Stieltjes integral
Author(s): Derr V.Ya.
Affiliation: Udmurt State University
Abstract: In this paper, the properties of regulated functions and the so-called $\sigma$-continuous functions (i.e., the bounded functions for which the set of discontinuity points is at most countable) are studied. It is shown that the $\sigma$-continuous functions are Riemann-Stieltjes integrable with respect to continuous functions of bounded variation. Helly's limit theorem for such functions is also proved. Moreover, Riemann-Stieltjes integration of $\sigma$-continuous functions with respect to arbitrary functions of bounded variation is considered. To this end, a $(*)$-integral is introduced. This integral consists of two terms: (i) the classical Riemann-Stieltjes integral with respect to the continuous part of a function of bounded variation, and (ii) the sum of the products of the integrand by the jumps of the integrator. In other words, the $(*)$-integral makes it possible to consider a Riemann-Stieltjes integral with a discontinuous function as an integrand or an integrator. The properties of the $(*)$-integral are studied. In particular, a formula for integration by parts, a theorem on inversion of the order of integration, and all limit theorems necessary in applications, including a limit theorem of Helly's type, are proved.
Keywords: functions of bounded variation, regulated functions, $\sigma$-continuous functions, Riemann-Stieltjes integral, $(*)$-integral
UDC: 517.518.126
MSC: 26B30, 26A42
DOI: 10.20537/vm190201
Received: 18 March 2019
Language: Russian
Citation: Derr V.Ya. On the extension of a Riemann-Stieltjes integral, Vestnik Udmurtskogo Universiteta. Matematika. Mekhanika. Komp'yuternye Nauki, 2019, vol. 29, issue 2, pp. 135-152.
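Schematically, the two-term structure described in the abstract can be written as follows (our notation, not the paper's): if a function of bounded variation $g$ splits as $g = g_c + g_d$ into a continuous part $g_c$ and a jump part with jumps $\Delta g(\tau_k)$ at points $\tau_k$, then

$$(*)\int_a^b f \, dg = \int_a^b f \, dg_c + \sum_k f(\tau_k)\, \Delta g(\tau_k).$$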
https://www.neetprep.com/question/51384-input-resistance-silicon-transistor-Base-current-changed-Awhich-results-change-collector-currentby--mA-transistor-used-commomemitter-amplifier-load-resistance--kThe-voltage-gain-amplifier-isa-b-c-d-/55-Physics--Nuclei/703-Nuclei | # NEET Physics Nuclei Questions Solved
NEET - 2012
The input resistance of a silicon transistor is 100 $\Omega$. The base current is changed by 40 $\mu\mathrm{A}$, which results in a change in collector current of 2 mA. This transistor is used as a common-emitter amplifier with a load resistance of 4 k$\Omega$. The voltage gain of the amplifier is
(a) 2000
(b) 3000
(c) 4000
(d) 1000
We know that the voltage gain of a common-emitter amplifier is
$V_g = \dfrac{\Delta I_C}{\Delta I_B} \times \dfrac{R_L}{R_i}$
On putting the given values,
$V_g = \dfrac{2 \times 10^{-3}}{40 \times 10^{-6}} \times \dfrac{4 \times 10^{3}}{100} = 2000$
So the correct option is (a).
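A one-line numeric check (ours):

```python
d_ic, d_ib = 2e-3, 40e-6   # collector and base current changes, in amperes
r_load, r_in = 4e3, 100.0  # load and input resistance, in ohms
print((d_ic / d_ib) * (r_load / r_in))  # 2000.0
```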
https://math.stackexchange.com/questions/1501298/show-that-for-every-set-of-18-integers-there-will-be-two-that-are-divisible-by-1?noredirect=1 | # Show that for every set of 18 integers there will be two that are divisible by 17 [closed]
I understand the pigeonhole principle is needed here and I see the solution in the back of the book, but the explanation is weak. If anyone could explain step-by-step that would be awesome!
## closed as off-topic by Daniel, Harish Chandra Rajpoot, user147263, N. F. Taussig, yoknapatawpha Oct 28 '15 at 13:36
This question appears to be off-topic. The users who voted to close gave this specific reason:
• "This question is missing context or other details: Please improve the question by providing additional context, which ideally includes your thoughts on the problem and any attempts you have made to solve it. This information helps others identify where you have difficulties and helps them write answers appropriate to your experience level." – Daniel, Harish Chandra Rajpoot, yoknapatawpha
• As it is stated it's false : the set $\{17k+1 : k\in[|1,18|]\}$ has no number divisible by $17$. Maybe it's "the difference between two integers" instead – Tryss Oct 28 '15 at 4:27
Given a set of $18$ integers $x_1,x_2,\dots,x_{18}$, there will exist two integers $x_i, x_j$ with $1\leq i<j\leq 18$ such that $x_i-x_j$ is divisible by $17$.
By the quotient remainder theorem, each of the integers $x_i$ can be written in a unique way as $17q_i+r_i$ where $q_i$ and $r_i$ are both integers and $0\leq r_i<17$. We treat the values of $r_i$ as the holes and the elements in our set as the pigeons. There are then $17$ holes and $18$ pigeons.
By the pigeon-hole principle, there must be two terms with the same remainder. Without loss of generality, suppose that it was $x_i$ and $x_j$ where $x_i=17q_i+r$ and $x_j=17q_j+r$. Then $x_i-x_j = 17q_i+r-17q_j-r = 17(q_i-q_j)$ is therefore divisible by $17$.
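The proof is constructive, so it translates directly into code. A minimal sketch (names are mine, not from the book) that finds such a pair by hashing remainders:

```java
import java.util.HashMap;
import java.util.Map;

public class PigeonholeDemo {
    /** For any 18 (or more) integers, returns a pair whose difference is divisible by 17. */
    static int[] pairWithDifferenceDivisibleBy17(int[] xs) {
        Map<Integer, Integer> seen = new HashMap<>(); // remainder -> an element with it
        for (int x : xs) {
            int r = Math.floorMod(x, 17);  // remainder in {0,...,16}: the "holes"
            if (seen.containsKey(r)) return new int[]{seen.get(r), x};
            seen.put(r, x);                // at most 17 holes for 18 pigeons
        }
        throw new AssertionError("unreachable for 18 or more integers");
    }

    public static void main(String[] args) {
        int[] xs = new int[18];
        for (int i = 0; i < 18; i++) xs[i] = i * i + 3; // any 18 integers work
        int[] pair = pairWithDifferenceDivisibleBy17(xs);
        System.out.println((pair[1] - pair[0]) % 17);    // prints 0
    }
}
```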
The way it is currently worded (Show that for every set of 18 integers there will be two that are divisible by 17) is false. We can have a set of $18$ integers such that none of them are divisible by $17$, for example the set $\{1,2,3,4,\dots,16,18,19\}$.
Even if we were to add the additional constraint that the integers be consecutive, we have the counterexample $\{1,2,3,\dots,18\}$, which has exactly one element divisible by $17$, not two.
http://commons.apache.org/proper/commons-rng/commons-rng-simple/apidocs/org/apache/commons/rng/simple/package-summary.html | # Package org.apache.commons.rng.simple
Randomness providers
• Class Summary
  • JDKRandomBridge — Subclass of Random that delegates to a RestorableUniformRandomProvider instance but will otherwise rely on the base class for generating all the random types.
• Enum Summary
  • RandomSource — This class provides the API for creating generators of random numbers.
## Package org.apache.commons.rng.simple Description
This package provides factory methods by which low-level classes implemented in module "commons-rng-core" are instantiated.
Classes in package org.apache.commons.rng.simple.internal should not be used directly.
The generators are not thread-safe: Parallel applications must use different generator instances in different threads.
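A minimal usage sketch (assuming the 1.x factory API documented on this page; the provider and the dice-roll bound are illustrative choices):

```java
import org.apache.commons.rng.UniformRandomProvider;
import org.apache.commons.rng.simple.RandomSource;

public class RngExample {
    public static void main(String[] args) {
        // Create one instance per thread: the generators are not thread-safe.
        UniformRandomProvider rng = RandomSource.create(RandomSource.WELL_19937_C);
        double u = rng.nextDouble(); // uniformly distributed on [0, 1)
        int k = rng.nextInt(6);      // uniformly distributed on {0, ..., 5}
        System.out.println(u + " " + k);
    }
}
```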
In the case of pseudo-random generators, the source of randomness is usually a set of numbers whose bits representation are scrambled in such a way as to produce a random-looking sequence.
The main property of the sequence is that the numbers must be uniformly distributed within their allowed range.
Classes in this package do not provide any further processing of the number generation such as to match other types of distribution.
Which source of randomness to choose may depend on which properties are more important. Considerations can include speed of generation, memory usage, period size, equidistribution, correlation, etc.
For some of the generators, interesting properties (of the reference implementations) are proven in scientific papers. Some generators can also suffer from potential weaknesses.
For simple sampling, any of the generators implemented in this library may be sufficient.
For Monte-Carlo simulations that require generating high-dimensional vectors, equidistribution and non-correlation are crucial. The Mersenne Twister and Well generators have equidistribution properties proven according to their bits pool size, which is directly related to their period (all of them have maximal period: a generator with an n-bit pool has period 2^n − 1). They also have equidistribution properties for 32-bit blocks up to dimension s/32, where s is their pool size.
For example, Well19937c is equidistributed up to dimension 623 (i.e. 19937 divided by 32). It means that a Monte-Carlo simulation generating vectors of n (32-bit integer) variables at each iteration has some guarantee on the properties of its components as long as n < 623. Note that if the variables are of type double, the limit is divided by two (since 64 bits are needed to create a double).
Reference to the relevant publications are listed in the specific documentation of each class.
Memory usage can vary a lot between providers. The state of MersenneTwister is composed of 624 integers, using about 2.5 kB. The Well generators use 6 integer arrays, the length of each being equal to the pool size; thus, for example, Well44497b uses about 33 kB.
https://earthscience.stackexchange.com/tags/fluid-dynamics/hot | # Tag Info
The answer is the scale. The fluid movement of a sink has a much smaller curvature radius than the grand-scale movements of a hurricane. This curvature radius plays a big role in whether your movement due to a pressure gradient will be balanced by Coriolis or centrifugal forces, as thoroughly discussed here. You can read this wikipage, but the essence is ...
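One common way to make "scale" quantitative is the Rossby number $\mathrm{Ro}=U/(fL)$, the ratio of inertial to Coriolis accelerations (small $\mathrm{Ro}$ means rotation matters). The magnitudes below are rough, illustrative assumptions, not values taken from the answer above:

$$\mathrm{Ro}_{\text{sink}}\sim\frac{0.1\ \mathrm{m/s}}{(10^{-4}\ \mathrm{s^{-1}})(0.1\ \mathrm{m})}\sim 10^{4},\qquad \mathrm{Ro}_{\text{hurricane}}\sim\frac{10\ \mathrm{m/s}}{(10^{-4}\ \mathrm{s^{-1}})(10^{5}\ \mathrm{m})}\sim 1.$$

With $\mathrm{Ro}\gg 1$ the Coriolis term is negligible for the sink, while $\mathrm{Ro}\sim 1$ makes it essential for the hurricane.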
This question can be answered with a scaling argument. Let us start with the momentum equation (Navier-Stokes) in a non-inertial reference frame (e.g. on the rotating earth) and assuming inviscid flow (roughly true above the surface): $$\dfrac{\partial\mathbf u}{\partial t} = - \mathbf u \cdot \nabla \mathbf u -\dfrac{1}{\rho}\nabla p-2\,\mathbf \Omega \times \mathbf u - \dots$$

It's partly historical, partly point-of-view, but it's not a mistake. The friction coefficient emphasises the effect of the surface on a property of the boundary layer, i.e., greater surface friction slows the near-surface wind more. Aerodynamic resistance emphasises the effect of the boundary layer on surface-atmosphere exchange, i.e., greater mixing ...

You can think about it like this: It takes one day for the earth to perform a full rotation (about 86,000 seconds); on the other hand, it takes a few seconds for your sink to drain (let's say 10 seconds). So it takes about 8,600 times longer for the earth to do a full rotation than it takes the water to drain down the sink. It is not too hard to imagine that the earth'...

This is a good question, and the answer is, aerodynamic resistance is not defined inversely. It is, rather, defined in a context that is often misinterpreted. In your question, you state that aerodynamic resistance is basically how much the roughness of the surface slows air movement down. This statement is not correct, and it seems to stem from the ...

Here are your choices with regard to modeling the atmosphere. There aren't many, and only one of them makes sense. Model the atmosphere from the perspective of an inertial frame of reference. Good luck with that! As an advisor told me decades ago, "Name one!" It's certainly not an Earth-centered frame; the Earth is orbiting the Sun. It's certainly not Sun-...

The Coriolis acceleration is only present in a rotating reference frame, as is the case with Earth. The Coriolis effect is caused by Earth's rotation and the inertia of the mass experiencing the effect. If you are in an inertial frame of reference, thus non-accelerating, there will be no Coriolis effect. Let's assume that you are capable of modeling the ...

If your question is: I have an equation with force term F(x,t), and suppose that F(x,t) is caused by effect A, then will the solution of the equation be the same as if the force F(x,t) had been caused by effect B, then the answer is of course "yes". The same force (i.e., same magnitude, same direction) will always cause the same reaction of the system, ...

The flow accumulation algorithm essentially determines the upstream contributing area of every grid cell; in other words, what area or how many other cells will drain into a given cell. The flow accumulation algorithm is independent of rainfall as it simply determines which areas drain where, which will later be used to determine how much water actually ...

Inertial instability is similar to the centrifugal instability in that we are looking at the stability of parcels to horizontal perturbations. In the inertial case, however, the initial state is geostrophic balance rather than cyclostrophic balance. Symmetric instability is the case where a parcel is inertially stable to horizontal perturbations and ...

I think you are asking a question with a variety of different constraints. I'll tackle a couple of them. What is the simplest atmospheric model to operate? That would be the Zero-dimensional energy balance model. It has almost zero resolution and no temporal capacity.

What is an atmospheric model that can be easily installed and run? The Weather ...

Some time ago I posted this answer about how rainbows are formed, and the Wikipedia link Trond Hansen posted mentions droplet size relative to the wavelength of light. For a rainbow to form, the droplet size has to be large enough, relative to the color with the longest wavelength of visible light, for it to be refracted before reflecting off the backside ...

No, clouds don't really have a 'surface' that could have tension like a body of water. The different looks in these two examples (left Cumulonimbus Calvus and right Cumulus Humilis) are greatly dependent on how they have formed and how they are evolving now. The large Cumulonimbus is still growing at a relatively rapid speed. The cloud is reaching higher ...

The Laplace equation, $$\frac{\partial^2 \Psi}{\partial x^2}+\frac{\partial^2 \Psi}{\partial y^2}+\frac{\partial^2 \Psi}{\partial z^2}=0,$$ is just a steady-state 3D flow equation. It's a black-box conservation of hydraulic potential. Diffusion doesn't come into it. The diffusion equation (assuming homogeneous isotropic conditions) is $$\frac{\partial^2 \Psi}{\partial x^2}+\frac{\partial^2 \Psi}{\partial y^2}+\frac{\partial^2 \Psi}{\partial z^2}= \frac{S_s}{K}\,\frac{\partial h}{\partial t}.$$ This discretizes the time ...

It's not clear exactly what is being modelled here, but it seems to me that there are two ways in which the concentration can 'go negative'. Firstly, the rate of change of concentration can be massive, in which case see what happens when modelling with much smaller time steps. Or, the diffusion term substantially exceeds the advection term, which is ...

Recent literature points to an attempt to understand the theoretical dynamics behind the MJO, as seen in these two publications - Dynamics moisture mode vs. moisture mode in MJO dynamics and A general theoretical framework for understanding essential dynamics of Madden–Julian oscillation. As concluded in the original paper by Madden and Julian, the oscillation the ...

If you assume both hydrostatic and pure geostrophic balance, that is a valid assumption. In Einstein notation, $$u_i=-\frac{1}{f \rho}\frac{\partial P}{\partial x_j}\epsilon_{ij3}$$ If we look at the equation for the streamline, $$u_i=-\frac{\partial \psi}{\partial x_j}\epsilon_{ij3}$$ then we can see that $$-\frac{\partial \psi}{\partial x_j}\epsilon_{ij3}\dots$$
No, the rate of flow would usually be unaffected. The same volume of water has to get to the sea, so unless the ice was so thick and so well anchored to the riverbank as to exert pressure on the flow of water beneath, which is very unlikely, the rate of flow would remain the same. If, in the unlikely event that the ice exerted pressure on the water, the rate ...
This equation is a form of the Shallow-water equations, which are derived from Navier-Stokes in the incompressible limit and the vertical direction is integrated out. One then takes the equation for the vorticity (which has only one component then) from the Shallow-water equations. The vorticity is split into planetary vorticity, as is the Coriolis term ...
If the pumping well is partially penetrating the aquifer, or there is a source above the aquifer (e.g., an unconfined aquifer, a leaky aquifer, or a well under a stream), the vertical flow should be accounted for. A groundwater equation like Theis' solution does not consider vertical flow. Incorporating the vertical flow would change the governing equation (P.D.E.) form ...
http://clay6.com/qa/1823/the-value-of-cos-bigg-cos-frac-bigg-is- | Browse Questions
# The value of $\cos^{-1}\bigg(\cos\frac{14\pi}{3}\bigg)$ is _____________.
Toolbox:
• Principal interval of $\cos$ is $[0,\pi]$
• $\cos(2n\pi+x)=\cos x$ for all $n\in\mathbb{N}$
Ans: $\large\frac{2\pi}{3}$
Since $\large\frac{14\pi}{3}$ is not in the principal interval, reduce it to a value in the principal interval
$\large\frac{14\pi}{3}=4\pi+\large\frac{2\pi}{3}$
From the above formula we get
$\Rightarrow\:\cos\large\frac{14\pi}{3}=\cos(4\pi+\large\frac{2\pi}{3})=\cos\large\frac{2\pi}{3}$
$\large\frac{2\pi}{3}$ lies within the principal interval
$\Rightarrow\: cos^{-1} \bigg[ cos \bigg( 4\pi+\large\frac{2\pi}{3} \bigg) \bigg] = cos^{-1}cos \large\frac{2\pi}{3} =\large\frac{2\pi}{3}$
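A quick numerical check of the principal value (a throwaway sketch; Math.acos returns values in $[0,\pi]$ by construction):

```java
public class PrincipalValue {
    public static void main(String[] args) {
        // cos^{-1}(cos(14*pi/3)) should equal 2*pi/3 = 2.0943951...
        System.out.println(Math.acos(Math.cos(14 * Math.PI / 3)));
        System.out.println(2 * Math.PI / 3); // same value
    }
}
```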
https://byjus.com/question-answer/underset-x-rightarrow-0-lim-frac-1-x-5-1-3x-5x-2-is-equal-2/ | Question
# $\lim_{x\to 0}\dfrac{(1+x)^5-1}{3x+5x^2}$ is equal to
Solution
## The correct option is B. Both the numerator and the denominator approach zero, so the limit has the indeterminate form $\frac{0}{0}$ and L'Hôpital's rule applies. Differentiating the numerator and the denominator gives $\lim_{x\to 0}\dfrac{5(1+x)^4}{3+10x}$. The $\frac{0}{0}$ form is now removed, so we can substitute the limit value directly: the answer is $\dfrac{5}{3}$.
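Alternatively, the same limit follows without L'Hôpital by expanding the numerator with the binomial theorem:

$$\lim_{x\to 0}\frac{(1+x)^5-1}{3x+5x^2}=\lim_{x\to 0}\frac{5x+10x^2+10x^3+5x^4+x^5}{x(3+5x)}=\lim_{x\to 0}\frac{5+10x+10x^2+5x^3+x^4}{3+5x}=\frac{5}{3}.$$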
https://complexanalysis.gitlab.io/2004Apr19.html | # Examination 6¶
## 2004 Apr¶
Instructions. Use a separate sheet of paper for each new problem. Do as many problems as you can. Complete solutions to five problems will be considered as an excellent performance. Be advised that a few complete and well written solutions will count more than several partial solutions.
Notation. $$D(z_0,R) = \{z\in \mathbb{C}: |z-z_0|<R\}$$, $$R>0$$. For an open set $$G\subseteq \mathbb{C}$$, $$H(G)$$ will denote the set of functions which are analytic in $$G$$.
Problem 34
Let $$\gamma$$ be a rectifiable curve and let $$\varphi \in C(\gamma^*)$$. (That is, $$\varphi$$ is a continuous complex function defined on the trace, $$\gamma^*$$, of $$\gamma$$.)
Let $$F(z) = \int_\gamma \frac{\varphi(\omega)}{(\omega-z)} \, d\omega, \quad z\in \mathbb{C}\setminus \gamma^*$$.
Prove that $$F'(z) = \int_\gamma \frac{\varphi(\omega)}{(\omega-z)^2} \, dw, \quad z\in \mathbb{C}\setminus \gamma^*$$, without using Leibniz’s Rule.
Problem 35
1. State the Casorati-Weierstrass theorem.
2. Evaluate the integral
$I =\frac{1}{2\pi i} \int_{|z|=R} (z-3) \sin\left(\frac{1}{z+2}\right)\, dz \; \text{ where } R\geq 4.$
Problem 36
Let $$f(z)$$ be an entire function such that $$f(0)=1$$, $$f'(0)=0$$ and
(17)$0<|f(z)|\leq e^{|z|}, \text{ for all } z\in \mathbb C.$
Prove that $$f(z)=1$$ for all $$z\in \mathbb{C}$$.
Problem 37
Let $$C$$ be an arbitrary circle through $$-1$$ and $$1$$. Suppose that $$z_1$$ and $$z_2$$ are two points which do not lie on the circle $$C$$ and satisfy $$z_1z_2 = 1$$. Show that one of these points lies inside $$C$$ and the other lies outside $$C$$.
Problem 38
Show that there is no one-to-one analytic function that maps $$G = \{z : 0 < |z| < 1\}$$ onto the annulus $$\Omega = \{z : r < |z| < R\}$$, where $$r>0$$.
Problem 39
1. State a theorem that gives a sufficient condition for a family $$\mathcal F$$ of analytic functions to be normal in a domain $$G$$.
2. Let $$\mathcal F\subseteq H(D)$$ be a family of analytic functions on the open unit disk $$D = D(0,1)$$. Let $$\{M_n\}$$ be a sequence of positive real numbers such that $$\varlimsup_{n\to \infty} \sqrt[n]{M_n} < 1$$. If for each $$f(z) = \sum_{n=0}^\infty a_n z^n \in \mathcal F$$, $$|a_n| \leq M_n$$ for all $$n$$, prove that $$\mathcal F$$ is a normal family.
Problem 40
Is there a harmonic function $$u(z)$$ defined on the open unit disk, $$D(0, 1)$$, such that $$u(z_n) \to \infty$$ whenever $$|z_n|\to 1^-$$? Prove your answer.
Problem 41
Let $$G$$ be a simply connected domain with at least 2 boundary points. Let
$S = \{\psi \in H(G) \mid \psi\colon G \to D(0,1), \psi \text{ is one-to-one} \}.$
Prove, without using the Riemann mapping theorem, that the set $$S$$ is nonempty.
## Solutions¶
Solution to Problem 34
We give two alternative proofs of this result.
Proof 1. Consider, for some $$z_0\in \mathbb C \setminus \gamma^*$$,
$\begin{split}\frac{1}{\omega - z} &= \frac{1}{(\omega - z_0) - (z - z_0)}= \frac{1}{(\omega - z_0)\left(1 -\frac{z - z_0}{\omega - z_0}\right)} \\[4pt] &= \frac{1}{(\omega - z_0)}\sum_{n=0}^\infty \left(\frac{z - z_0}{\omega - z_0}\right)^n,\end{split}$
and the latter converges absolutely and uniformly for $$|z - z_0| < |\omega - z_0|$$.
Now fix $$z_0\in \mathbb C \setminus \gamma^*$$. Then, for all $$z$$ satisfying $$|z - z_0| < \mathrm{dist}(z_0, \gamma^*)$$,
$\begin{split}F(z) &= \int_\gamma \frac{\varphi(\omega)}{(\omega-z)} \, d\omega = \int_\gamma \frac{\varphi(\omega)}{(\omega-z_0)}\sum_{n=0}^\infty\left(\frac{z - z_0}{\omega - z_0}\right)^n \, d\omega\\[4pt] &= \sum_{n=0}^\infty \left(\int_\gamma \frac{\varphi(\omega)}{(\omega-z_0)^{n+1}}\, d\omega\right) (z - z_0)^n = \sum_{n=0}^\infty a_n (z - z_0)^n,\end{split}$
where $$a_n = \int_\gamma \frac{\varphi(\omega)}{(\omega-z_0)^{n+1}}\, d\omega$$.
(Note: interchanging the summation and integration is okay since the sum converges absolutely and uniformly.)
Since we can represent $$F$$ as such a power series about every $$z_0 \in \mathbb C \setminus \gamma^*$$, this proves $$F \in H(\mathbb C \setminus \gamma^*)$$.
By Taylor’s theorem, if $$F(z) = \sum_{n=0}^\infty a_n (z-z_0)^n$$, then $$a_n = \frac{F^{(n)}(z_0)}{n!}$$.
Comparing this with the result above, we see that
$F^{(n)}(z_0) = n! \int_\gamma \frac{\varphi(\omega)}{(\omega - z_0)^{n+1}}\, d\omega,$
holds for every $$z_0\in \mathbb C \setminus \gamma^*$$. In particular,
$F'(z_0) = \int_\gamma \frac{\varphi(\omega)}{(\omega - z_0)^2}\, d\omega.$
Proof 2. We first recall two easy facts.
Fact 1. $$\varphi$$ is uniformly continuous on the compact set $$\gamma^*$$ and the image $$\varphi[\gamma^*]$$ is compact.
Fact 2. $$F(z)$$ is a Cauchy integral of a continuous function, so $$F \in H(\mathbb C \setminus \gamma^*)$$.
Fix $$z_0\in \mathbb C \setminus \gamma^*$$ and let
$d_0 = \mathrm{dist}(z_0, \gamma^*) = \inf\{|\zeta - z_0| : \zeta \in \gamma^*\}.$
Assume $$0 < |a - z_0| < d_0/2$$. Then
$\begin{split}\frac{F(a) - F(z_0)}{a - z_0} &= \frac{1}{a-z_0}\left[\int_\gamma \frac{\varphi(\omega)}{\omega - a} \, d \omega - \int_\gamma \frac{\varphi(\omega)}{\omega - z_0} \, d \omega\right]\\[4pt] &= \frac{1}{a-z_0} \int_\gamma \frac{\varphi(\omega)[(\omega-z_0) - (\omega-a)]}{(\omega - a)(\omega - z_0)} \, d \omega\\[4pt] &= \int_\gamma \frac{\varphi(\omega)}{(\omega - a)(\omega - z_0)} \, d \omega\end{split}$
Let $$M = \sup\{|\varphi(\omega)| : \omega \in \gamma^*\}$$ and observe that $$M < \infty$$. Also, for $$\omega \in \gamma$$, we have $$|\omega - a| > d_0/2$$ and $$|\omega - z_0| \geq d_0$$. Therefore,
$\frac{|\varphi(\omega)|}{|\omega - a| |\omega - z_0|} < \frac{M}{\frac{d_0}{2}\cdot d_0}= \frac{2M}{d_0^2}.$
This holds for all $$a \in D(z_0, d_0/2) \setminus \{z_0\}$$; in particular, the integrands are uniformly bounded and converge uniformly on $$\gamma^*$$ as $$a \to z_0$$, which justifies taking the limit inside the integral below. Therefore,
$\begin{split}F'(z) &= \lim_{a \to z_0} \frac{F(a) - F(z_0)}{a - z_0} \\[4pt] &= \lim_{a \to z_0} \int_\gamma \frac{\varphi(\omega)}{(\omega-a)(\omega-z_0)} \, d\omega\\[4pt] &= \int_\gamma \lim_{a \to z_0} \frac{\varphi(\omega)}{(\omega-a)(\omega-z_0)} \, d\omega\\[4pt] &= \int_\gamma \frac{\varphi(\omega)}{(\omega-z_0)^2} \, d\omega.\end{split}$
Solution to Problem 35
1. Theorem. (Casorati-Weierstrass) Let $$f$$ be holomorphic in a region $$G \subset \mathbb C$$ except for an essential singularity at $$z_0 \in G$$. Then for every $$\omega \in \mathbb C$$ there is a sequence $$\{z_n\} \subset G$$ such that $$z_n \to z_0$$ and $$\lim_{n\to \infty}f(z_n) = \omega$$.
In other words, in every neighborhood $$U$$ of an essential singularity $$z_0$$, the image set $$f[U \setminus \{z_0\}]$$ is dense in $$\mathbb C$$.
1. We give two solutions.
Solution 1. Consider the expansion of $$(z-3) \sin\bigl(\frac{1}{z+2}\bigr)$$. The factor $$f(z) := \sin\bigl(\frac{1}{z+2}\bigr)$$ has an isolated singularity at $$z_0 = -2$$ and this is the only singularity of the integrand in $$|z|\leq R$$.
Now, recall,
$\sin z = \sum_{n=0}^\infty \frac{(-1)^n z^{2n+1}}{(2n+1)!}.$
Let $$\omega = z+2$$. Then $$z = \omega-2$$, so $$z-3 = \omega-5$$, so
$\begin{split}(z-3) \sin\bigl(\frac{1}{z+2}\bigr) &= (\omega-5)\sin \bigl(\frac{1}{\omega}\bigr)\\ &= (\omega-5)\sum_{n=0}^\infty \frac{(-1)^n}{(2n+1)!} \frac{1}{\omega^{2n+1}}\\ &= (\omega-5)\bigl[\frac{1}{\omega} - \frac{1}{3!\omega^3} + \frac{1}{5!\omega^5} - \cdots\bigr] \\ &= \bigl(1 - \frac{1}{3!\omega^2} + \frac{1}{5!\omega^4} - \cdots \bigr) - \bigl(\frac{5}{\omega} - \frac{5}{3!\omega^3} +\frac{5}{5!\omega^5} - \cdots \bigr)\\ &= 1 - \frac{5}{\omega} - \frac{1}{3!\omega^2} + \frac{5}{3!\omega^3} +\frac{1}{5!\omega^4} - \frac{5}{5!\omega^5} -\cdots\\ &= 1 - \frac{5}{z+2} - \frac{1}{3!(z+2)^2} + \frac{5}{3!(z+2)^3} +\frac{1}{5!(z+2)^4} - \cdots.\end{split}$
Thus, $$-2$$ is an essential singularity of $$f$$.
Since $$-2$$ is in the interior of $$\{z : |z| = R\}$$, and since $$-5$$ is the coefficient of $$(z-(-2))^{-1}$$, we have
$\frac{1}{2\pi i} \int_{|z| = R}(z-3)\sin\bigl(\frac{1}{z+2}\bigr) \, dz = -5.$
Solution 2. Let $$\omega = \frac{1}{z + 2}$$. Then $$d\omega = -(z + 2)^{-2}\, dz$$, or $$d\omega = -\omega^{2}\, dz$$; also, $$z = \frac{1}{\omega}-2$$. Therefore, the integral in question is
$\begin{split}I &= \frac{1}{2\pi i} \int_{C} \left(\frac{1}{\omega} - 2 - 3\right) \sin \omega \, \frac{d\omega}{-\omega^2}\\[4pt] &= \frac{1}{2\pi i} \int_{C} \left(\frac{5\omega -1}{\omega^3}\right) \sin \omega \, d\omega\\[4pt] &= \frac{5}{2\pi i} \int_{C} \frac{\sin \omega}{\omega^2} \, d\omega - \frac{1}{2\pi i} \int_{C} \frac{\sin \omega}{\omega^3} \, d\omega\end{split}$
Now, consider the curve $$C := \{\frac{1}{2 + Re^{i\theta}} : 0 \leq \theta \leq 2\pi\}$$, where $$R\geq 4$$. Of course, $$\{2 + Re^{i\theta} : 0 \leq \theta \leq 2\pi\}$$ is the circle of radius $$R$$ centered at 2, so $$\frac{1}{2 + Re^{i\theta}}$$ is the image of $$2 + Re^{i\theta}$$ under the inversion $$z\mapsto 1/z$$. (Note that $$2 + Re^{i\theta}$$ lies entirely outside the unit circle, since $$R\geq 4$$.)

We see that $$C$$ is a closed curve winding once around the origin; note, however, that the inversion reverses orientation, so as $$z$$ traverses $$|z|=R$$ counterclockwise, $$C$$ winds clockwise around $$0$$ (winding number $$-1$$). Therefore, $$I$$ is computed as

$I = -\left[5\,\mathrm{Res}\bigl(\frac{\sin \omega}{\omega^2}, 0\bigr) - \mathrm{Res}\bigl( \frac{\sin \omega}{\omega^3}, 0\bigr)\right].$
To compute the residues, we recall the Laurent expansion of a function $$f$$ with a pole of order $$m$$ at $$z = z_0$$. That is,
$f(z) = \sum_{n=-m}^\infty a_n (z - z_0)^n.$
It follows that
$(z-z_0)^m f(z) = \sum_{n=-m}^\infty a_n (z - z_0)^{n+m} = a_{-m} + a_{-m+1}(z-z_0) + a_{-m+2}(z-z_0)^2 + \cdots,$
so $$\frac{d}{dz}(z-z_0)^m f(z) = a_{-m+1} + 2a_{-m+2}(z-z_0) + \cdots$$ and
$\left(\frac{d}{dz}\right)^{m-1}(z-z_0)^m f(z) = (m-1)(m-2)\cdots 2a_{-1} + m(m-1)\cdots 2 a_0(z-z_0) + \cdots.$
Thus,
$a_{-1} = \lim_{z\to z_0} \frac{1}{(m-1)!}\left(\frac{d}{dz}\right)^{m-1}(z-z_0)^m f(z) = \mathrm{Res}(f, z_0).$
It follows that
$\mathrm{Res}(\frac{\sin \omega}{\omega^3},0) = \lim_{\omega\to 0} \frac{1}{2}\left(\frac{d}{d\omega}\right)^2\sin \omega= \lim_{\omega\to 0} \frac{-\sin \omega}{2} = 0.$
A similar calculation yields,
$\mathrm{Res}\bigl(\frac{\sin \omega}{\omega^2},0\bigr) = \lim_{\omega\to 0}\frac{d}{d\omega}\sin \omega= \lim_{\omega\to 0} \cos \omega = 1.$
Plugging these residues into the expression for $$I$$ that we derived earlier, we see that $$I = -(5\cdot 1 - 0) = -5$$, in agreement with Solution 1.
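As a sanity check on the sign, the contour integral can be approximated numerically. The following self-contained sketch (class name and sample count are ad hoc assumptions) applies the rectangle rule to $$z = Re^{it}$$, with the complex arithmetic written out by hand:

```java
public class ContourCheck {
    public static void main(String[] args) {
        double R = 4.0;
        int n = 4096; // sample count, chosen ad hoc; the integrand is smooth on |z| = R
        double sumRe = 0, sumIm = 0;
        for (int k = 0; k < n; k++) {
            double t = 2 * Math.PI * k / n;
            double zr = R * Math.cos(t), zi = R * Math.sin(t);  // z = R e^{it}
            double ar = zr + 2, ai = zi, d = ar * ar + ai * ai;
            double wr = ar / d, wi = -ai / d;                   // w = 1/(z+2)
            double sr = Math.sin(wr) * Math.cosh(wi);           // Re sin(w)
            double si = Math.cos(wr) * Math.sinh(wi);           // Im sin(w)
            double fr = (zr - 3) * sr - zi * si;                // f(z) = (z-3) sin(w)
            double fi = (zr - 3) * si + zi * sr;
            // dz = i z dt, and i z = -zi + i zr, so f(z) dz = (i z) f(z) dt
            sumRe += -zi * fr - zr * fi;
            sumIm += -zi * fi + zr * fr;
        }
        double dt = 2 * Math.PI / n;
        // I = (integral)/(2 pi i); dividing x + iy by i gives y - ix
        double IRe = (sumIm * dt) / (2 * Math.PI);
        double IIm = -(sumRe * dt) / (2 * Math.PI);
        System.out.println(IRe + " + " + IIm + "i"); // approximately -5.0 + 0.0i
    }
}
```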
Solution to Problem 36
We give two proofs.
Proof 1.
Since $$f$$ is entire and non-vanishing in $$\mathbb C$$, there exists an entire function $$g$$ such that $$f(z) = e^{g(z)}$$ for all $$z\in \mathbb C$$.
We first show $$g$$ is a constant belonging to the set $$\{i2\pi n : n \in \mathbb Z\}$$.
By (17)
(18)$0 < |f(z)| = |e^{g(z)}| \leq e^{|z|}.$
Let $$g(z) = u(z) + i v(z) = \mathfrak{Re} g(z) + i \mathfrak{Im} g(z)$$. Then, $$|e^{g(z)}| = e^{u(z)}$$. By (18), this implies $$u(z) \leq |z|$$ for all $$z\in \mathbb C$$.
Now, since $$g$$ is entire, we can write it as a power series expansion about $$z=0$$:
(19)$g(z) = \sum_{n=0}^\infty a_n z^n$
and $$f(z) = e^{g(z)}$$ implies $$f'(z) = g'(z)e^{g(z)} = g'(z) f(z)$$. Therefore, the assumptions $$f(0) = 1$$ and $$f'(0) = 0$$ yield $$g'(0)\cdot 1 = 0$$, so $$g'(0) = 0$$.
By the expansion (19), we have $$g'(z) = a_1 + 2a_2 z + \cdots$$. This and $$g'(0) = 0$$ imply $$a_1 = 0$$.
By (18), $$|e^{g(z)}| \leq e^{|z|}$$, that is,
$|e^{a_0 + a_1z + a_2z^2 + \cdots} | \leq e^{|z|},$
which holds iff
$|e^{a_0}| |e^{a_1z}| |e^{a_2z^2}| \cdots \leq e^{|z|}$
Equivalently, $$\mathfrak{Re}\, g(z) \leq |z|$$ for all $$z$$; together with the Borel–Carathéodory inequality, this gives $$|g(z)| \leq C|z|$$ for some constant $$C$$ and all large $$|z|$$, and Cauchy's estimates then force $$a_2 = a_3 = \cdots = 0$$ (and we already showed $$a_1 = 0$$).
Therefore, $$a_0$$ is the only nonzero coefficient in the expansion of $$g(z)$$. In other words, $$g(z)$$ is constant.
We now have $$f(z) = e^{a_0}$$ for some constant $$a_0 \in \mathbb C$$. Therefore, $$f$$ is constant, and, by assumption, $$f(0)=1$$. Therefore, $$f(z) = 1$$ for all $$z\in \mathbb C$$.
Proof 2. [1] By Hadamard’s factorization theorem, an entire function $$f$$ with zeros at $$\{a_n\}\subset \mathbb{C}\setminus \{0\}$$ and $$m$$ zeros at $$z=0$$ has the form
(20)$f(z) = e^{P(z)} z^m \prod_{n=0}^\infty \left(1-\frac{z}{a_n}\right) e^{z/a_n},$
where $$P(z)$$ is a polynomial of degree at most $$k$$, with $$\rho$$ the “order of growth” and $$k\leq \rho < k+1$$. For the function in question, we have $$|f(z)|>0$$ so $$\{a_n\} = \emptyset$$ and $$m=0$$. Also, since $$|f(z)|\leq e^{|z|}$$, the order of growth is $$\rho\leq 1$$, which implies that $$P(z)$$ is a polynomial of degree at most 1. Therefore, (20) takes the simple form $$f(z) = e^{Bz+C}$$, for some constants $$B, C$$. We are given that $$f(0)=1$$ and $$f'(0)=0$$, so $$e^{C} = 1$$, and $$f'(0) = Be^{C} = B = 0$$. It follows that $$f(z) = 1$$.
(An alternative proof of this result appears in Rudin’s Functional Analysis book ([Rud91]) on page 250.)
Solution to Problem 37. (coming soon)
Solution to Problem 38.
Suppose there is a holomorphic bijection $$f$$ of the open punctured unit disk, $$G = \{z \mid 0 < |z| < 1\}$$, onto the annulus, $$A(0; r, R) = \Omega = \{z \mid r < |z| < R\}$$.
Then, since $$|f(z)|$$ remains bounded, $$z=0$$ is a removable singularity and $$f \in H(D)$$ where $$f(0) = \lim_{z\to 0}f(z)$$.
Consider the possible values of $$f(0)$$. Since $$f$$ is continuous, $$f(0) \in \{\omega \mid r \leq |\omega| \leq R\}$$.
Also $$f(0) \notin \{\omega \mid |\omega|=r\}\cup \{\omega \mid |\omega|=R\}$$, the boundary of the annulus, since that would violate the open mapping theorem.
(Indeed, $$f$$ maps the open disk $$\{z \mid |z| < 1\}$$ onto the set $$\{w \mid r < |w| < R\} \cup \{f(0)\}$$, which is not open if $$|f(0)| = r$$ or $$|f(0)| = R$$.)
So, suppose $$f(0) = \omega_0 \in \Omega$$; then, since $$f$$ maps $$G$$ onto $$\Omega$$, there exists $$z_0 \in G$$ (necessarily $$z_0 \neq 0$$, as $$0 \notin G$$) such that $$f(z_0) = \omega_0 = f(0)$$.
Let $$U, V$$ be disjoint open subsets of the unit disk such that $$0\in U$$ and $$z_0\in V$$. Then (by the open mapping theorem) $$f(U) \cap f(V)$$ is a nonempty open set.
Therefore, there exists $$\omega_1 \in f(U) \cap f(V)$$ such that $$\omega_1 \neq \omega_0$$. Thus, there exists $$z_1 \in U$$ such that $$f(z_1) = \omega_1$$ and, of course, $$z_1 \notin \{0, z_0\}$$, since $$f(z_1) \neq \omega_0$$.
Similarly, $$\omega_1 \in f(U) \cap f(V)$$ implies there exists $$z_2 \in V$$ such that $$f(z_2) = \omega_1$$ and $$z_2 \notin \{0, z_0, z_1\}$$.
But then we have $$f(z_1) = \omega_1 = f(z_2)$$ and $$z_1 \neq z_2$$, contradicting the assumption that $$f$$ is a bijection (in particular, one-to-one).
Solution to Problem 39.
1. See Montel’s theorem in the Appendix.
2. By the root test, $$\sum_{n=0}^\infty M_n < \infty$$. Therefore, for every $$0\leq r < 1$$, we have,
$|f(z)| \leq \sum_{n=0}^\infty |a_n| |z|^n \leq \sum_{n=0}^\infty |a_n| \leq \sum_{n=0}^\infty M_n < \infty,$
for all $$|z| < r$$ and all $$f \in \mathcal F$$.
Since this is stronger than the sufficient condition in Montel’s theorem (i.e., local boundedness), $$\mathcal F$$ is a normal family.
Solution to Problem 40. (coming soon)
Solution to Problem 41. (coming soon)
Footnotes
[1] I was fortunate to have worked on this exam after having just read a beautiful treatment of the Hadamard factorization theorem in Stein and Shakarchi’s book [SS03]. If you need convincing that this theorem is worth studying, take a look at how easily it dispenses with this and other, otherwise challenging exam problems. Stein and Shakarchi seem to have set things up just right, so that the theorem is very easy to apply.
https://sunglee.us/mathphysarchive/ | # Modular Arithmetic
Recall the equivalence relation $\equiv\ \mathrm{mod}\ n$ on $\mathbb{Z}$ (we discussed it here) $$p\equiv q\ \mathrm{mod}\ n\ \Longleftrightarrow\ n|(p-q)$$ Let us denote by $p\ \mathrm{mod}\ n$ the remainder when $p$ is divided by $n$. The definition of the equivalence relation implies that $$p\equiv q\ \mathrm{mod}\ n\ \Longleftrightarrow\ p\ \mathrm{mod}\ n=q\ \mathrm{mod}\ n$$ (The proof is left as an exercise.) When an integer $p$ is divided by $n$, the remainder $p\ \mathrm{mod}\ n$ is an integer satisfying the inequality $0\leq p\ \mathrm{mod}\ n\leq n-1$. Therefore, the equivalence class $[p]$ is one of $[0],[1],\cdots,[n-1]$, i.e. the quotient set $\mathbb{Z}_n=\mathbb{Z}/\equiv\mathrm{mod}\ n$ is $$\mathbb{Z}_n=\{[0],[1],[2],\cdots,[n-1]\}$$ Also recall that in here we defined $+$ and $\cdot$ on $\mathbb{Z}_n$ by \begin{align*}[a]+[b]&=[a+b]\\ [a]\cdot[b]&=[a\cdot b]\end{align*} These operations are well-defined due to the following properties (see Problem Set 12 #7): if $a\equiv b\ \mathrm{mod}\ n$ and $c\equiv d\ \mathrm{mod}\ n$, then \begin{aligned}a+c&\equiv b+d\ \mathrm{mod}\ n\\a\cdot c&\equiv b\cdot d\ \mathrm{mod}\ n\end{aligned}\label{eq:congrel} The following is another useful property: if $a\equiv b\ \mathrm{mod}\ n$ and $c$ is an integer then $$\label{eq:congrel2}c\cdot a\equiv c\cdot b\ \mathrm{mod}\ n$$ Its proof is straightforward so it is left as an exercise. The properties \eqref{eq:congrel} and \eqref{eq:congrel2} can be used to calculate the remainder when a significantly large integer is divided by an integer $n$.
Example. What is $N\ \mathrm{mod}\ 21$ where $$N=113\times (167+484)+192\times 145?$$
Solution. $N=113\times (167+484)+192\times 145=101403$, which is not really a large number. One can simply divide 101403 by 21 and find the remainder 15. Let us now try a different way. Note that $113\equiv 8\ \mathrm{mod}\ 21$, $167\equiv 20\ \mathrm{mod}\ 21$, $484\equiv\ 1\mathrm{mod}\ 21$, $192\equiv 3\ \mathrm{mod}\ 21$, and $145\equiv 19\ \mathrm{mod}\ 21$. Hence by using \eqref{eq:congrel} we have $$N\equiv 8\times (20+1)+3\times 19=225\ \mathrm{mod}\ 21$$ 225 is a much smaller number and one can easily divide it by 21 and obtain the remainder 15. On the other hand, $21\equiv 0\ \mathrm{mod}\ 21$ and $19\equiv -2\ \mathrm{mod}\ 21$ since $19=21-2$. So by using the properties \eqref{eq:congrel} and \eqref{eq:congrel2}, we have $$N\equiv 8\times 0+3\times (-2)=-6\equiv 15\ \mathrm{mod}\ 21$$ since $-6=21(-1)+15$.
Example. What is $10^3\ \mathrm{mod}\ 7$?
Solution. $1000$ is not a large number and one can easily divide it by 7 to obtain $1000=142\cdot 7+6$ and so $10^3\equiv 6\ \mathrm{mod}\ 7$. There is a smarter way to do this though. Note that $10=7\cdot 1+3$, so $10\equiv 3\ \mathrm{mod}\ 7$. Using \eqref{eq:congrel}, we obtain $$10^3\equiv 3\cdot 3\cdot 3=27\equiv 6\ \mathrm{mod}\ 7$$
The remainder when a large number is divided by an integer sometimes can be calculated using repeated squaring as seen in the following example.
Example. Find $3^{25}\ \mathrm{mod}\ 7$.
Solution. \begin{align*}3^{25}&=(3^{12})^2\cdot 3\\&=((3^6)^2)^2\cdot 3\\&=(((3^3)^2)^2)^2\cdot 3\\&=(((3^2\cdot 3)^2)^2)^2\cdot 3\\&\equiv (((2\cdot 3)^2)^2)^2\cdot 3\\&=((36)^2)^2\cdot 3\\&\equiv 3\ \mathrm{mod}\ 7\end{align*} where we use the fact $36\equiv 1\ \mathrm{mod}\ 7$.
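The repeated-squaring idea is exactly what fast modular exponentiation implements. A minimal sketch (for arbitrary precision, java.math.BigInteger.modPow provides the same operation):

```java
public class ModPow {
    /** (base^exp) mod m by right-to-left repeated squaring.
        Assumes base >= 0 and m small enough that (m-1)^2 fits in a long. */
    static long modPow(long base, long exp, long m) {
        long result = 1 % m;
        base %= m;
        while (exp > 0) {
            if ((exp & 1) == 1) result = (result * base) % m; // multiply in the current bit
            base = (base * base) % m;                         // square for the next bit
            exp >>= 1;
        }
        return result;
    }

    public static void main(String[] args) {
        System.out.println(modPow(3, 25, 7)); // 3, matching the computation above
    }
}
```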
Definition. $[y]$ is said to be a multiplicative inverse of $[x]$ in $\mathbb{Z}_n$ if $x\cdot y\equiv 1\ \mathrm{mod}\ n$.
It is not necessarily the case that every non-zero element of $\mathbb{Z}_n$ has a multiplicative inverse. For example, let us consider the multiplication table of $\mathbb{Z}_4$. Here, we denoted $[a]$ by simply $a$. $$\begin{array}{|c|c|c|c|c|}\hline\cdot & 0 & 1 & 2 & 3\\\hline 0 & 0 & 0 & 0 &0\\\hline1 & 0 & 1 & 2 & 3\\\hline2 & 0 & 2 & 0 & 2\\\hline3 & 0 &3 & 2 &1\\\hline\end{array}$$ As clearly seen in the table, 2 does not have a multiplicative inverse (its row contains no 1), while 3 is its own inverse since $3\cdot 3=9\equiv 1\ \mathrm{mod}\ 4$. On the other hand, in $\mathbb{Z}_5$ all non-zero elements have multiplicative inverses as shown in the following table. $$\begin{array}{|c|c|c|c|c|c|}\hline\cdot & 0 & 1 & 2 & 3 & 4\\\hline 0 & 0 & 0 & 0 & 0 & 0\\\hline 1 & 0 & 1 & 2 & 3 & 4\\\hline 2 & 0 & 2 & 4 & 1 & 3\\\hline 3 & 0 &3 & 1 & 4 & 2\\\hline 4 & 0 & 4 & 3 & 2 & 1\\\hline\end{array}$$ Notice that 5 is a prime number. In fact, we have the following cool theorem.
Theorem. If $p$ is prime, then every non-zero element of $\mathbb{Z}_p$ has a multiplicative inverse. Therefore, $\mathbb{Z}_p$ is a finite field.
Proof. Let $a$ be an integer such that $1\leq a\leq p-1$. Then clearly $$\mathbb{Z}_p[a]=\{[0],[1\cdot a],\cdots,[(p-1)\cdot a]\}\subseteq\mathbb{Z}_p$$ If $[0],[1\cdot a],\cdots,[(p-1)\cdot a]$ are mutually distinct, then $\mathbb{Z}_p[a]=\mathbb{Z}_p$ and therefore $[i]\cdot[a]=[i\cdot a]=[1]$ for some $1\leq i\leq p-1$. That is, $[a]$ has a multiplicative inverse. So the proof of this theorem boils down to showing that $\mathbb{Z}_p[a]$ has $p$ distinct elements for any $1\leq a\leq p-1$. This is a consequence of the lemma below.
Lemma. If $p$ is a prime number such that $p\not| a$ and $i\cdot a\equiv j\cdot a\ \mathrm{mod}\ p$ then $i\equiv j\ \mathrm{mod}\ p$.
Proof. If $i\cdot a\equiv j\cdot a\ \mathrm{mod}\ p$ then $p|a(i-j)$. Since $p\not| a$, $p|i-j$ i.e. $i\equiv j\ \mathrm{mod}\ p$.
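For a concrete computation (not part of the proof above): when $p$ is prime, Fermat's little theorem, which is not proved in this post, gives $a^{p-1}\equiv 1\ \mathrm{mod}\ p$, so $a^{p-2}$ is the multiplicative inverse of $a$. A sketch reusing the modPow method from the earlier example:

```java
public class ModInverse {
    static long modPow(long base, long exp, long m) { // as in the ModPow example
        long result = 1 % m; base %= m;
        while (exp > 0) {
            if ((exp & 1) == 1) result = (result * base) % m;
            base = (base * base) % m; exp >>= 1;
        }
        return result;
    }

    /** Inverse of a modulo a prime p, assuming p does not divide a. */
    static long inverseModPrime(long a, long p) {
        return modPow(a, p - 2, p); // a^(p-2) * a = a^(p-1) = 1 (mod p)
    }

    public static void main(String[] args) {
        // 2, matching the Z_5 table above: 3 * 2 = 6 = 1 (mod 5)
        System.out.println(inverseModPrime(3, 5));
    }
}
```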
References.
[1] Essential Discrete Mathematics for Computer Science, Harry Lewis and Rachel Zax, Princeton University Press, 2019
[2] A Short Course in Discrete Mathematics, Edward A. Bender and S. Gil Williamson, Dover Publications, 2012
# Equivalence Relations
Definition. Let $A$ be a nonempty set. Then $R\subseteq A\times A$ is called a binary relation on $A$. A binary relation $R$ on $A$ is said to be an equivalence relation if
1. $R$ is reflexive: $\forall x\in A$, $(x,x)\in R$
2. $R$ is symmetric: $(x,y)\in R\Longrightarrow (y,x)\in R$
3. $R$ is transitive: $(x,y)\in R\ \mathrm{and}\ (y,z)\in R\Longrightarrow (x,z)\in R$
Definition. Let $R$ be an equivalence relation on a set $A$. For each $x\in A$, let $$[x]=\{y\in A: (y,x)\in R\}$$ Then $[x]$ is called the ($R$-)equivalence class of $x\in A$.
Theorem 1. Let $R$ be an equivalence relation on a set $A$. Then
1. $\forall x,y\in A$, $[x]\cap [y]=\emptyset$ or $[x]=[y]$.
2. $A=\bigcup_{x\in A}[x]$.
Proof.
1. Let $x,y\in A$. If $[x]\cap [y]=\emptyset$, then we are done. Now suppose that $[x]\cap [y]\ne\emptyset$. Then $\exists z\in[x]\cap [y]$ so that \begin{align*}(z,x)\in R\ \mathrm{and}\ (z,y)\in R&\Longrightarrow (x,z)\in R\ \mathrm{and}\ (z,y)\in R\\&\Longrightarrow (x,y)\in R\end{align*} This implies that $[x]=[y]$: if $w\in[x]$, then $(w,x)\in R$ and $(x,y)\in R$, so $(w,y)\in R$, i.e. $w\in[y]$; the inclusion $[y]\subseteq[x]$ follows by symmetry.
2. Left as an exercise.
Definition. Let $A$ be a set. By a partition of $A$ we mean a family $\{A_i\}_{i\in I}$of nonempty subsets of $A$ such that
1. $\forall i,j\in I$, $A_i\cap A_j=\emptyset$ or $A_i=A_j$.
2. $A=\bigcup_{i\in I}A_i$
Theorem 1 says that given an equivalence relation $R$ on $A$, the $R$-equivalence classes form a partition of $A$. Conversely, given a partition $\{A_i\}_{i\in I}$, one can define an equivalence relation $R$ whose equivalence classes coincide with the $A_i$´s.
Theorem 2. Let $\{A_i\}_{i\in I}$ be a partition of a set $A$. Define a relation $R$ on $A$ as follows: $$\forall x,y\in A,\ (x,y)\in R\Longleftrightarrow x,y\in A_i\ \mbox{for some}\ i\in I$$ Then $R$ is an equivalence relation on $A$ and the $R$-equivalence classes coincide with the $A_i$´s.
Proof. Left as an exercise.
Given an equivalence relation $R$ on a set $A$, the set of all equivalence classes is called the quotient set of $A$ modulo (or mod in short) $R$ and is denoted by $A/R$ i.e. $$A/R=\{[x]: x\in A\}$$
There is a map $\gamma: A\longrightarrow A/R$ defined by $$\forall x\in A,\ \gamma(x)=[x]$$ $\gamma$ is called the canonical map from $A$ to $A/R$.
Example 1. (The Vector Space $\mathbb{R}^3$) Let $V$ be the set of all directed arrows in 3-space $\mathbb{R}^3$. Define a relation $\equiv$ on $V$ as follows: $$\forall \overrightarrow{AB},\overrightarrow{CD}\in V,\ \overrightarrow{AB}\equiv\overrightarrow{CD}\Longleftrightarrow\ \overrightarrow{AB}\ \mathrm{and}\ \overrightarrow{CD}\ \mbox{have the same direction and magnitude}$$ Then clearly $\equiv$ is an equivalence relation on $V$. Denote by $\mathcal{V}$ the quotient set $V/\equiv$; each equivalence class $[\overrightarrow{AB}]\in\mathcal{V}$ is called a vector.

Each directed arrow $\overrightarrow{AB}$ whose starting point is $A$ and terminal point is $B$ is $\equiv$-related to a directed arrow $\vec{a}$ whose starting point is the origin $O=(0,0,0)$ and terminal point is $B-A$. $\vec{a}$ can be identified with its terminal point $B-A$, which is an ordered triple $(a_1,a_2,a_3)$. Hence from here on, without loss of generality, we may assume that each equivalence class is represented by such a vector.

Define the vector addition $+:\mathcal{V}\times\mathcal{V}\longrightarrow\mathcal{V}$ by $$\forall [\vec{a}],[\vec{b}]\in\mathcal{V},\ [\vec{a}]+[\vec{b}]:=[\vec{a}+\vec{b}]$$ Here, $$\vec{a}+\vec{b}=(a_1,a_2,a_3)+(b_1,b_2,b_3)=(a_1+b_1,a_2+b_2,a_3+b_3)$$ Also define the scalar multiplication $\cdot :\mathbb{R}\times\mathcal{V}\longrightarrow\mathcal{V}$ by $$\forall c\in\mathbb{R},\ \forall [\vec{a}]\in\mathcal{V},\ c[\vec{a}]:=[c\vec{a}]$$ Here, $$c\vec{a}=(ca_1,ca_2,ca_3)$$

Let $[\vec{a}]=[\vec{a}’]$ and $[\vec{b}]=[\vec{b}’]$. Then $\vec{a}=\vec{a}’$ and $\vec{b}=\vec{b}’$, i.e. $a_i=a’_i$ and $b_i=b’_i$, $i=1,2,3$. Thus $$\vec{a}+\vec{b}=\vec{a}’+\vec{b}’\Longrightarrow [\vec{a}]+[\vec{b}]=[\vec{a}+\vec{b}]=[\vec{a}’+\vec{b}’]=[\vec{a}’]+[\vec{b}’]$$ Hence, the vector addition is well-defined. It can be shown similarly that the scalar multiplication is also well-defined. We will see in the next example that there is a bijection from $\mathcal{V}$ to $\mathbb{R}^3$.
Example 2. (The Kernel of a Function) Let $f: A\longrightarrow B$ be a function. The kernel of $f$, $\ker f$, is defined by $$\ker f=\{(x,y)\in A\times A: f(x)=f(y)\}$$ It is easy to see that $\ker f$ is an equivalence relation on $A$. It turns out that the quotient set $A/\ker f$ coincides with the partition of $A$ by the pre-images $f^{-1}(z)$, $\forall z\in f(A)$. (See Problem Set 12 #5 (b).) Now we suppose that $f: A\longrightarrow B$ is surjective (onto). Define a map $\psi: A/\ker f\longrightarrow B$ by $$\forall [x]\in A/\ker f,\ \psi([x])=f(x)$$ Then $\psi$ is a bijection and $f=\psi\circ \gamma$, where $\gamma: A\longrightarrow A/\ker f$ is the canonical map. (Its proof is left as an exercise.)
In Example 1, we have seen that given a directed arrow $\overrightarrow{AB}$ there is a $\equiv$-related directed arrow whose starting point is the origin $O$ and terminal point is $B-A$, and that such a directed arrow can be identified with its terminal point $\vec{a}:=B-A=(a_1,a_2,a_3)\in\mathbb{R}^3$. This defines a surjective map $f: V\longrightarrow\mathbb{R}^3$ with $\ker f=\equiv$. Therefore, we see that there is a bijection $\psi$ from $V/\ker f=V/\equiv$ to $\mathbb{R}^3$ defined by $$\forall [\vec{a}]\in V/\equiv,\ \psi([\vec{a}])=(a_1,a_2,a_3)$$ Furthermore, this bijection is a vector space homomorphism, i.e. it preserves the vector addition and the scalar multiplication: \begin{align*}\psi([\vec{a}]+[\vec{b}])&=\psi([\vec{a}])+\psi([\vec{b}])\\\psi(c[\vec{a}])&=c\psi([\vec{a}])\end{align*} Therefore $V/\equiv$ is isomorphic to $\mathbb{R}^3$ as a vector space.
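Example 2 has a direct computational analogue: grouping a finite set by the value of $f$ produces exactly the $\ker f$-equivalence classes. A small sketch (the set and function are chosen purely for illustration):

```java
import java.util.List;
import java.util.Map;
import java.util.stream.Collectors;
import java.util.stream.IntStream;

public class KernelPartition {
    public static void main(String[] args) {
        // Partition A = {0,...,9} by ker f, where f(x) = x mod 3.
        Map<Integer, List<Integer>> classes = IntStream.range(0, 10).boxed()
                .collect(Collectors.groupingBy(x -> x % 3));
        // Each value list is one ker f-equivalence class; the key is the f-value,
        // mirroring the bijection psi : A/ker f -> f(A), psi([x]) = f(x).
        System.out.println(classes); // e.g. {0=[0, 3, 6, 9], 1=[1, 4, 7], 2=[2, 5, 8]}
    }
}
```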
Example 3. (The Construction of Rational Numbers From Integers) Let $\mathbb{Z}$ be the set of all integers. Define a relation $\sim$ on $\mathbb{Z}\times(\mathbb{Z}\setminus\{0\})$ by $$\forall (a,b),(c,d)\in \mathbb{Z}\times(\mathbb{Z}\setminus\{0\}),\ (a,b)\sim (c,d)\Longleftrightarrow ad=bc$$ Then $\sim$ is an equivalence relation. (Its proof is left as an exercise.) The equivalence class of $(a,b)$ is denoted by $\frac{a}{b}$. For example, $(1,2)\sim (2,4)$ thus $\frac{1}{2}=\frac{2}{4}$. We see that the quotient set $\mathbb{Z}\times(\mathbb{Z}\setminus\{0\})/\sim$ can be identified with $\mathbb{Q}$, the set of all rational numbers.
Example 4. (Residue Classes Modulo $n$) Let $\mathbb{Z}$ be the set of all integers. Define a relation $\equiv\ \mathrm{mod}\ n$ on $\mathbb{Z}$ by $$\forall p,q\in\mathbb{Z},\ p\equiv q\ \mathrm{mod}\ n\Longleftrightarrow n|(p-q)$$ Then $\equiv\ \mathrm{mod}\ n$ is an equivalence relation. (Its proof is left as an exercise.) An integer $m$ can be written as $m=nk+r$ where $k$ is the quotient and $0\leq r<n$ is the remainder when $m$ is divided by $n$. Thus $m\equiv r\ \mathrm{mod}\ n$. This means that there are exactly $n$ equivalence classes $$[0],[1],\cdots,[n-1]$$ These equivalence classes are called the residue classes modulo $n$. The quotient set $\mathbb{Z}/\equiv\ \mathrm{mod}\ n$ is usually denoted by $\mathbb{Z}_n$. We can define addition and multiplication on $\mathbb{Z}_n$ as follows: $\forall [a],[b]\in\mathbb{Z}_n$, \begin{align*}[a]+[b]&=[a+b]\\ [a]\cdot [b]&=[ab]\end{align*} These operations are well-defined because the equivalence relation $\equiv\ \mathrm{mod}\ n$ preserves the operations, namely if $a\equiv b\ \mathrm{mod}\ n$ and $c\equiv d\ \mathrm{mod}\ n$ then \begin{align*}a+c&\equiv b+d\ \mathrm{mod}\ n\\ ac&\equiv bd\ \mathrm{mod}\ n\end{align*} An equivalence relation which preserves operations is called a congruence relation. Congruence relations play a crucial role in constructing quotient algebras. For more details about congruence relations, see for example the reference [3]. Note that the equivalence relations in Examples 1 and 3 are also congruence relations. $(\mathbb{Z}_n,+)$ is an abelian group and $(\mathbb{Z}_n,+,\cdot)$ is a commutative ring with unity. $(\mathbb{Z}_n,+,\cdot)$ is a field if $n$ is a prime.
Example. (Digraphs) Let $G=(V,A)$ be a digraph. Let $\sim$ be the mutual reachability relation on $V$, i.e. $\forall x,y\in V$, $x\sim y$ if and only if $x$ and $y$ are mutually reachable. It can easily be seen that the mutual reachability relation is an equivalence relation. The equivalence classes are called the strongly connected components, or simply strong components, of the digraph $G$; each of them induces a subgraph of $G$.
Theorem. Let $G=(V,A)$ be a digraph and let $T$ be the partition of $V$ into the strong components of $G$. Construct a new digraph $G’=(T,A’)$, where $\forall X,Y\in T$, $X\to Y\in A’$ if and only if $X\ne Y$ and $x\to y\in A$ for some $x\in X$ and $y\in Y$. Then $G’$ is a DAG.
Proof. Suppose that there is a cycle in $G’$, $x_0\to x_1\to\cdots\to x_k\to x_0$, for some $x_0,\cdots,x_k$ belonging to different strong components of $G$. But all these vertices are mutually reachable in $G$, so this is a contradiction. Therefore $G’$ is acyclic.
References.
[1] Essential Discrete Mathematics for Computer Science, Harry Lewis and Rachel Zax, Princeton University Press, 2019
[2] Set Theory, Charles C. Pinter, Addison Wesley Publishing Company, 1971
[3] A Course in Universal Algebra, Stanley N. Burris and H. P. Sankappanavar, Graduate Texts in Mathematics, Springer-Verlag, 1981. The book is available online for free at the first named author’s web page here.
# Directed Graphs
Directed graphs represent binary relations. They can be visualized as diagrams made up of points (called vertices or nodes) and arrows (called arcs or edges). Draw an arc from a vertex $v$ to a vertex $w$ to represent that $v$ is related to $w$ i.e. the ordered pair $(v,w)$ is in the relation.
Example 1. An example of a directed graph
Definition. A directed graph or digraph in short is an ordered pair $(V,A)$ where $V$ is an nonempty set and $A\subseteq V\times V$ (i.e. $A$ is a binary relation on $V$). The members of $V$ are called vertices or nodes and the members of $A$ are called arcs or edges. We write arcs as $v\to w$ rather than $(v,w)$.
In Example 1, $V=\{a,b,c,d,e\}$ and \begin{align*}A&=\{(a,b), (b,c), (a,c), (c,c), (c,d), (b,d),(d,b)\}\\&=\{a\to b, b\to c, a\to c, c\to c, c\to d, b\to d, d\to b\}\end{align*}
Transportation and computer networks have natural representations of digraphs.
A walk in a digraph is a way of proceeding through a sequence of vertices by following arcs i.e. a walk in a digraph $(V,A)$ is a sequence of vertices $v_0,v_1,\cdots,v_n\in V$ for some $n\geq 0$ such that $v_i\to v_{i+1}\in A$ for each $i<n$. The length of this walk is $n$ which is the number of arcs.
Example 2. In Example 1, $b\to d$, $b\to c\to d$, $b\to c\to c\to d$, $b\to d\to b\to d$ are examples of walks from vertex $b$ to vertex $d$. The length of $b\to d$ is 1, the length of $b\to c\to d$ is 2, the length of $b\to c\to c\to d$ is 3 and the length of $b\to d\to b\to d$ is also 3. There is no walk from vertex $b$ to vertex $a$.
A path is a walk that doesn’t repeat any vertex.
Example 3. Among the walks in Example 2, $b\to d$ and $b\to c\to d$ are paths from vertex $b$ to $d$.
A walk in which the first and the last vertex are the same is called a circuit. A circuit is called a cycle if the first and the last vertices are the only repeated vertex. For example, $b\to c\to c\to d\to b$ in the digraph in Example 1 is a circuit but is not a cycle (the vertex $c$ is repeated). On the other hand, $b\to c\to d\to b$ is a cycle of length 3, $b\to d\to b$ is a cycle of length 2, and $c\to c$ is a cycle of length 1.
A digraph without any cycles is said to be acyclic. The digraph in Exam 1 is not acyclic as it contains cycles.
A walk can be reduced to a path by removing nontrivial cycles. Suppose that a walk from $v$ to $w$ $$v=v_0\to\cdots\to v_n=w$$ includes a cycle $v_i\to v_{i+1}\to\cdots\to v_j$ where $i<j$ and $v_i=v_j$. Then $$v=v_0\to\cdots\to v_i\to v_{j+1}\to\cdots\to v_n=w$$ is a shorter walk from $v$ to $w$.
For example, the walk $b\to d\to b\to d$ in Example 1 can be reduced to the path $b\to d$ by removing the cycle $b\to d\to b$.
A vertex $w$ is said to be reachable from vertex $v$ if there is a walk or a path from $v$ to $w$. The distance from vertex $v$ to vertex $w$ in a digraph $G$, denoted by $d_G(v,w)$, is the length of the shortest path from $v$ to $w$, or is defined to be $\infty$ if there is no path from $v$ to $w$. For example, the distance from $a$ to $d$ in the digraph in Example 1 is 2 because the shortest path is $a\to c\to d$.
Lemma. The distance from one vertex of a graph to another vertex is at most the length of any walk from the first to the second.
Proof. It follows from the fact that any walk from $v$ to $w$ includes among its arcs a path from $v$ to $w$.
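Distances in a digraph are computed by breadth-first search, which explores vertices in order of increasing distance. A sketch for the digraph of Example 1 (using $-1$ to stand in for $\infty$; the representation is my own choice):

```java
import java.util.*;

public class Distance {
    /** d_G(v, w): length of a shortest path from v to w, or -1 if there is none. */
    static int distance(Map<Character, List<Character>> adj, char v, char w) {
        Map<Character, Integer> dist = new HashMap<>();
        Deque<Character> queue = new ArrayDeque<>(); // FIFO queue for BFS
        dist.put(v, 0);
        queue.add(v);
        while (!queue.isEmpty()) {
            char u = queue.remove();
            if (u == w) return dist.get(u);
            for (char x : adj.getOrDefault(u, List.of()))
                if (!dist.containsKey(x)) {          // first visit = shortest distance
                    dist.put(x, dist.get(u) + 1);
                    queue.add(x);
                }
        }
        return -1;
    }

    public static void main(String[] args) {
        // The digraph of Example 1 (vertex e has no arcs).
        Map<Character, List<Character>> adj = Map.of(
            'a', List.of('b', 'c'),
            'b', List.of('c', 'd'),
            'c', List.of('c', 'd'),
            'd', List.of('b'));
        System.out.println(distance(adj, 'a', 'd')); // 2, via a -> c -> d
        System.out.println(distance(adj, 'b', 'a')); // -1: a is unreachable from b
    }
}
```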
A digraph in which every vertex is reachable from every other vertex is said to be strongly connected.
Let $G=(V,A)$ be a digraph. Let $V’\subseteq V$ and $A’\subseteq A$. Then $(V’,A’)$ is called a subgraph of $G$. $(V,\emptyset)$ is a subgraph of $G$.
Let $G=(V,A)$ be a digraph and $V’\subset V$. Then $$(V’,\{v\to w\in A: v,w\in V’\})$$ is called the subgraph induced by $V’$.
Example 4. The subgraph induced by $V\setminus\{e\}$ in Example 1 is not strongly connected. But the subgraph induced by $\{b,c,d\}$ is strongly connected.
An acyclic digraph is generally called a directed acyclic graph, or DAG in short.
The out-degree of a vertex $v$ is the number of arcs leaving it, i.e. $|\{w\in V: v\to w\in A\}|$. Similarly, the in-degree of a vertex is the number of arcs entering it.
Theorem. A finite DAG has at least one vertex of out-degree 0 and at least one vertex of in-degree 0.
Proof. Let $G=(V,A)$ be a finite DAG. Suppose that $G$ has no vertex of out-degree 0.
Pick a vertex $v_0$. Since $v_0$ has a positive out-degree, there exists an arc $v_0\to v_1$ for some vertex $v_1$. Since $v_1$ has a positive out-degree, there exists an arc $v_1\to v_2$ for some vertex $v_2$. One can continue doing this. But since $V$ is finite, some vertex will be repeated, creating a cycle. This is a contradiction to the graph $G$ being acyclic. A very similar argument can be made to show that $G$ has a vertex of in-degree 0.
In a DAG, a vertex of in-degree 0 is called a source and a vertex of out-degree 0 is called a sink.
A tournament graph
A tournament graph is a digraph in which every pair of distinct vertices is connected by an arc in one direction or the other, but not both. It is a natural representation of a round-robin tournament in which each player competes with all other players in turn.
Example 5. The tournament graph in Figure 6 shows that $H$ beats both $P$ and $Y$, $Q$ beats $H$, $Y$ beats $P$, and $P$ beats $Q$. Hence we have a cycle $H\to P\to Q\to H$. Awkwardly, there is no champion!
Example 6. The tournament graph in Figure 7 shows that $H$ beats $P$, $Y$, and $D$; $Y$ beats $P$ and $D$; and $P$ beats $D$. Hence, $H$ is in first place, $Y$ second, $P$ third, and $D$ fourth.
The total number of arcs in a tournament graph with $n$ vertices is $\frac{n(n-1)}{2}$, since there is exactly one arc between each pair of distinct vertices.
A linear order, denoted by $\preceq$, is a binary relation on a finite set $S=\{s_0,s_1,\cdots,s_n\}$ such that $s_i\preceq s_j$ if and only if $i\leq j$.
Example 7. Let $S=\{\mbox{all English words}\}$. Define $s_i\preceq s_j$ to mean that $s_i$ appears before $s_j$ alphabetically or they are equal. Then $\preceq$ is a linear order. This particular linear order is called the lexicographic order or the dictionary order.
A strict order of a finite set, denoted by $\prec$, is an ordering such that $s_i\prec s_j$ if and only if $i<j$.
Theorem. A tournament graph represents a strict linear order if and only if it is a DAG.
Proof. Let $G=(V,A)$ be a tournament graph. Suppose that $G$ represents a strict order. This means that the path $$v_0\to v_1\to\cdots\to v_n$$ represents $$v_0\prec v_1\prec\cdots\prec v_n$$ so every arc goes from $v_i$ to $v_j$ if $i<j$. The graph is then acyclic because any cycle would have to include at least one arc $v_j\to v_i$ where $i<j$. Conversely suppose $G$ is a DAG. We show by induction that $G$ represents a strict order. If $G$ has only one vertex, clearly $G$ represents a strict order. If $G$ has more than one vertex, at least one of the vertices, say $v_0$, has in-degree 0. Since $G$ is a tournament graph, $G$ must contain each of the arcs $v_0\to v_i$ ($i\ne 0$). Consider the subgraph induced by $V\setminus\{v_0\}$. This induced subgraph is also a DAG, since all its arcs are arcs of $G$. It is also a tournament graph. The subgraph includes all the arcs of $G$ except those leaving $v_0$. By the induction hypothesis, the induced subgraph represents a strict linear order $$v_1\prec v_2\prec\cdots\prec v_n$$ Since $v_0\prec v_i$ for all $i\ne 0$, we have $$v_0\prec v_1\prec\cdots\prec v_n$$ This completes the proof.
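The induction in this proof is an algorithm: repeatedly remove a vertex of in-degree 0 and list it. A sketch applied to the tournament of Example 6 (the vertex numbering is mine):

```java
import java.util.*;

public class StrictOrder {
    /** Orders the vertices of a DAG by repeatedly removing a vertex of in-degree 0,
        following the inductive proof above. arc[i][j] == true iff i -> j. */
    static List<Integer> linearize(boolean[][] arc) {
        int n = arc.length;
        int[] inDeg = new int[n];
        for (int i = 0; i < n; i++)
            for (int j = 0; j < n; j++)
                if (arc[i][j]) inDeg[j]++;
        List<Integer> order = new ArrayList<>();
        boolean[] removed = new boolean[n];
        for (int step = 0; step < n; step++) {
            int v = -1;
            for (int i = 0; i < n; i++)
                if (!removed[i] && inDeg[i] == 0) { v = i; break; }
            if (v < 0) throw new IllegalArgumentException("graph has a cycle");
            order.add(v);
            removed[v] = true;
            for (int j = 0; j < n; j++)
                if (arc[v][j]) inDeg[j]--;   // delete v's outgoing arcs
        }
        return order;
    }

    public static void main(String[] args) {
        // Example 6 tournament: H=0, Y=1, P=2, D=3; winner -> loser.
        boolean[][] arc = new boolean[4][4];
        arc[0][1] = arc[0][2] = arc[0][3] = true;  // H beats Y, P, D
        arc[1][2] = arc[1][3] = true;              // Y beats P, D
        arc[2][3] = true;                          // P beats D
        System.out.println(linearize(arc));        // [0, 1, 2, 3]: H < Y < P < D
    }
}
```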
Example 8. The tournament graph in Example 6 is a DAG and it represents the strict order $H\prec Y\prec P\prec D$, while the tournament graph in Example 5 does not. (It is not a DAG.)
References.
[1] Essential Discrete Mathematics for Computer Science, Harry Lewis and Rachel Zax, Princeton University Press, 2019
# Quantificational Logic
Quantificational logic is an extension of propositional logic. It is the logic of expressions such as "for any", "for all", "there is some", and "there is exactly one". Quantificational logic is also called first-order logic, predicate logic, or predicate calculus. The universal quantifier $\forall$ means "for all", "for each", "for any", or "for every". The existential quantifier $\exists$ means "there exists".
Example. $\forall x\exists y P(x,y)$
Quantificational formulae such as the one in the above example are merely strings of symbols which do not mean anything (i.e. being true or false) as logical statements until they are accompanied by proper interpretations. An interpretation of a quantificational formula has to specify the following.
1. The universe $U$, the nonempty set from which the values of the variables ($x$, $y$, $z$, etc.) are drawn.
2. For each, say $k$-ary, predicate symbol $P$, which $k$-tuples of members of $U$ the predicate is true of, and
3. What elements of the universe correspond to any constant symbols, and what functions from the universe to itself correspond to function symbols mentioned in the formula.
Example. Let $U=\{0,1\}$ and $P$ be the less-than relation i.e. $P(x,y)$ is “$x$ is less than $y$.” Then
• $P(0,0)$ is false
• $P(0,1)$ is true
• $P(1,0)$ is false
• $P(1,1)$ is false
The quantificational formula $\forall x\exists y P(x,y)$ is false because there is no value of $y$ in the universe for which $P(x,y)$ is true when $x=1$.
Example. Let $U=\{0,1\}$ and $P$ be the not-equal relation i.e. $P(x,y)$ is “$x$ is not equal to $y$.” Then
• $P(0,0)$ is false
• $P(0,1)$ is true
• $P(1,0)$ is true
• $P(1,1)$ is false
Hence $\forall x\exists y P(x,y)$ is true.
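Over a finite universe a quantified formula can be evaluated by brute force: $\forall$ becomes `all` and $\exists$ becomes `any`. A Python sketch of the two examples above (my own illustration, not from the text):

```python
U = [0, 1]

def less_than(x, y):   # first interpretation: P is the less-than relation
    return x < y

def not_equal(x, y):   # second interpretation: P is the not-equal relation
    return x != y

# Evaluate the formula  forall x exists y P(x, y)  under each interpretation.
print(all(any(less_than(x, y) for y in U) for x in U))  # False
print(all(any(not_equal(x, y) for y in U) for x in U))  # True
```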
In general, the universe of an interpretation may be an infinite set, so it is impossible to list the values of a predicate for every combination of elements. As a remedy, we restate the definition in terms of relations by saying that an interpretation of a quantificational formula consists of
1. a nonempty set called the universe
2. for each $k$-place predicate symbol, a $k$-ary relation on the universe
3. for each $k$-place function symbol, a $k$-ary function from the universe to itself.
Example. Let $U=\mathbb{N}$, the set of natural numbers and $P$ be the less-than relation. Then the formula $\forall x\exists y P(x,y)$ is true.
Example. Let $U=\mathbb{N}$ and $P$ be the greater-than relation. Then the formula $\forall x\exists y P(x,y)$ is false.
Example. An example of a formula involving a function. Let $U=\mathbb{N}$ and consider the formula $\forall x\exists y(x+y=0)$. The constant symbol 0 is interpreted as zero and the binary function symbol $+$ represents addition. The formula is false: for example, when $x=1$, there is no value of $y$ in the universe such that $x+y=0$. If $U=\mathbb{Z}$, the set of integers, however, the formula is true.
Two formulae are equivalent if they have the same truth value under every interpretation.
Example. $\forall x\exists y P(x,y)$ and $\forall y\exists x P(y,x)$ are equivalent.
If two formulae $F$ and $G$ are equivalent, we write $F\equiv G$.
A model of a formula is an interpretation in which it is true. A satisfiable formula of quantificational logic is one that has a model. A valid formula, also called theorem, is a formula that is true under every interpretation. Valid formulae are the quantificational analogs of tautologies in propositional logic.
Example. $\forall x(P(x)\wedge Q(x))\Longrightarrow\forall y P(y)$ is a valid formula. $\forall x P(x)\wedge\exists y\neg P(y)$ is unsatisfiable.
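Over a finite universe one can even test satisfiability by enumerating every possible interpretation of the predicate symbols. The Python sketch below (my own illustration) confirms that $\forall x P(x)\wedge\exists y\neg P(y)$ has no model over the two-element universe $\{0,1\}$. Note that a brute-force search over one finite universe cannot by itself rule out models over other universes, although for this particular formula the same argument works for any $U$.

```python
from itertools import product

U = [0, 1]

def forall(pred): return all(pred(x) for x in U)
def exists(pred): return any(pred(x) for x in U)

# Each interpretation of a unary predicate P over U is a truth table.
found_model = False
for bits in product([False, True], repeat=len(U)):
    table = dict(zip(U, bits))
    P = table.__getitem__
    if forall(P) and exists(lambda y: not P(y)):
        found_model = True

print(found_model)  # False: the formula has no model over this universe
```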
Note. $\exists x (H(x)\wedge B(x))\not\equiv\exists x H(x)\wedge\exists x B(x)$ and $\exists x (H(x)\wedge B(x))\not\equiv\exists x H(x)\wedge B(x)$. The $x$ in $B(x)$ in the second formula is a free variable, i.e.

$$\exists x H(x)\wedge B(x)\equiv(\exists x H(x))\wedge B(x)$$

The laws we learned in propositional logic carry over. For example, the Distributive Laws:

\begin{align}\forall x(P(x)\wedge(Q(x)\vee R(x)))&\equiv\forall x((P(x)\wedge Q(x))\vee(P(x)\wedge R(x)))\label{eq:distlaw1}\\\forall x(P(x)\vee(Q(x)\wedge R(x)))&\equiv\forall x((P(x)\vee Q(x))\wedge(P(x)\vee R(x)))\label{eq:distlaw2}\end{align}

Quantificational Equivalence Rule 1 (Proposition Substitutions). Suppose $F$ and $G$ are quantificational formulae and $F'$ and $G'$ are propositional formulae that result from $F$ and $G$, respectively, by replacing each subformula by a corresponding propositional variable at all of its occurrences in both $F$ and $G$. Suppose $F'\equiv G'$ as formulae of propositional logic. Then replacing $F$ by $G$ in any formula results in an equivalent formula.

Example. $\forall x\neg\neg P(x)\equiv\forall x P(x)$, since $p\equiv\neg\neg p$ and so $\neg\neg P(x)$ can be replaced by $P(x)$.

Example. Replacing $P(x)$, $Q(x)$ and $R(x)$ by $p$, $q$ and $r$, respectively, turns $P(x)\wedge(Q(x)\vee R(x))$ into $p\wedge(q\vee r)$ and $(P(x)\wedge Q(x))\vee(P(x)\wedge R(x))$ into $(p\wedge q)\vee(p\wedge r)$. Since $p\wedge(q\vee r)\equiv(p\wedge q)\vee(p\wedge r)$, $P(x)\wedge(Q(x)\vee R(x))$ can be replaced by $(P(x)\wedge Q(x))\vee(P(x)\wedge R(x))$. Hence we have the equivalence in \eqref{eq:distlaw1}.

Quantificational Equivalence Rule 2 (Change of Variables). Let $F$ be a formula containing a subformula $\Box x\, G$, where $\Box$ is either $\forall$ or $\exists$. Assume $G$ has no bound occurrence of $x$ and let $G'$ be the result of replacing $x$ by $y$ everywhere in $G$. Then replacing $\Box x\, G$ by $\Box y\, G'$ within the formula results in an equivalent formula.

Example. $\exists x (H(x)\wedge B(x))\equiv\exists y (H(y)\wedge B(y))$

Quantificational Equivalence Rule 3 (Quantifier Negation).

\begin{align*}\neg\forall x F&\equiv\exists x\neg F\\\neg\exists x F&\equiv\forall x\neg F\end{align*}

Quantificational Equivalence Rule 4 (Scope Change). Suppose the variable $x$ does not appear in $G$. Let $\Box$ denote either $\forall$ or $\exists$, and let $\diamond$ denote either $\vee$ or $\wedge$. Then

\begin{align*}(\Box x F\diamond G)&\equiv\Box x(F\diamond G)\\(G\diamond\Box x F)&\equiv\Box x(G\diamond F)\end{align*}

Example. Let $r$ be "it rains", $\mathrm{outside}(x)$ "$x$ is outside" and $\mathrm{wet}(x)$ "$x$ gets wet". Then "If it is raining, then anything that is outside will get wet" is written as the quantificational formula

$$r\Longrightarrow\forall x(\mathrm{outside}(x)\Longrightarrow\mathrm{wet}(x))$$

This is equivalent to

$$\forall x(r\Longrightarrow(\mathrm{outside}(x)\Longrightarrow\mathrm{wet}(x)))$$

as a consequence of scope change. The transformed formula says, in plain English, "Any object, if it is raining, will get wet if it is outside," which sounds less natural than the original statement.

Example. As a consequence of scope change we obtain

\begin{align*}(\forall x P(x)\vee\exists y Q(y))&\equiv\forall x\exists y (P(x)\vee Q(y))\\&\equiv\exists y\forall x(P(x)\vee Q(y))\end{align*}

Example. For $(\forall x P(x)\vee\exists x Q(x))$, neither quantifier can be moved out because the quantified variable $x$ appears in both subformulae. However, one can first use the change of variables rule to turn it into $(\forall x P(x)\vee\exists y Q(y))$, which is handled in the previous example.
Through repeated application of the quantificational equivalence rules, all quantifiers can be pulled out to the beginning of the formula. Such a formula is said to be in prenex normal form.

Example. Let $L(x,y)$ be "$x$ loves $y$". Then the statement "everyone has a unique beloved" can be written as the quantificational formula

$$\forall x\exists y (L(x,y)\wedge\forall z (L(x,z)\Longrightarrow y=z))$$

It can be transformed to the formula in prenex form

$$\forall x\exists y\forall z(L(x,y)\wedge(L(x,z)\Longrightarrow y=z))$$

Example. Translate into quantificational logic and put into prenex form: "If there are any ants, then one of them is the queen."

Solution. Let $A(x)$ be "$x$ is an ant" and $Q(x)$ "$x$ is a queen." Then the statement can be written as the quantificational formula

$$\exists x A(x)\Longrightarrow\exists y (A(y)\wedge Q(y)\wedge\forall z (Q(z)\Longrightarrow z=y))$$

Its direct translation is "If there exists an $x$ such that $x$ is an ant, then there exists a $y$ such that $y$ is an ant and $y$ is a queen and any $z$ that is a queen is equal to $y$." The formula can be transformed to

\begin{align*}\neg\exists x A(x)\vee\exists y(A(y)\wedge Q(y)\wedge\forall z (Q(z)\Longrightarrow z=y))&\equiv\forall x\neg A(x)\vee\exists y(A(y)\wedge Q(y)\wedge\forall z (Q(z)\Longrightarrow z=y))\\&\equiv\forall x\exists y\forall z(\neg A(x)\vee(A(y)\wedge Q(y)\wedge(Q(z)\Longrightarrow z=y)))\end{align*}

References.

[1] Essential Discrete Mathematics for Computer Science, Harry Lewis and Rachel Zax, Princeton University Press, 2019

# Logic and Computer

Computers are built on (propositional) logic. The reason for the parentheses around "propositional" is that propositional logic is not the only choice of logic for computers. There are ternary computers based on 3-valued logic. (In 3-valued logic a statement can be either true, false, or neither.) In principle, you can build computers out of any $n$-valued logic. There are also computers based upon infinite-valued logic. Examples of such computers include fuzzy computers (based upon fuzzy logic) and quantum computers (based upon the laws of the quantum world!). There are many interesting things to say about these unconventional computers, but our discussion here is about conventional computers, which are based upon propositional logic.

Computers carry out logical calculations. Arithmetic operations such as adding two numbers are in fact logical calculations, as we will see later. Computers (conventional binary computers, to be clear) can understand only the bits 0 and 1, physically realized by a switch being off (no current) or on (current). Hence everything in a computer is represented as patterns of 0s and 1s. 0 and 1 can be understood as the two truth values, true and false, of propositional logic. For example, 0 could represent true while 1 represents false. Using propositional logic one can design physical devices, called logic circuits, that produce intended results as bits (output) from bits (input). The design of logic circuits is called computer logic.

The smallest units of a circuit are called logic gates, or gates in short, i.e. logic gates are the building blocks of a logic circuit. Gates correspond to the operations of propositional logic, such as $\vee$, $\wedge$, $\oplus$, and $\neg$.

Example. The diagram of Or Gate.

Example. The diagram of And Gate.

Example. The diagram of Exclusive Or Gate.

Example. The diagram of Not Gate.

In practice, however, not all these operations are available to a circuit designer.
For example, exclusive or can be performed using $\vee$, $\wedge$ and $\neg$, as we have seen here:

$$x\oplus y\equiv(x\wedge\neg y)\vee(\neg x\wedge y)$$

The following figure shows a logic circuit that computes $x\oplus y$. The semicircle on a wire represents that the wire is crossing over another, i.e. they are not connected. A not gate is often simply represented by a bubble. With this convention, the above circuit can be simplified as in the following figure.

There is a very useful gate called the NAND (NOT-AND) Gate. It is defined by the nand operator

$$p|q\equiv\neg(p\wedge q)$$

What's interesting about the nand operator is that any conventional operator can be expressed using only the nand operator. For example,

\begin{align*}\neg p&\equiv p|p\\p\wedge q&\equiv\neg(p|q)\equiv(p|q)|(p|q)\end{align*}

Example. The diagram of NAND Gate.

Binary Arithmetic

The basic rules of binary arithmetic:

$$\begin{aligned}0+0&=0\\0+1&=1\\1+0&=1\\1+1&=0,\ \mbox{carry}\ 1\end{aligned}$$

As shown, when adding 1 to 1, the result is 0 but a 1 is carried into the next position to the left. For example, adding 1 to 10111 results in 11000 with three carries.

$$\begin{array}{cccccc}& & 1 & 1 & 1 & \\& 1 & 0 & 1 & 1 & 1\\+& & & & & 1\\\hline & 1 & 1 & 0 & 0 & 0\end{array}$$

Here is another example, 10101+01111:

$$\begin{array}{ccccccc}& & 1 & 1 & 1 & 1 & \\& & 1 & 0 & 1 & 0 & 1\\+& & 0 & 1 & 1 & 1 & 1\\\hline & 1 & 0 & 0 & 1 & 0 & 0\end{array}$$

The question is how do we design hardware to do binary arithmetic? The simplest operation is the addition of two bits shown in the rules above, which actually create two output bits, a sum $s$ and a carry $c$, from two input bits $x$ and $y$. The carry bit was not mentioned for the first three rules because it is 1 only when both input bits are 1. On the other hand, the sum is 1 only when one of the input bits is 1 and the other is 0. Hence, the device that performs the addition of two bits, called a half adder, can be represented by the following diagram.

However, a half adder is not adequate to compute more complex binary sums such as the one above, because when adding numbers with multiple digits we sometimes need three input bits. A full adder takes three input bits $X$, $Y$, and the carry-in bit $C_{\mathrm{in}}$ and produces two output bits, the sum $S$ and the carry-out bit $C_{\mathrm{out}}$. The following is the truth table for computing the two output bits $S$ and $C_{\mathrm{out}}$ from the three input bits $X$, $Y$, and $C_{\mathrm{in}}$.

$$\begin{array}{|c|c|c|c|c|}\hline X & Y & C_{\mathrm{in}} & S & C_{\mathrm{out}}\\\hline 0 & 0 & 0 & 0 & 0\\\hline 0 & 0 & 1 & 1 & 0\\\hline 0 & 1 & 0 & 1 & 0\\\hline 0 & 1 & 1 & 0 & 1\\\hline 1 & 0 & 0 & 1 & 0\\\hline 1 & 0 & 1 & 0 & 1\\\hline 1 & 1 & 0 & 0 & 1\\\hline 1 & 1 & 1 & 1 & 1\\\hline\end{array}$$

The following diagram shows a way to construct a full adder using two half adders (each represented by a box called HA). The circuit produces the sum bit by adding $X$ and $Y$ and then adding $C_{\mathrm{in}}$ to the result. The $C_{\mathrm{out}}$ is 1 if there is a carry from either the first or the second of these additions.
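Because the sum bit of a half adder is exclusive or and its carry is conjunction, the adders translate directly into code. A Python sketch (my own, mirroring the diagrams above) chains full adders into a ripple-carry adder and reproduces the example $10101+01111=100100$:

```python
def half_adder(x, y):
    return x ^ y, x & y          # sum is XOR, carry is AND

def full_adder(x, y, c_in):
    # Two half adders plus an OR for the carry-out, as in the diagram.
    s1, c1 = half_adder(x, y)
    s, c2 = half_adder(s1, c_in)
    return s, c1 | c2

def add_binary(a_bits, b_bits):
    """Ripple-carry addition of two equal-length bit lists (MSB first)."""
    carry, out = 0, []
    for x, y in zip(reversed(a_bits), reversed(b_bits)):
        s, carry = full_adder(x, y, carry)
        out.append(s)
    out.append(carry)
    return list(reversed(out))

print(add_binary([1, 0, 1, 0, 1], [0, 1, 1, 1, 1]))  # [1, 0, 0, 1, 0, 0]
```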
References.
[1] Essential Discrete Mathematics for Computer Science, Harry Lewis and Rachel Zax, Princeton University Press, 2019
http://turbomachinery.asmedigitalcollection.asme.org/article.aspx?articleid=1467617
Research Papers
# Combustion Instability and Emission Control by Pulsating Fuel Injection
Author and Article Information
Christian Oliver Paschereit
Hermann-Föttinger-Institute, Technical University Berlin, 10623 Berlin, [email protected]
Ephraim Gutmark
Aerospace Engineering and Engineering Mechanics Department, University of Cincinnati, Cincinnati, OH [email protected]
J. Turbomach 130(1), 011012 (Jan 25, 2008) (8 pages) doi:10.1115/1.2749292 History: Received July 21, 2005; Revised January 28, 2007; Published January 25, 2008
## Abstract
Open-loop control methodologies were used to suppress symmetric and helical thermoacoustic instabilities in an experimental low-emission swirl-stabilized gas-turbine combustor. The controllers were based on fuel (or equivalence ratio) modulations in the main premixed combustion (premixed fuel injection (PMI)) or, alternatively, in the secondary pilot fuel. PMI included symmetric and asymmetric fuel injection. The symmetric instability mode responded to symmetric excitation only when the two frequencies matched. The helical fuel injection affected the symmetric mode only at frequencies that were much higher than that of the instability mode. The asymmetric excitation required more power to obtain the same amount of reduction as that required by symmetric excitation. Unlike the symmetric excitation, which destabilized the combustion when the modulation amplitude was excessive, the asymmetric excitation yielded additional suppression as the modulation level increased. The $\mathrm{NO}_x$ emissions were reduced to a greater extent by the asymmetric modulation. The second part of the investigation dealt with the control of low frequency symmetric instability and high frequency helical instability by the secondary fuel injection in a pilot flame. Adding a continuous flow of fuel into the pilot flame controlled both instabilities. However, modulating the fuel injection significantly decreased the amount of necessary fuel. The reduced secondary fuel resulted in a reduced heat generation by the pilot diffusion flame and therefore yielded lower $\mathrm{NO}_x$ emissions. The secondary fuel pulsation frequency was chosen to match the time scales typical of the central flow recirculation zone, which stabilizes the flame in the burner. Suppression of the symmetric mode pressure oscillations by up to $20\ \mathrm{dB}$ was recorded. High frequency instabilities were suppressed by $38\ \mathrm{dB}$, and CO emissions were reduced by using low frequency modulations with a 10% duty cycle.
Copyright © 2008 by American Society of Mechanical Engineers
## Figures
Figure 1
Experimental facility. The figure in the inset is taken from Zajadatz (27).
Figure 2
Visualization of phase averaged OH images at two phase angles of 0 deg and 180 deg. (a), (b) Axisymmetric structure (premixed, St=0.58). (c), (d) Helical structure (premixed, St=1.16). (e), (f) Helical structure (premixed flame, St=7.77).
Figure 3
Frequency response of pressure and OH fluctuations to an open-loop controller with a symmetric pulsed fuel injection (F∕Fmax=15%)
Figure 4
Amplitude response of pressure and OH fluctuations to an open loop-controller with a symmetric pulsed fuel injection (St=0.61)
Figure 5
NOx emissions as a function of frequency using an open-loop controller with a symmetric pulsed fuel injection (F∕Fmax=15%) and with an antisymmetric pulsed fuel injection (F∕Fmax=50%)
Figure 6
Frequency response of pressure and OH fluctuations to an open-loop controller with an antisymmetric pulsed fuel injection (F∕Fmax=50%)
Figure 7
Suppression of pressure and OH fluctuations by SPI and its effect on NOx and CO emissions
Figure 8
Temporal combustion-pressure variation during the control of the periodical onset of high frequency instability by pulsed secondary fuel injection. 20% duty cycle.
Figure 9
Response of pressure and OH fluctuations to modulations of pilot flame, and the effect on NOx and CO emissions. Pilot fuel at 20% duty cycle.
Figure 10
Variation of pressure and OH fluctuations, and NOx and CO emissions as a function of the duty cycle. Secondary fuel at 4.4% and forcing frequency: St=0.066.
Figure 11
Variation of pressure fluctuations with an equivalence ratio for continuous and pulsed secondary fuel injection. Secondary fuel flow rate of 4.4%, 10% duty cycle, and St=0.066.
Figure 12
Variation of NOx and OH fluctuations with an equivalence ratio for continuous and pulsed secondary fuel injection. Secondary fuel flow rate of 4.4%, 10% duty cycle, and St=0.066.
Figure 13
Variation of CO with an equivalence ratio for continuous and pulsed secondary fuel injection. Secondary fuel flow rate of 4.4%, 10% duty cycle, and St=0.066.
Figure 14
Variation of pressure oscillations with a combustion power at a nominal equivalence ratio. Secondary fuel flow rate of 4.4%, 10% duty cycle, and St=0.066.
Figure 15
Suppression of high frequency pressure and OH fluctuations by continuous pilot fuel injection
Figure 16
Response of high frequency pressure and OH fluctuations to modulations of pilot flame. Pilot fuel at 20% duty cycle.
https://www.khanacademy.org/science/physics/fluids/buoyant-force-and-archimedes-principle/a/buoyant-force-and-archimedes-principle-article

# What is buoyant force?
Why the heck do things float?
## What does buoyant force mean?
Have you ever dropped your swimming goggles in the deepest part of the pool and tried to swim down to get them? It can be frustrating because the water tries to push you back up to the surface as you're swimming downward. The name of this upward force exerted on objects submerged in fluids is called the buoyant force.
So why do fluids exert an upward buoyant force on submerged objects? It has to do with differences in pressure between the bottom of the submerged object and the top. Say someone dropped a can of beans in a pool of water.
Bean pollution is a crime. If you see someone throwing beans into a pool or ocean call the Society for Bean Free Waterways immediately.
Because pressure ($P_{\text{gauge}} = \rho g h$) increases as you go deeper in a fluid, the force from pressure exerted downward on the top of the can of beans will be less than the force from pressure exerted upward on the bottom of the can.
Essentially it's that simple. The reason there's a buoyant force is because of the rather unavoidable fact that the bottom (i.e. more submerged part) of an object is always deeper in a fluid than the top of the object. This means the upward force from water has to be greater than the downward force from water.
OK, so it doesn't completely follow. After all, what if we considered an object where the area of the bottom was smaller than the area of the top (like a cone)? Since $F = PA$, could the greater area on the top of the cone compensate for the smaller pressure on the top? Would this make the object experience a net downward buoyant force? The answer is no. It turns out that no matter what shape you make your object, the net force from water pressure will always point upward. For the example of the cone, the tapered sides make it so that a component of the pressure on the sides also points up, which makes it so the net buoyant force again points upward.
It's fun to try and think of more examples of other shapes, and then try to figure out why they won't make the buoyant force point downward.
Knowing conceptually why there should be a buoyant force is good, but we should also be able to figure out how to determine the exact size of the buoyant force as well.
We can start with the fact that the water on the top of the can is pushing down $F_{\text{down}}$, and the water on the bottom of the can is pushing up $F_{\text{up}}$. We can find the total upward force on the can exerted by water pressure (which we call the buoyant force $F_{\text{buoyant}}$) by simply taking the difference between the magnitudes of the upward force $F_{\text{up}}$ and downward force $F_{\text{down}}$.

$$F_{\text{buoyant}} = F_{\text{up}} - F_{\text{down}}$$

We can relate these forces to the pressure by using the definition of pressure $P = \frac{F}{A}$, which can be solved for force to get $F = PA$. So the force exerted upward on the bottom of the can will be $F_{\text{up}} = P_{\text{bottom}} A$ and the force exerted downward on the top of the can will be $F_{\text{down}} = P_{\text{top}} A$. Substituting these expressions in for each $F$ respectively in the previous equation we get,

$$F_{\text{buoyant}} = P_{\text{bottom}} A - P_{\text{top}} A$$

We can use the formula for hydrostatic gauge pressure $P_{\text{gauge}} = \rho g h$ to find expressions for the upward and downward directed pressures. The pressure directed upward on the bottom of the can is $P_{\text{bottom}} = \rho g h_{\text{bottom}}$ and the pressure directed downward on the top of the can is $P_{\text{top}} = \rho g h_{\text{top}}$. We can substitute these into the previous equation for each pressure respectively to get,

$$F_{\text{buoyant}} = (\rho g h_{\text{bottom}}) A - (\rho g h_{\text{top}}) A$$

Notice that each term in this equation contains the expression $\rho g A$. So we can simplify this formula by pulling out a common factor of $\rho g A$ to get,

$$F_{\text{buoyant}} = \rho g A (h_{\text{bottom}} - h_{\text{top}})$$

Now this term $h_{\text{bottom}} - h_{\text{top}}$ is important and something interesting is about to happen because of it. The difference between the depth of the bottom of the can $h_{\text{bottom}}$ and the depth of the top of the can $h_{\text{top}}$ is just equal to the height of the can. (see the diagram below)

So we can replace $(h_{\text{bottom}} - h_{\text{top}})$ in the previous formula with the height of the can $h_{\text{can}}$ to get,

$$F_{\text{buoyant}} = \rho g A h_{\text{can}}$$

Here's the interesting part. Since $A \times h$ is equal to the volume of a cylinder, we can replace the term $A h_{\text{can}}$ with a volume $V$. The first instinct might be to associate this volume with the volume of the can. But notice that this volume will also be equal to the volume of the water displaced by the can. By displaced water we mean the volume of water that was once in the volume of space now occupied by the can.
Since there is no water left in the region of space where the can is now, all that water went somewhere else in the fluid.
So we are definitely going to replace the term $Ah$ with a volume $V$, but should we write this volume as the volume of the can or the volume of the displaced fluid? This is important because the two volumes could be different if the object is only partially submerged in the fluid. The short answer is that we need to use the volume of the fluid displaced $V_{\text{fluid}}$ in the formula because the displaced fluid is the factor that determines the buoyant force.
Well, imagine the can was floating with half of its volume submerged beneath the surface of the fluid.
There would no longer be any downward force from the water pressure on the top of the can. And the depth $h_{\text{bottom}}$ of the bottom of the can would now only be a fraction of the can's height. So if we solved for the buoyant force like we did before we would get,

$$F_{\text{buoyant}} = \rho g A (h_{\text{bottom}} - 0)$$

But the term $A h_{\text{bottom}}$ is not equal to the entire volume of the can. It's only equal to the volume of the can submerged, or in other words the volume of the displaced fluid $V_f$.

$$F_{\text{buoyant}} = \rho g V_{\text{fluid}}$$

You could of course choose to write the formula in terms of the volume of the can $V_{\text{can}}$ as long as you knew that the only part of the volume that counts is the volume submerged:

$$F_{\text{buoyant}} = \rho g V_{\text{can}} \qquad \text{(fully submerged)}$$

$$F_{\text{buoyant}} = \rho g V_f$$

That pretty much does it. This formula gives the buoyant force on a can of beans (or any other object) submerged wholly or partially in a fluid. Let's take stock of what we have now. Notice how the buoyant force only depends on the density of the fluid $\rho$ in which the object is submerged, the acceleration due to gravity $g$, and the volume of the displaced fluid $V_f$.
Surprisingly the buoyant force doesn't depend on the overall depth of the object submerged. In other words, as long as the can of beans is fully submerged, bringing it to a deeper and deeper depth will not change the buoyant force. This might seem strange since the pressure gets larger as you descend to deeper depths. But the key idea is that the pressures at the top and bottom of the can will both increase by the same amount and therefore cancel, leaving the total buoyant force the same.
Something might strike you as being wrong about all this. Some objects definitely sink, but we just proved that there is an upward force on every submerged object. How can an object sink if it has an upward force on it? Well, there is definitely an upward buoyant force on every submerged object, even those that sink. It's just that for sinking objects, their weight is greater than the buoyant force. If their weight was less than their buoyant force they would float. It turns out that it's possible to prove that if the density of a fully submerged object (regardless of its shape) is greater than the density of the fluid it's placed in, the object will sink.
The net vertical force (including gravity now) on a submerged object will be the buoyant force on the object minus the magnitude of the weight of the object.
$$F_{\text{net}} = F_b - W$$

We can use the formula we derived for buoyant force to rewrite $F_b$ as $\rho_f V_f g$, where $\rho_f$ is the density of the fluid,

$$F_{\text{net}} = \rho_f V_f g - mg$$

We can make that second term in the formula look a whole lot more like the first term if we use the rearranged definition of density to write the mass of the object $m$ in terms of the density of the object $\rho_o$ and the volume of the object submerged $V_o$. Subbing in the formula $m = \rho_o V_o$ for the mass in the previous formula we get,

$$F_{\text{net}} = \rho_f V_f g - \rho_o V_o g$$

If the object is fully submerged the two volumes $V$ are the same and we can pull out a common factor of $Vg$ to get,

$$F_{\text{net}} = Vg(\rho_f - \rho_o)$$
So there it is! If the density of the object is greater than the density of the fluid the net force will be negative which means the object will sink if released in the fluid.
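This sink-or-float criterion is easy to compute. A short Python sketch (an illustration, not part of the original article; the objects here are hypothetical) evaluates the net vertical force on a fully submerged object:

```python
def net_vertical_force(rho_fluid, rho_object, volume, g=9.8):
    """Net force on a fully submerged object, F_net = V g (rho_f - rho_o).
    Positive means a net upward push (the object rises); negative means
    it sinks."""
    return volume * g * (rho_fluid - rho_object)

# A 1-liter object half as dense as water is pushed up:
print(net_vertical_force(1000, 500, 1.0e-3))   # 4.9 (N, upward)
# The same volume at an aluminum-like density sinks:
print(net_vertical_force(1000, 2700, 1.0e-3))  # -16.66 (N, downward)
```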
## What is Archimedes' principle?
The way you will normally see the buoyant force formula written is with the g and the V rearranged like so,
$$F_{\text{buoyant}} = \rho V_f g$$

When you rearrange the formula in this way it allows you to notice something amazing. The term $\rho V_f$ is the density of the displaced fluid multiplied by the volume of the displaced fluid. Since the definition of density $\rho = \frac{m}{V}$ can be rearranged into $m = \rho V$, the term $\rho V_f$ corresponds to the mass of the displaced fluid. So, if we wanted to, we could replace the term $\rho V_f$ with $m_f$ in the previous equation to get,

$$F_{\text{buoyant}} = m_f g$$
But look at that! The mass of the displaced fluid times the magnitude of the acceleration due to gravity is just the weight of the displaced fluid. So remarkably, we can rewrite the formula for the buoyant force as,
$$F_{\text{buoyant}} = W_f$$
This equation, when stated in words, is called Archimedes' principle. Archimedes' principle is the statement that the buoyant force on an object is equal to the weight of the fluid displaced by the object. The simplicity and power of this idea is striking. If you want to know the buoyant force on an object, you only need to determine the weight of the fluid displaced by the object.
The fact that simple and beautiful (yet not obvious) ideas like this result from a logical progression of basic physics principles is part of why people find physics so useful, powerful, and interesting. And the fact that it was discovered by Archimedes of Syracuse over 2000 years ago, before Newton's laws, is impressive to say the least.
## What's confusing about the buoyant force and Archimedes' principle?
Sometimes people forget that the density $\rho$ in the formula for buoyant force $F_b = \rho V_f g$ is referring to the density of the displaced fluid, not the density of the submerged object.
People often forget that the volume in the buoyancy formula refers to the volume of the displaced fluid (or submerged volume of the object), and not necessarily the entire volume of the object.
Sometimes people think the buoyant force increases as an object is brought to deeper and deeper depths in a fluid. But the buoyant force does not depend on depth. It only depends on the volume of the displaced fluid $V_f$, the density of the fluid $\rho$, and the acceleration due to gravity $g$.
Many people, when asked to state Archimedes' principle, usually give a look of confused exasperation before launching into a wandering discussion about people jumping naked out of bathtubs. So, make sure you understand Archimedes' principle well enough to state it clearly: "Every object is buoyed upwards by a force equal to the weight of the fluid the object displaces."
## What do solved examples involving buoyant force look like?
### Example 1: (an easy one)
A $0.650\ \text{kg}$ garden gnome went snorkeling a little too low and found himself at the bottom of a fresh water lake of depth $35.0\ \text{m}$. The garden gnome is solid (with no holes) and takes up a total volume of $1.44\times 10^{-3}\ \text{m}^3$. The density of fresh water in the lake is $1000\ \dfrac{\text{kg}}{\text{m}^3}$.
What is the buoyant force on the gnome?
$F_b = \rho V g \qquad \text{(Use the buoyant force equation, which is just Archimedes' principle in math form)}$

$F_b = (1000\ \dfrac{\text{kg}}{\text{m}^3})(1.44\times 10^{-3}\ \text{m}^3)(9.8\ \dfrac{\text{m}}{\text{s}^2}) \qquad \text{(Plug in numerical values)}$

$F_b = 14.1\ \text{N} \qquad \text{(Calculate, and celebrate)}$
### Example 2: (a slightly harder one)
A cube, whom you have developed a strong companionship with, has a total mass of $2.33\ \text{kg}$.

What must be the minimum side length of the cube so that it floats in sea water of density $1025\ \dfrac{\text{kg}}{\text{m}^3}$?
We know that in order to float the buoyant force when the object is submerged must be equal to the magnitude of the weight of the cube. So we put this in equation form as,
$W_{\text{cube}} = F_b \qquad \text{(Weight of cube equals magnitude of buoyant force)}$

$mg = \rho V g \qquad \text{(Plug in expressions for the weight of the cube and the buoyant force)}$

$mg = \rho L^3 g \qquad \text{(Insert the formula for the volume of a cube, } L^3)$

$L^3 = \dfrac{mg}{\rho g} \qquad \text{(Solve symbolically for } L^3)$

$L = \left(\dfrac{m}{\rho}\right)^{1/3} \qquad \text{(Cancel the factor of } g \text{ and take the cube root of both sides)}$

$L = \left(\dfrac{2.33\ \text{kg}}{1025\ \frac{\text{kg}}{\text{m}^3}}\right)^{1/3} \qquad \text{(Plug in numbers)}$

$L = 0.131\ \text{m} \qquad \text{(Calculate, and celebrate)}$
### Example 3: (an even harder one)
A huge spherical helium filled balloon painted to look like a cow is prevented from floating upward by a rope tying it to the ground. The balloon plastic structure plus all the helium gas inside of the balloon has a total mass of $9.20\ \text{kg}$. The diameter of the balloon is $3.50\ \text{m}$. The density of the air is $1.23\ \dfrac{\text{kg}}{\text{m}^3}$.

What is the tension in the rope?
This one is a little harder so we should first draw a free body diagram (i.e. force diagram) for the balloon. There are lots of numbers here too so we could include our known variables in our diagram so that we can see them visually. (Note that in this case, the fluid being displaced is the air.)
Since the spherical cow balloon is not accelerating, the forces must be balanced (i.e. no net force). So we can start with a statement that the magnitudes of the total upward and downward forces are equal.
$F_b = W + F_T \qquad \text{(Upward and downward forces are equal/balanced)}$

$\rho V g = mg + F_T \qquad \text{(Insert the formulas for the buoyant force and the weight of the balloon respectively)}$

$F_T = \rho V g - mg \qquad \text{(Solve symbolically for the tension and isolate it on one side of the equation)}$

$F_T = \rho \left(\dfrac{4}{3}\pi r^3\right) g - mg \qquad \text{(Plug in the formula for the volume of a sphere)}$
$F_T = (1.23 \dfrac{\text{kg}}{\text{m}^3}) [\dfrac{4}{3}\pi (\dfrac{3.50 \text{ m}}{2})^3]g -(9.20 \text{ kg})g \qquad \text{(Plug in numbers. Convert diameter to radius!)}$
$F_T = 180\ \text{N} \qquad \text{(Calculate, and celebrate)}$
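As a sanity check, the three worked examples can be reproduced in a few lines of Python (my own verification, using only the numbers given above):

```python
from math import pi

g = 9.8

# Example 1: buoyant force on the fully submerged gnome.
print(1000 * 1.44e-3 * g)            # ~ 14.1 N

# Example 2: minimum cube side length to float in sea water.
print((2.33 / 1025) ** (1 / 3))      # ~ 0.131 m

# Example 3: rope tension on the helium cow balloon.
r = 3.50 / 2                         # convert diameter to radius
print(1.23 * (4 / 3) * pi * r**3 * g - 9.20 * g)   # ~ 180 N
```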
https://socratic.org/questions/why-hydrogen-bonds-are-formed

# Why are hydrogen bonds formed?
Hydrogen bonds occur where hydrogen is covalently bound to a strongly electronegative element, e.g. $\text{N}$, $\text{O}$, or $\text{X}$ ($\text{X}$ = halogen).
If hydrogen is bound to a strongly electronegative element, the electrons in the bond are polarized towards the electronegative element. We could therefore represent the polarized molecule as $\text{H}^{\delta+}-\text{X}^{\delta-}$.
The $\text{H}-\text{X}$ molecule is thus polarized; it unequally shares its electrons. In the context of other similar molecules, this becomes an intermolecular force that is reflected in the elevated boiling points of $\text{H}_2\text{O}$ and $\text{H}-\text{F}$. This intermolecular force is the hydrogen-bonding interaction.
https://blog.alexandrecarlier.com/projects/unet/

In this project we implement an encoder-decoder convolutional neural network to segment roads in a dataset of aerial images, resulting in significant improvement compared to logistic regression and patch-wise CNN, and reaching an F score of 0.94.
## I - Introduction
In this project, we are training a classifier to segment roads in aerial images from Google Maps, given the ground-truth images, i.e. the pixel-level labeled images ($1$ corresponding to a road, $0$ to background).
The train dataset contains 100 labeled images of $400 \times 400$ pixels. After the training, we run a prediction on $50$ test images of $608 \times 608$ pixels. The predicted images are then cropped in patches of $16 \times 16$ pixels and transformed to a csv submission file containing the predicted output ($1$ for a road, $0$ for background) for each patch.
This report is structured as follows. Section II describes the architecture of our encoder-decoder CNN model. Section III details the procedure we use to further refine the predictions of the model. We include the experimental settings, baselines, and final results in Section IV. Section V concludes the report.
## II - Encoder-Decoder CNN
Here we present an encoder-decoder convolutional neural network architecture that is similar to the models proposed in [7], [1]. The network consists of two parts, a convolutional network and a deconvolutional network. The figure below gives an illustration of our model. The convolutional network extracts high-level features from the input image, and the deconvolutional network builds a segmentation based on the extracted features. The output of the whole encoder-decoder network is then a probability map with the same size as the input image, which indicates the probability of each pixel belonging to a road.
In order to down-sample the layers by a factor of $2$ once in a while, we use a convolutional layer with stride $2$ instead of a max pooling layer. Similarly, we use a deconvolutional layer with stride $2$ to up-sample the layers by a factor of $2$, allowing us to reconstruct the size of the original image.
However, the predictions of the decoder part of the model can be rather coarse. Intuitively, we want to combine those predictions with fine details from the low-level layers. To do this, we fuse information from previous layers to specific layers in the deconvolutional network [5], as indicated by the directed lines in the next figure.
An illustration of our model. Layers drawn in red are convolutional layers with stride $2$, which down-sample the previous layers. Layers drawn in blue are deconvolutional layers with stride $2$, which up-sample the previous layers. Directed lines indicate fusion of layers.
The specifications of our model are detailed in the next table. We use ReLU as the activation function. We use a padding scheme such that the output size is $\lceil\textrm{input size}/\textrm{stride}\rceil$.
| Layer | Kernel size | Stride | Output size |
| --- | --- | --- | --- |
| input | $-$ | $-$ | $320\times 320\times 3$ |
| conv_1_1 | $3\times 3$ | $1$ | $320\times 320\times 64$ |
| conv_1_2 | $3\times 3$ | $2$ | $160\times 160\times 64$ |
| conv_2_1 | $3\times 3$ | $1$ | $160\times 160\times 128$ |
| conv_2_2 | $3\times 3$ | $2$ | $80\times 80\times 128$ |
| conv_3_1 | $3\times 3$ | $1$ | $80\times 80\times 256$ |
| conv_3_2 | $3\times 3$ | $1$ | $80\times 80\times 256$ |
| conv_3_3 | $3\times 3$ | $2$ | $40\times 40\times 256$ |
| conv_4_1 | $3\times 3$ | $1$ | $40\times 40\times 512$ |
| conv_4_2 | $3\times 3$ | $1$ | $40\times 40\times 512$ |
| conv_4_3 | $3\times 3$ | $2$ | $20\times 20\times 512$ |
| conv_5_1 | $3\times 3$ | $1$ | $20\times 20\times 512$ |
| conv_5_2 | $3\times 3$ | $1$ | $20\times 20\times 512$ |
| conv_5_3 | $3\times 3$ | $2$ | $10\times 10\times 512$ |
| conv_6_1 | $3\times 3$ | $1$ | $10\times 10\times 512$ |
| conv_6_2 | $3\times 3$ | $1$ | $10\times 10\times 512$ |
| conv_6_3 | $3\times 3$ | $2$ | $5\times 5\times 512$ |
| deconv_6_3 | $3\times 3$ | $2$ | $10\times 10\times 512$ |
| deconv_6_2 | $3\times 3$ | $1$ | $10\times 10\times 512$ |
| deconv_6_1 | $3\times 3$ | $1$ | $10\times 10\times 512$ |
| deconv_5_3 | $3\times 3$ | $2$ | $20\times 20\times 512$ |
| deconv_5_2 | $3\times 3$ | $1$ | $20\times 20\times 512$ |
| deconv_5_1 | $3\times 3$ | $1$ | $20\times 20\times 512$ |
| deconv_4_3 | $3\times 3$ | $2$ | $40\times 40\times 512$ |
| deconv_4_2 | $3\times 3$ | $1$ | $40\times 40\times 512$ |
| deconv_4_1 | $3\times 3$ | $1$ | $40\times 40\times 256$ |
| deconv_3_3 | $3\times 3$ | $2$ | $80\times 80\times 256$ |
| deconv_3_2 | $3\times 3$ | $1$ | $80\times 80\times 256$ |
| deconv_3_1 | $3\times 3$ | $1$ | $80\times 80\times 128$ |
| deconv_2_2 | $3\times 3$ | $2$ | $160\times 160\times 128$ |
| deconv_2_1 | $3\times 3$ | $1$ | $160\times 160\times 64$ |
| deconv_1_2 | $3\times 3$ | $2$ | $320\times 320\times 64$ |
| deconv_1_1 | $3\times 3$ | $1$ | $320\times 320\times 64$ |
| output | $1\times 1$ | $1$ | $320\times 320\times 2$ |

Specifications of the Encoder-Decoder CNN
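For concreteness, here is a minimal PyTorch sketch (my own toy reconstruction, not the report's actual code) of the three ingredients listed in the table: a strided convolution for down-sampling, a transposed convolution for up-sampling, and a skip connection that fuses a low-level layer into the decoder.

```python
import torch
import torch.nn as nn

class MiniEncoderDecoder(nn.Module):
    """Two-scale toy version of the architecture: a stride-2 convolution
    halves the resolution, a stride-2 transposed convolution restores it,
    and one skip connection adds fine details back into the decoder."""
    def __init__(self):
        super().__init__()
        self.enc = nn.Sequential(nn.Conv2d(3, 64, 3, stride=1, padding=1),
                                 nn.BatchNorm2d(64), nn.ReLU())
        self.down = nn.Sequential(nn.Conv2d(64, 64, 3, stride=2, padding=1),
                                  nn.BatchNorm2d(64), nn.ReLU())
        self.up = nn.Sequential(
            nn.ConvTranspose2d(64, 64, 3, stride=2, padding=1, output_padding=1),
            nn.BatchNorm2d(64), nn.ReLU())
        self.out = nn.Conv2d(64, 2, 1)   # per-pixel road/background logits

    def forward(self, x):
        low = self.enc(x)                # fine, full-resolution features
        h = self.down(low)               # down-sample by 2
        h = self.up(h) + low             # up-sample and fuse the skip
        return self.out(h)

logits = MiniEncoderDecoder()(torch.randn(1, 3, 320, 320))
print(logits.shape)                      # torch.Size([1, 2, 320, 320])
```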
## III - Post-Processing
With our current encoder-decoder CNN model, the predictions may have discontinuity points or holes in the roads, as the next figure shows.
A prediction from our encoder-decoder CNN model. We can observe the presence of holes in the top-right road.
To address this problem, we adopt a post-processing procedure [6] which enables to refine the predictions given by the encoder-decoder CNN. Let $f$ be the network trained to output the predictions of the input images $X$, which we denote by $Y_p$. We build another network $f_p$ that takes the same functional form as $f$, and train it using $Y_p$ as input and the groundtruth images as labels. The output of $f_p$ is then used as our final predictions.
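In outline, the post-processing is just a second supervised pass. The Python sketch below is hypothetical: `f` and `f_p` stand for segmentation networks as above (the first layer of `f_p` must accept the 2-channel probability map as input), `X` is a batch of images, and `Y` holds the integer-labeled groundtruth.

```python
import torch
import torch.nn as nn

def train_refiner(f, f_p, X, Y, epochs=10, lr=0.1):
    """Train the refinement network f_p on the predictions of f.
    At test time the final prediction is f_p(f(x).softmax(dim=1))."""
    opt = torch.optim.SGD(f_p.parameters(), lr=lr)
    loss_fn = nn.CrossEntropyLoss()
    with torch.no_grad():
        Y_p = f(X).softmax(dim=1)        # coarse probability maps (N,2,H,W)
    for _ in range(epochs):
        opt.zero_grad()
        loss = loss_fn(f_p(Y_p), Y)      # Y: class indices of shape (N,H,W)
        loss.backward()
        opt.step()
    return f_p
```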
## IV - Evaluation
In this section, we elaborate on the experiment settings, including how we augment the data and train the model. We then evaluate the performance of our model as well as the baselines, followed by some analysis and discussion.
### A. Data Augmentation
The original training dataset contains $100$ aerial images. For each of these images, we apply rotations of $45$, $90$, $-45$, and $-90$ degrees and crop out the central part to obtain four new aerial images of size $320\times 320$; this increases the number of examples with diagonal roads in the dataset. In addition, we crop out a region of size $320\times 320$ at each of the four corners of the original image, and we rescale the original image to $320\times 320$ to get one more new image. Thus, each training image generates nine new images, resulting in an augmented dataset of $900$ aerial images. We then split our augmented data into a training set ($90$%, i.e. $810$ images) and a testing set ($10$%, i.e. $90$ images). A sketch of this procedure is given below.
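A possible implementation of this augmentation with PIL (a sketch assuming $400\times 400$ inputs; note that PIL's `rotate` keeps the image size and fills the exposed corners with black, so the central crops of the rotated images may contain some black pixels near their corners):

```python
from PIL import Image

def augment(path):
    """Produce the nine derived 320x320 images described above: four
    rotations with a central crop, four corner crops, and one rescale."""
    img = Image.open(path)               # original image is 400x400
    w, h = img.size
    cx, cy = w // 2, h // 2
    center = (cx - 160, cy - 160, cx + 160, cy + 160)
    out = [img.rotate(a).crop(center) for a in (45, 90, -45, -90)]
    out += [img.crop((x, y, x + 320, y + 320))
            for x in (0, w - 320) for y in (0, h - 320)]
    out.append(img.resize((320, 320)))
    return out
```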
### B. Baselines
For the baselines, the problem is framed as a binary classification task. We extract patches of $k \times k$ pixels from the training images. For each patch, the label can be obtained by looking at the corresponding groundtruth image, and we can perform a binary classification to determine if it corresponds to a road or to background. The hyper-parameter $k$ is crucial since it should optimize the following trade-off: a larger $k$ means that our prediction will be more accurate since the model has more information from the pixels to make its prediction; a smaller $k$ enables a more fine-grained grid of patches.
Logistic Regression: For each patch, we compute the mean and variance values for each of the three channels, thus obtaining a feature vector with size $6$. We then train a simple logistic regression model on the feature vectors.
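A sketch of this feature extraction in NumPy (illustrative; it assumes $k$ divides the image size), together with classifier settings matching those reported in Section IV-C:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def patch_features(image, k=16):
    """Mean and variance of each RGB channel per k-by-k patch, giving a
    6-dimensional feature vector per patch. `image` has shape (h, w, 3)."""
    h, w, _ = image.shape
    patches = image.reshape(h // k, k, w // k, k, 3).swapaxes(1, 2)
    patches = patches.reshape(-1, k * k, 3)          # (num_patches, k*k, 3)
    return np.hstack([patches.mean(axis=1), patches.var(axis=1)])

clf = LogisticRegression(C=1e5, solver="liblinear", class_weight="balanced")
```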
Naïve CNN: We train a naïve convolutional neural network directly on the patches. The specifications of the network are detailed in the next table. Note that the input size, depending on the value of patch size $k$, is not fixed at $16\times 16$.
| Layer | Kernel size | Stride | Output size |
| --- | --- | --- | --- |
| input | $-$ | $-$ | $16\times 16\times 3$ |
| conv_1 | $5\times 5$ | $1$ | $16\times 16\times 32$ |
| pool_1 | $2\times 2$ | $2$ | $8\times 8\times 32$ |
| conv_2 | $5\times 5$ | $1$ | $8\times 8\times 64$ |
| pool_2 | $2\times 2$ | $2$ | $4\times 4\times 64$ |
| fc_3 | $4\times 4$ | $-$ | $1\times 1\times 512$ |
| output | $1\times 1$ | $-$ | $1\times 1\times 2$ |

Specifications of the Naïve CNN
### C. Training
Logistic Regression: We train the logistic regression using an inverse regularization strength of $10^5$, Liblinear solver and a balanced class-weight (so that classes with a lower number of occurrences are fitted equally).
Naïve CNN: We train the CNN with 50 epochs, an initial learning rate of $0.01$ (that decays exponentially), simple momentum for the optimization and a regularization coefficient of $5 \times 10^{-4}$. For this model, the patch-size must be both a divisor of the size of the image and a multiple of 4 (since the input image is pooled twice during the forward pass).
Encoder-decoder CNN: We apply batch normalization [4] to each of the convolutional and deconvolutional layers to alleviate the problem of internal covariate shift. We add three dropout layers [3] with keep probability $0.5$ to the network, after deconv_4_1, deconv_3_1, and deconv_2_1, respectively. To train the model, we use stochastic gradient descent with a learning rate of $5.0$ and a batch size of $5$. The network is trained for $100$ epochs on the training images, and $10$ epochs for the post-processing procedure. It takes roughly three hours to finish the whole training procedure on an NVIDIA GTX 1080 graphics card. The next figure shows the evolution of the cross-entropy loss during the training procedure. We can observe that after $40$ epochs, the loss ($0.070346$) is almost the same as the final loss ($0.06267$).
Cross-entropy loss value obtained during the training of the encoder-decoder CNN model, with respect to the number of epochs.
### D. Results
To evaluate the models, the predicted images are cropped into patches of $16 \times 16$ pixels, in order to compare scores on a same basis.
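A sketch of this patch-wise evaluation, assuming binary prediction and groundtruth masks as NumPy arrays; the names and the $25\%$ road-pixel threshold are our assumptions:

```python
import numpy as np

PATCH = 16

def to_patch_labels(mask, threshold=0.25):
    """Label each 16x16 patch as road (1) if enough of its pixels are road."""
    h, w = mask.shape
    labels = []
    for i in range(0, h, PATCH):
        for j in range(0, w, PATCH):
            labels.append(mask[i:i+PATCH, j:j+PATCH].mean() > threshold)
    return np.array(labels, dtype=int)

def f_score(pred_mask, true_mask):
    p, t = to_patch_labels(pred_mask), to_patch_labels(true_mask)
    tp = np.sum((p == 1) & (t == 1))
    precision = tp / max(p.sum(), 1)
    recall = tp / max(t.sum(), 1)
    return 2 * precision * recall / max(precision + recall, 1e-9)
```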
First of all, to see the impact of the patch size on the performance of the baselines, we train a logistic regression model using several patch sizes. The result is shown in the next figure. In particular, we get better results with a patch size of $8$ or $20$ in the case of logistic regression. Therefore, in addition to a patch size of $16$, we also include models with patch size $8$ and $20$ in the final comparison.
F score given by a logistic regression model trained on the training set, with respect to the patch size (divisors of $320$, the size of the input image). We observe two peaks at $\textrm{patch size} = 8$ and $20$.
A visualization of our segmentation models applied to three images. From left to right, the images correspond to: input, logistic regression, naïve CNN, encoder-decoder CNN, ground-truth (if available). From top to bottom, the input images are: test_5.png, satImage_001.png, satImage_005.png
We train each model on the training set ($90$% of the augmented data) and compute the accuracy, precision, recall and F score on the validation set ($10$% of the data), listed in the next table. As the training of our encoder-decoder CNN model is long, we compute its score only once; in particular, we do not adopt a cross-validation procedure, so we have no information about the variability of this score. The logistic regression and naïve CNN models with patch size $k$ are denoted by LR$k$ and CNN$k$ respectively; EDCNN is our encoder-decoder CNN model. We can see that our model outperforms both baselines. A comparison of the predictions given by all the models is shown in the previous figure.
| Model | Accuracy | Precision | Recall | F score |
|-------|----------|-----------|--------|---------|
| LR16  | 0.5919   | 0.3442    | 0.6859 | 0.4585  |
| LR8   | 0.5522   | 0.3348    | 0.7888 | 0.4701  |
| LR20  | 0.5589   | 0.3394    | 0.7940 | 0.4755  |
| CNN8  | 0.7874   | 0.5463    | 0.9186 | 0.6852  |
| CNN20 | 0.8126   | 0.5854    | 0.8772 | 0.7022  |
| CNN16 | 0.8598   | 0.6703    | 0.8726 | 0.7583  |
| EDCNN | 0.9654   | 0.9469    | 0.9345 | 0.9406  |

Evaluation results on the validation set.
To get a general sense of the various features extracted by the encoder-decoder CNN, we give an illustration of the intermediate layers (both convolutional and deconvolutional) in the next figure.
An illustration of the intermediate layers in the network when predicting the segmentation of a single aerial image. For each of the convolutional and deconvolutional layers, we select a slice and show it as an image here.
## V - Conclusion
In this project, we implement an encoder-decoder model that we train and test on aerial images, obtaining an F score of 0.94 and observing significant improvements compared to simple baselines. Further improvements, such as atrous convolutions or pyramid pooling modules, could be added to our encoder-decoder structure to reach state-of-the-art performance (e.g. DeepLab v3 [2]), but seem out of reach in the context of this course.
## References
[1] V. Badrinarayanan, A. Handa, and R. Cipolla. Segnet: A deep convolutional encoder-decoder architecture for robust semantic pixel-wise labelling. arXiv preprint arXiv:1505.07293, 2015.
[2] L.-C. Chen, G. Papandreou, F. Schroff, and H. Adam. Rethinking atrous convolution for semantic image segmentation. arXiv preprint arXiv:1706.05587, 2017.
[3] G. E. Hinton, N. Srivastava, A. Krizhevsky, I. Sutskever, and R. R. Salakhutdinov. Improving neural networks by preventing co-adaptation of feature detectors. arXiv preprint arXiv:1207.0580, 2012.
[4] S. Ioffe and C. Szegedy. Batch normalization: Accelerating deep network training by reducing internal covariate shift. In International Conference on Machine Learning, pages 448–456, 2015.
[5] J. Long, E. Shelhamer, and T. Darrell. Fully convolutional networks for semantic segmentation. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 3431–3440, 2015.
[6] V. Mnih and G. E. Hinton. Learning to detect roads in high-resolution aerial images. In European Conference on Computer Vision, pages 210–223. Springer, 2010.
[7] H. Noh, S. Hong, and B. Han. Learning deconvolution network for semantic segmentation. In Proceedings of the IEEE International Conference on Computer Vision, pages 1520–1528, 2015.
https://www.bionicturtle.com/forum/tags/bayes-rule/
# bayes-rule
### P1.T2.20.2. More probabilities and Bayes rule
Learning objectives: Calculate the probability of an event for a discrete probability function. Define and calculate a conditional probability. Distinguish between conditional and unconditional probabilities. Explain and apply Bayes’ rule. Questions: 20.2.1. The probability graph below...
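As a worked illustration of the last objective (our own numbers, not from the question set): if $P(A)=0.3$, $P(B\mid A)=0.5$, and $P(B)=0.4$, then Bayes' rule gives

$$P(A\mid B)=\frac{P(B\mid A)\,P(A)}{P(B)}=\frac{0.5\times 0.3}{0.4}=0.375.$$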
https://cstheory.stackexchange.com/questions/19746/intuition-for-the-up-class/19761#19761 | # Intuition for the UP class
The UP class is defined as follows:
The class of decision problems solvable by an NP machine such that
If the answer is 'yes,' exactly one computation path accepts.
If the answer is 'no,' all computation paths reject.
I'm trying to develop intuition for this definition.
Can one say that UP problems are the problems with unique solutions (e.g. prime factorisation)?
That seems close to the truth to me; but I can't help thinking that that would mean, since UP contains P and is contained in NP, that in case P = NP we'd get P = UP = NP, so all problems in NP would have unique solutions as well, which seems like something provably not true: P != NP by reductio ad absurdum. I hope there aren't too many conjectures and too much hand-wavery in this paragraph for your taste.
• The definition of "unique solution" is problematic: solving Parity games, for instance, is in UP (UP$\cap$coUP, in fact), but there may be many winning strategies. The unique witness is more involved. Nov 11 '13 at 14:57
• hm, so that would mean there's an algorithm for a non-deterministic Turing machine, which is not "non-deterministically try every solution" (I thought that's the idea in the heart of the equivalence of definitions of NP for n.-d. and d. T.m.), but something more sophisticated, always leading to the unique result out of many possible... Is that right? Is there another way to state it, for example using only the idea of a deterministic T.m. (one can define NP using only it)? Nov 11 '13 at 15:04
• The intuition of unique witness is correct, but must be used carefully, since it doesn't mean that every NTM for it has a unique run. Nov 11 '13 at 15:14
• I love this question! I had the exact same confusion but I didn't see the clever way to translate this confusion into a simple proof that P != NP. Well done! Jan 23 '16 at 13:49
• Btw your question from your last comment has since been answered on the Wikipedia page for the UP class Jan 23 '16 at 13:50
Your confusion appears to be over the fact that $\mathsf{NP}$ problems have more than one way to define a "solution" (or witness). The type of the solution is not part of the definition of the problem. For instance, for graph coloring, the obvious type of solution is an assignment of one color for each vertex (using at most the required number of colors); however, by the Gallai–Hasse–Roy–Vitaver theorem another type of solution that works equally well is an assignment of an orientation to each edge (creating directed paths of at most the required number of vertices). These two types of solutions can both be checked in polynomial time, but by different algorithms, and they also have different combinatorial properties. For instance, for a typical problem instance, the number of vertex color assignments will be different from the number of edge orientations. A lot of research on speeding up exponential algorithms for NP type problems can be interpreted as finding a new family of solutions to the same problem that has fewer possibilities to check.
Every problem in $\mathsf{P}$ has an $\mathsf{NP}$ "solution" consisting only of the empty string. To verify that this is a solution, just check that the solution string is empty and then run the polynomial time algorithm for the problem instance. With this type of solution, every yes instance has exactly one valid solution and every no instance has zero, meeting the definition of $\mathsf{UP}$ and showing that $\mathsf{P}\subseteq\mathsf{UP}$. If $\mathsf{P}=\mathsf{NP}$ then the same empty-string solution would also work for every problem in $\mathsf{NP}$, showing that $\mathsf{NP}=\mathsf{UP}$. So there is no contradiction between the fact that the empty-string solution is unique and the fact that some other type of solution for the same problem is non-unique.
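That empty-string argument can be phrased as a one-line verifier; a sketch in Python, where `decide_in_P` stands in for the assumed polynomial-time decider (both names are ours):

```python
def decide_in_P(x):
    """Placeholder for a polynomial-time decision procedure for the language."""
    raise NotImplementedError

def verifier(x, w):
    # Accept iff the witness is the empty string and x is a yes-instance.
    # Every yes-instance then has exactly one accepting witness (w == ""),
    # which is the uniqueness required by the UP definition.
    return w == "" and decide_in_P(x)
```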
• So the implication $UP=NP$ is not contradictory? The following problem is NP-complete. Given $N$, is there a factor of $N$ in a given range $[a,b]$, say where $a,b\sim N^{\frac{1}{4}}$ and $a<b$? There could be more than one factor in that range and the solution may not be unique? – Mr. Nov 19 '13 at 8:49
• Again, you are assuming incorrectly that the solution can only be the factor you are looking for. There may be other ways of solving the same problem (i.e. of getting a yes or no answer for the given N) that do not consist of a factor. And if P=NP the empty string meets the technical requirements of an NP solution — you can check it in polynomial time — and is indeed not a factor but is a solution to the same problem. Nov 19 '13 at 16:18
• This answer is absolutely brilliant as it teaches us even more than is asked for! Jan 23 '16 at 13:51
I agree with Shaull's comment that the intuition of having a unique witness is correct, but subtle. The argument in your last paragraph can be made technically precise, and highlights the subtlety of $\mathsf{UP}$ versus $\mathsf{NP}$. In particular, the problem in your last paragraph is essentially the question of whether $\mathsf{NPMV} \subseteq_{c} \mathsf{NPSV}$:
$\mathsf{NPMV}$ is the class of partial multi-valued functions computable in non-deterministic polynomial time, that is, each accepting nondeterministic branch gets to output a value (if there are no accepting paths on some input, then there is no output, leading to the fact that these need only be partial functions). This is closely related to the search version of $\mathsf{NP}$ problems.
$\mathsf{NPSV}$ is the class of single-valued functions in $\mathsf{NPMV}$, that is, multiple branches can accept, but if any branches do accept, all of the accepting branches must output the same value.
Intuitively, your last paragraph is talking about whether or not you can always select, from among the witnesses for a given verifier of some $\mathsf{NP}$ problem, a single witness. This is the question of whether every $\mathsf{NPMV}$ function has an $\mathsf{NPSV}$ refinement (denoted $\mathsf{NPMV} \subseteq_{c} \mathsf{NPSV}$). If this is the case, then the polynomial hierarchy collapses (see Hemaspaandra, Naik, Ogihara, and Selman "Computing Solutions Uniquely Collapses the Polynomial Hierarchy").
To contrast with $\mathsf{UP}$, no such implication is known to follow from $\mathsf{NP} = \mathsf{UP}$. This is essentially because, given a language $L \in \mathsf{NP}$, the (witnesses for a) $\mathsf{UP}$ machine for $L$ need not have anything to do with (the witnesses for any) other $\mathsf{NP}$ machine(s) for $L$.
https://www2.physics.ox.ac.uk/contacts/people/slyz/publications/685295 | The variations of galaxy stellar masses and colour-types with the distance to projected cosmic filaments are quantified using the precise photometric redshifts of the COSMOS2015 catalogue extracted from COSMOS field (2 deg$^{2}$). Realistic mock catalogues are also extracted from the lightcone of the cosmological hydrodynamical simulation Horizon-AGN. They show that the photometric redshift accuracy of the observed catalogue ($\sigma_z<0.015$ at $M_*>10^{10}{\rm M}_{\odot}$ and $z<0.9$) is sufficient to provide 2D filaments that closely match their projected 3D counterparts. Transverse stellar mass gradients are measured in projected slices of thickness 75 Mpc between $0.5< z <0.9$, showing that the most massive galaxies are statistically closer to their neighbouring filament. At fixed stellar mass, passive galaxies are also found closer to their filament while active star-forming galaxies statistically lie further away. The contributions of nodes and local density are removed from these gradients to highlight the specific role played by the geometry of the filaments. We find that the measured signal does persist after this removal, clearly demonstrating that proximity to a filament is not equivalent to proximity to an over-density. These findings are in agreement with gradients measured both in 2D or 3D in the Horizon-AGN simulation and those observed in the spectroscopic VIPERS survey (which rely on the identification of 3D filaments). They are consistent with a picture in which the influence of the geometry of the large-scale environment drives anisotropic tides which impact the assembly history of galaxies, and hence their observed properties. | 2020-09-28 00:21:47 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7713768482208252, "perplexity": 1607.338437052287}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-40/segments/1600401582033.88/warc/CC-MAIN-20200927215009-20200928005009-00762.warc.gz"} |
http://www.thattommyhall.com/category/sicp/ | Finding Primes In SICP
November 2nd 2010
I was reading SICP over lunch and found this lovely footnote on probabilistic methods for deciding if a number is prime. (it is #47)
Numbers that fool the Fermat test are called Carmichael numbers, and little is known about them other than that they are extremely rare. There are 255 Carmichael numbers below 100,000,000. The smallest few are 561, 1105, 1729, 2465, 2821, and 6601. In testing primality of very large numbers chosen at random, the chance of stumbling upon a value that fools the Fermat test is less than the chance that cosmic radiation will cause the computer to make an error in carrying out a “correct” algorithm. Considering an algorithm to be inadequate for the first reason but not for the second illustrates the difference between mathematics and engineering.
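The test the footnote refers to is easy to state in Scheme; here is a sketch along the lines of SICP section 1.2.6 (`random` as in MIT Scheme; `square` is defined inline):

```
(define (square x) (* x x))

(define (expmod base exp m)      ; base^exp modulo m, by successive squaring
  (cond ((= exp 0) 1)
        ((even? exp)
         (remainder (square (expmod base (/ exp 2) m)) m))
        (else
         (remainder (* base (expmod base (- exp 1) m)) m))))

(define (fermat-test n)          ; one trial with a random base a, 1 <= a < n
  (define (try-it a) (= (expmod a n n) a))
  (try-it (+ 1 (random (- n 1)))))
```

A Carmichael number such as 561 passes this test for every choice of base, which is exactly why the footnote calls them the numbers that fool it.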
Posted by tom under lisp & SICP | No Comments »
Creating Something (In fact everything) out of nothing, Twice
October 25th 2010
I am reading through SICP with some of the guys at work and got to a bit that fascinated me and reminded me of my previous life as a mathematician.
The basic data structure in Scheme is the cons cell, essentially representing a ordered pair. So
```> (cons 2 3)
(2 . 3)
```
creates the pair (2 . 3)
Lets call that a,
`> (define a (cons 2 3))`
There are 2 other functions that act on these cons cells: car returns the first element, cdr returns the second
```> (car a)
2
> (cdr a)
3```
You can build lists by chaining these cons cells and unpick the contents by using car and cdr
```> (define b (cons 3 a))
> b
(3 2 . 3)
> (car b)
3
> (cdr b)
(2 . 3)
> (cdr (cdr b))
3```
The interesting bit is at p91 of the book where it breaks down the distinction between procedures and data
```(define (cons x y)
  (define (dispatch m)
    (cond ((= m 0) x)
          ((= m 1) y)
          (else (error "arg no 0 or 1" m))))
  dispatch)
```
So here cons is a function of x and y as you would expect. The trick is that its return value is a function (internally called dispatch) of one argument that returns x if passed 0 and y if passed 1, otherwise it throws an error.
Now car and cdr can be defined
```(define (car z) (z 0))
(define (cdr z) (z 1))
```
So car takes z as an argument and tries to apply z to 0, while cdr applies it to 1. While car and cdr will accept any function as an argument, it makes sense if z is the cons of something as before.
As an example
`(car (cons 1 2))`
car passes 0 to the dispatch function returned by (cons 1 2),
which then returns 1,
similarly
`(cdr (cons 1 2))`
returns 2, as cdr passes 1 to the dispatch function returned by (cons 1 2)
This is not how the interpreter actually works, just the introduction to the idea of code as data (as code etc) in Lisp. This is the feature that sets it apart from other languages: in particular, code is not parsed into a separate representation, just loaded up as-is since it is already a valid data structure itself, and this allows for the powerful macro systems usually seen in Lisps.
Now for twice, it reminded me of some Set Theory, in particular forming the Natual Numbers in terms of sets.
We define 0 to be the empty set
$0 = \emptyset$.
Then we can define 1 to be
$1 = \{0\} = \{\emptyset\}$
(this has 1 element – we say it has cardinality 1)
Then 2 is
$2 = \{0, 1\} = \{\emptyset, \{\emptyset\}\}$
Notice this has cardinality 2 and is formed as the set containing all the previous numbers.
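Each step above is an instance of the general von Neumann successor rule (stating it explicitly, since only the first cases are shown):

$n + 1 = n \cup \{n\}$

so every natural number is literally the set of all the smaller ones.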
So we have all positive whole numbers. You can then define arithmetic and number theory in terms of these sets.
See this site for a nice introduction to set theory (it also begins by defining ordered pairs)
Loving SICP, got to do some Induction for the first time in ages last night!
Posted by tom under lisp & SICP | 1 Comment »
https://www.cheenta.com/infinite-number-of-charges/
An infinite number of charges, each equal to $q$, are placed along the x-axis at $x=1$, $x=2$, $x=4$, $x=8$, etc. Find the potential and the electric field at the point $x=0$ due to this set of charges.
Discussion:
An infinite number of charges, each equal to $q$, are placed along the x-axis at $x=1$, $x=2$, $x=4$, $x=8$, etc.
Electric potential $$V=\frac{1}{4\pi\epsilon_0}\left(\frac{q}{1}+\frac{q}{2}+\frac{q}{4}+\frac{q}{8}+\cdots\right) =\frac{q}{4\pi\epsilon_0}\left(1+\frac{1}{2}+\frac{1}{2^2}+\frac{1}{2^3}+\cdots\right)$$
The terms in brackets form an infinite geometric progression with first term $a=1$ and ratio $r=1/2$, whose sum is $$S= \frac{a}{1-r}=\frac{1}{1-1/2}=2$$
Hence the potential $$V=\frac{q}{2\pi\epsilon_0}$$
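The problem also asks for the electric field at $x=0$; it follows from the same geometric-series pattern (this part is our completion, not in the original write-up). Since the charges sit at $x_k=2^k$,

$$E=\frac{1}{4\pi\epsilon_0}\left(\frac{q}{1^2}+\frac{q}{2^2}+\frac{q}{4^2}+\cdots\right) =\frac{q}{4\pi\epsilon_0}\cdot\frac{1}{1-1/4} =\frac{q}{3\pi\epsilon_0},$$

directed along the negative x-axis for positive $q$.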
https://darcynorman.net/2015/11/09/notes-porter-et-al.-2016.-a-qualitative-analysis-of-institutional-drivers-and-barriers-to-blended-learning-adoption-in-higher-education./ | # Notes: Porter et al. (2016). A qualitative analysis of institutional drivers and barriers to blended learning adoption in higher education.
Porter, W. W., Graham, C. R., Bodily, R. G., & Sandberg, D. S. (2016). A qualitative analysis of institutional drivers and barriers to blended learning adoption in higher education. The Internet and Higher Education, 28, 17–27. http://doi.org/10.1016/j.iheduc.2015.08.003 Retrieved from http://linkinghub.elsevier.com/retrieve/pii/S1096751615000469
An article from the future! (it's not 2016 here yet, but articles from next year are already showing up. Go go, Gibson!)
Interesting paper, tying technology adoption stuff into professional development and support. This leads directly into our Learning Technologies Coaches program. Good timing.
Basically, more courses are going online or blended (LOTS of courses are getting shifted into blended format). Instructors are loosely described in broad categories: innovators, early adopters, early majority, late majority, and laggards. Ugh. I hate the term laggards for those-who-seem-to-resist.
They follow Graham et al.'s (2013)¹ framework to describe barriers based on institutional strategy, structure, and support.
The researchers did an online survey of instructors at BYU-I, followed up with interviews of a stratified sample of survey respondents.
They found that instructors aren't very trustful of the motivations and demands from Administration, and that they trust their peers because they are “in the trenches.” Man, I hate when people draw on the rhetoric of war when describing how they view teaching. Anyway.
So, instructors like to learn and design with peer instructors. They like one-on-one F2F support while building their blended and online courses, so they can see body language etc. But that seems a bit tone-deaf, when they are talking about building blended and online courses where their students won't have body language cues. Hey. Whatever. Instructors are fun.
Instructors in the study want broadly defined policies, which they can interpret as needed. Guidance from above, without meddling or direct oversight.
“Standardization in terms of definition, and also you can leave it open in terms of how faculty would approach it.”
Makes sense - context-specific implementation of high-level guidance.
They note that infrastructure is a big factor - if the tech isn't reliable, they get stuck. “If a student has a bad experience or difficulty with the technology, it can squelch their interest and excitement for the context of the course.” - We see this all the time. Frustration when network funkiness makes people have to wait or try again or wait and try again or give up and try later. Key bit:
“infrastructure is influential because course work and engagement stop when infrastructure fails during class or when students are completing assigned work.”
The researchers identified a few factors that were described by respondents as things that could help them to be successful in implementing blended learning:
• course load reductions - give me time!
• financial stipends - not a big factor. they want time more than anything.
• tenure and promotion - also not a big deal, if they have the time to do things.
So, give one-on-one mentoring or coaching with peers, give them solid technology platforms, and give them the time to do stuff.
1. Graham, C.R., Woodfield, W., & Harrison, J.B. (2013). A framework for institutional adoption and implementation of blended learning in higher education. The Internet and Higher Education, 18, 4–14.
https://zbmath.org/?q=an:06946444 | ## Banach partial $$^*$$-algebras: an overview.(English)Zbl 1420.46040
Summary: A Banach partial $^*$-algebra is a locally convex partial $^*$-algebra whose total space is a Banach space. A Banach partial $^*$-algebra is said to be of type (B) if it possesses a generating family of multiplier spaces that are also Banach spaces. We describe the basic properties of these objects and display a number of examples, namely, $L^p$-like function spaces and spaces of operators on Hilbert scales or lattices. Finally, we analyze the important cases of Banach quasi $^*$-algebras and $CQ^*$-algebras.
### MSC:

- 46J10 Banach algebras of continuous functions, function algebras
- 47L60 Algebras of unbounded operators; partial algebras of operators
https://mathshistory.st-andrews.ac.uk/Biographies/Weil/quotations/ | # Quotations
### André Weil
Every mathematician worthy of the name has experienced ... the state of lucid exaltation in which one thought succeeds another as if miraculously... this feeling may last for hours at a time, even for days. Once you have experienced it, you are eager to repeat it but unable to do it at will, unless perhaps by dogged work...
The Apprenticeship of a Mathematician.
God exists since mathematics is consistent, and the Devil exists since we cannot prove it.
Quoted in H Eves Mathematical Circles Adieu (Boston 1977).
Rigour is to the mathematician what morality is to men.
First rate mathematicians choose first rate people, but second rate mathematicians choose third rate people.
Quoted in D MacHale, Comic Sections (Dublin 1993)
https://kluedo.ub.uni-kl.de/frontdoor/index/index/docId/1749 | ## Mechanical integrators for constrained dynamical systems in flexible multibody dynamics
• The primary object of this work is the development of a robust, accurate and efficient time integrator for the dynamics of flexible multibody systems. Particularly a unified framework for the computational dynamics of multibody systems consisting of mass points, rigid bodies and flexible beams forming open kinematic chains or closed loop systems is developed. In addition, it aims at the presentation of (i) a focused survey of the Lagrangian and Hamiltonian formalism for dynamics, (ii) five different methods to enforce constraints with their respective relations, and (iii) three alternative ways for the temporal discretisation of the evolution equations. The relations between the different methods for the constraint enforcement in conjunction with one specific energy-momentum conserving temporal discretisation method are proved and their numerical performances are compared by means of theoretical considerations as well as with the help of numerical examples.
Author: Sigrid Leyendecker
urn:nbn:de:hbz:386-kluedo-19675
Paul Steinmann
Doctoral Thesis
English
2006
Technische Universität Kaiserslautern
2006/06/01
2006/07/04
Keywords: computational mechanics; conserving time integration; constrained mechanical systems; flexible multibody dynamics; geometrically exact beams
Fachbereich Maschinenbau und Verfahrenstechnik
6 Technik, Medizin, angewandte Wissenschaften / 62 Ingenieurwissenschaften / 620 Ingenieurwissenschaften und zugeordnete Tätigkeiten
Standard gemäß KLUEDO-Leitlinien vor dem 27.05.2011
http://deltasocietyaustralia.com.au/otu5fvoz/919d4a-energy-levels-of-electrons | # energy levels of electrons
Electrons orbit the atom's nucleus in energy levels, also called shells. According to Bohr's theory, the electrons of an atom revolve around the nucleus on certain orbits (electron shells), and each orbit has a specific energy, expressed as a negative value; in electron volts, the energy of level n of the hydrogen atom is E = -13.6/n². Like books on a bookshelf, you can't have electrons halfway between one level and the next: moving from one level to the next requires a set amount of energy. Don't confuse energy levels with orbitals: an electron's state is described by its energy level, subshell, orbital direction and spin, and subshells are known by the letters s, p, d and f.

Different shells can hold different maximum numbers of electrons. The formula for the maximum number of electrons in level n is 2n²:

- 1st energy level (n = 1): 2 × 1² = 2 electrons
- 2nd energy level (n = 2): 2 × 2² = 8 electrons
- 3rd energy level (n = 3): 2 × 3² = 18 electrons
- 4th energy level (n = 4): 2 × 4² = 32 electrons

The capacities of energy levels one through six are therefore 2, 8, 18, 32, 50 and 72. The outermost shell of an atom, however, cannot accommodate more than 8 electrons, even if it has the capacity for more.

An atom of any element is most stable when it has minimum energy, so electrons fill the lowest vacant energy levels first; a ground-state atom is one in which the total energy of the electrons cannot be lowered by transferring electrons to different orbitals. Sometimes electrons get excited, for example by heat, and leave their shell for a higher one: the absorbed energy equals the energy difference between the two levels, and photons with exactly these amounts of energy are the ones absorbed by a gas. Light is emitted when an electron relaxes from a high energy state to a lower one. Each atom has its own unique set of energy levels, which are difficult to calculate but which depend on the number of protons and electrons in the atom.

Electrons in the highest occupied energy level are called valence electrons, and the energy level of an atom's valence electrons corresponds to its period (horizontal row) of the periodic table: atoms in the first period have electrons in 1 energy level, atoms in the second period in 2 energy levels, and so on. For example, phosphorus (15 electrons) has three occupied energy levels holding 2, 8 and 5 electrons, while oxygen (8 electrons) has two, holding 2 and 6. The attractions between the protons and electrons of atoms can cause an electron to move completely from one atom to the other; when an atom loses or gains an electron it is called an ion, and electrons can be removed entirely by the process called ionization.

Electron configurations are an organized means of documenting the placement of electrons based upon the energy levels and orbital groupings of the periodic table. The configurations of the first nine elements are: H 1s¹, He 1s², Li 1s²2s¹, Be 1s²2s², B 1s²2s²2p¹, C 1s²2s²2p², N 1s²2s²2p³, O 1s²2s²2p⁴, F 1s²2s²2p⁵.

In solids, electrons are confined to certain bands of energy levels. In semiconductors such as silicon, one band, called the valence band, is completely occupied and electrons cannot move; the next band (the conduction band) is completely empty. The energy difference between the top of the valence band and the bottom of the conduction band is called the band gap; in silicon, the band gap is 1.1 eV.
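A quick numeric check of the 2n² rule and the hydrogen level formula (our own illustration, in Python):

```python
def capacity(n):
    """Maximum number of electrons in energy level n."""
    return 2 * n ** 2

def hydrogen_level_ev(n):
    """Energy of hydrogen level n, in electron volts."""
    return -13.6 / n ** 2

print([capacity(n) for n in range(1, 7)])           # [2, 8, 18, 32, 50, 72]
print(hydrogen_level_ev(3) - hydrogen_level_ev(1))  # energy absorbed for n=1 -> n=3, ~12.09 eV
```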
https://mathoverflow.net/questions/256618/digits-of-sums-of-two-integers | # Digits of sums of two integers [closed]
Let $q$ be an integer $\geq 2.$ For a non-negative integer $n$, it is known that there exists a unique sequence of integers $0\leq n_k \leq q-1$ (only finitely many of them nonzero) such that $$n=\sum_{k=0}^{+\infty} n_k q^k.$$ The sum of digits of $n$ in base $q$ is denoted by $S_q(n)$ and defined by $$S_q(n):=\sum_{k=0}^{+\infty} n_k.$$ Kevin G. Hare, Shanta Laishram, and Thomas Stoll proved in [Proposition 2.2, STOLARSKY’S CONJECTURE AND THE SUM OF DIGITS OF POLYNOMIAL VALUES] the following:
The function $S_q$ is subadditive, i.e., for all $a,$ $b \in \mathbb{N}$ we have $$S_q(a+b)\leq S_q(a)+S_q(b).$$ The proof follows the lines of [T. Rivoal, On the bits counting function of real numbers, Section 2]. An even stronger result is true, namely that $$S_q(a+b)=S_q(a)+S_q(b)-(q-1)\cdot r, \qquad (*)$$ where $r$ is the number of "carry" operations needed when adding $a$ and $b$. I read Rivoal's paper, which investigates the case $q=2$ (binary representation of an integer), but I couldn't find the exact value of $r$ in formula (*). Can someone help me with finding the explicit relation between $S_q(a+b)$, $S_q(a)$ and $S_q(b)$?
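For what it is worth, formula (*) is easy to check numerically by counting the carries during schoolbook addition; in the following minimal C++ sketch (the function names are mine, not from the papers) the assertion never fires:

#include <cassert>
#include <cstdlib>

// S_q(n): sum of the digits of n in base q
long long digit_sum(long long n, int q) {
    long long s = 0;
    for (; n > 0; n /= q) s += n % q;
    return s;
}

// r: number of carry operations when adding a and b digit by digit in base q
long long carries(long long a, long long b, int q) {
    long long r = 0, c = 0;
    while (a > 0 || b > 0 || c > 0) {
        c = (a % q + b % q + c) / q;   // carry out of this digit position (0 or 1)
        r += c;
        a /= q; b /= q;
    }
    return r;
}

int main() {
    for (int trial = 0; trial < 100000; ++trial) {
        int q = 2 + std::rand() % 9;
        long long a = std::rand(), b = std::rand();
        // formula (*): S_q(a+b) = S_q(a) + S_q(b) - (q-1)*r
        assert(digit_sum(a + b, q) ==
               digit_sum(a, q) + digit_sum(b, q) - (q - 1) * carries(a, b, q));
    }
}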
Many thanks.
## closed as off-topic by Ilya Bogdanov, David E Speyer, Emil Jeřábek, Stefan Kohl, Steven Landsburg Dec 7 '16 at 14:58
This question appears to be off-topic. The users who voted to close gave these specific reasons:
• "This question does not appear to be about research level mathematics within the scope defined in the help center." – Ilya Bogdanov, David E Speyer, Steven Landsburg
• "MathOverflow is for mathematicians to ask each other questions about their research. See Math.StackExchange to ask general questions in mathematics." – Emil Jeřábek, Stefan Kohl
If this question can be reworded to fit the rules in the help center, please edit the question. | 2019-09-17 15:16:44 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8481082320213318, "perplexity": 565.5074087320017}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-39/segments/1568514573080.8/warc/CC-MAIN-20190917141045-20190917163045-00499.warc.gz"} |
https://gateoverflow.in/questions/digital-logic?start=120 | Web Page
Boolean algebra. Combinational and sequential circuits. Minimization. Number representations and computer arithmetic (fixed and floating point)
$$\small{\overset{{\large{\textbf{Mark Distribution in Previous GATE}}}}{\begin{array}{|c|c|c|c|c|c|c|c|c|c|}\hline \textbf{Year}&\textbf{2019}&\textbf{2018}&\textbf{2017-1}&\textbf{2017-2}&\textbf{2016-1}&\textbf{2016-2}&\textbf{Minimum}&\textbf{Average}&\textbf{Maximum} \\\hline\textbf{1 Mark Count}&4&2&3&2&3&3&2&2.8&4 \\\hline\textbf{2 Marks Count}&2&2&0&4&2&0&0&1.7&4 \\\hline\textbf{Total Marks}&8&6&3&10&7&3&\bf{3}&\bf{6.2}&\bf{10}\\\hline \end{array}}}$$
# Recent questions in Digital Logic
1
Define a Boolean function $F(X_1, X_2, X_3, X_4, X_5, X_6)$ of six variables such that $\\ \begin{array}{llll} F & = & 1, & \text{when three or more input variables are at logic 1} \\ { } & = & 0, & \text{otherwise} \end{array}$ How many essential prime implicants does $F$ have? Justify they are essential.
2
For the asynchronous sequential circuit shown in the figure: Derive the Boolean functions for the outputs of the two SR latches $Y_1$ and $Y_2$. Note that the S input of the second latch is $x_1'y_1'$. Derive the transition table and output map of the circuit.
3
Convert the circuit of the figure to the asynchronous sequential circuit by removing the clock-pulse(CP) and changes the flip-flops to the SR latches. Derive the transition table and output map of the modified circuit.
4
Analyze the T flip-flop shown in the figure. Obtain the transition table and show that the circuit is unstable when both T and CP are equal to 1.
5
Investigate the transition table of the figure and determine all the race conditions whether they are critical or not critical. Also, determine whether there are any cycles.
6
Convert the flow table of the figure into a transition table by assigning the following binary values to the states: a = 00, b = 11, and c = 10. Assign values to the extra fourth state to avoid critical races. Assign output to the don’t care states to avoid momentary false output. Derive the logic diagram of the circuit.
7
An asynchronous sequential circuit has two internal states and one output. The excitation and output functions describing the circuit are as follows: $Y_1 = x_1x_2 + x_1y_2' + x_2'y_1$, $Y_2 = x_2 + x_1y_1'y_2 + x_1'y_1$, $z = x_2 + y_1$. Draw the logic diagram of the circuit. Derive the transition table and the output map. Obtain a flow table for the circuit.
8
An asynchronous circuit is described by the following excitation and output functions: $Y = x_1x_2' + (x_1 + x_2')y$, $z = y$. Draw the logic diagram of the circuit. Derive the transition table and the output map. Obtain a two-state flow table. Describe in words the behavior of the circuit.
9
Derive a transition table for the asynchronous sequential circuit given in the figure. Determine the sequence of the internal states $Y_1Y_2$ for the following sequence of the inputs $x_1x_2$: 00, 10, 11, 01, 11, 10, 00.
10
Explain the difference between synchronous and asynchronous sequential circuits. Define fundamental-mode operation. Explain the difference between stable and unstable states. What is the difference between an internal state and a total state?
11
It is necessary to formulate the Hamming code for four data bits $D_3, D_5, D_6$ and $D_7$ together with three parity bits $P_1, P_2$ and $P_3$ ... to include double-bit error detection in the code. Assume that errors occur in the bits $D_5$ and $P_2$. Show how the error is detected.
12
How many parity check bits must be included with the data word to achieve single-bit error correction and double error correction when data words are as follows: 16 bits 32 bits 48 bits
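For the single-error-correcting case the standard Hamming condition is 2^p >= m + p + 1, with one extra overall-parity bit added for double-error detection. The sketch below is a worked hint in C++ (not part of the original question set):

#include <cstdio>

// Smallest p with 2^p >= m + p + 1 (single-error-correcting Hamming code).
int parity_bits(int m) {
    int p = 1;
    while ((1 << p) < m + p + 1) ++p;
    return p;
}

int main() {
    for (int m : {16, 32, 48})
        std::printf("m=%d data bits -> p=%d (SEC), p=%d (SEC-DED)\n",
                    m, parity_bits(m), parity_bits(m) + 1);
}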
13
A 12-bit Hamming code word containing 8 bits of data and 4 parity bits is read from the memory. What was the original 8-bit data word that was written into the memory if the 12-bit words read out are as follows: 000011101010 101110000110 101111110100
14
Given the 11-bit data word 11001001010, generate the 15-bit Hamming code word.
15
Given the 8-bit data word 01011011, generate the 13-bit composite word for the Hamming code that corrects single-bit errors and detects double-bit errors.
16
An integrated circuit RAM chip has a capacity of 1024 words of 8 bits each ($1K \times 8$). How many address and data lines are there in the chip? How many chips are needed to construct a $16K \times 16$ RAM? How many address and data lines are there ... to construct a $16K \times 16$ memory from the $1K \times 8$ chips? What are the inputs to the decoder and where are its outputs connected?
17
A computer uses RAM chips of $1024 \times 1$ capacity. How many chips are needed, and how should their address lines be connected, to provide a memory capacity of 1024 bytes? How many chips are needed to provide a memory capacity of 16K bytes? Explain in words how the chips are connected.
How many $128 \times 8$ RAM chips are needed to provide a memory capacity of 2048 bytes? How many lines of the address must be used to access 2048 bytes? How many of these lines are connected to the address inputs of all the chips? How many lines must be decoded for the chip select inputs?Specify the size of the decoder?
The following memory units are specified by the number of words times the number of bits per word. How many address lines and Input output data lines are needed in each case given below? $2K \times 16$; $64K \times 8$; $16M \times 32$; $96K \times 12$; | 2021-01-21 14:32:36 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.33969801664352417, "perplexity": 938.0633791609257}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-04/segments/1610703524858.74/warc/CC-MAIN-20210121132407-20210121162407-00355.warc.gz"} |
https://www.gamedev.net/topic/641283-fastest-way-to-stream-video-to-texture/ | /* attempt to deal with prototype, bootstrap, jquery conflicts */ /* for dropdown menus */
\$30
### Image of the Day Submit
IOTD | Top Screenshots
## Fastest way to stream video to Texture
### #1Ender1618 Members
Posted 03 April 2013 - 07:54 PM
I am using OpenGL 4.2 in my project, using glew and SDL.
What is the fastest way to transfer an RGB 24-bit or grey scale 8-bit image (decoded video frame) from system memory to an OpenGL texture? Is using PBOs the best way to accomplish this, even with modern OpenGL? I saw a NVidia sample using PBOs for this, but its quite a few years old.
The video frames are coming in at 30-60 Hz, 640x480.
What is the best way to allocate the gl texture? Should I force power of 2 for the texture (and update a sub rect)?
Use GL_BGRA8 for the internal format, and GL_BGRA for the pixel format (as opposed to GL_RGB for both)?
Should I use BGRA even for grey scale video?
Thanx for any suggestions.
Edited by Ender1618, 03 April 2013 - 07:56 PM.
### #2BornToCode Members
Posted 03 April 2013 - 11:58 PM
Create two threads: one thread pulls and decodes the frames, while the main thread creates the PBOs and updates the texture from them. The texture can then be applied to a screen-aligned quad to be rendered. Use two PBOs: while one is being used for the texture update, fill the second with the next frame, and ping-pong between them.
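A minimal sketch of that ping-pong scheme (the frame size, the BGRA layout and the decodedFrame pointer are assumptions):

GLuint pbo[2];
glGenBuffers(2, pbo);
const size_t frameBytes = 640 * 480 * 4;              // one BGRA frame

for (int i = 0; i < 2; ++i) {
    glBindBuffer(GL_PIXEL_UNPACK_BUFFER, pbo[i]);
    glBufferData(GL_PIXEL_UNPACK_BUFFER, frameBytes, NULL, GL_STREAM_DRAW);
}

int writeIdx = 0;                                     // PBO being filled this frame
// each frame:
glBindBuffer(GL_PIXEL_UNPACK_BUFFER, pbo[writeIdx]);
void *dst = glMapBuffer(GL_PIXEL_UNPACK_BUFFER, GL_WRITE_ONLY);
if (dst) {
    memcpy(dst, decodedFrame, frameBytes);            // decodedFrame: your BGRA data
    glUnmapBuffer(GL_PIXEL_UNPACK_BUFFER);
}
// upload from the other PBO, filled last frame (the data argument is a byte offset)
glBindBuffer(GL_PIXEL_UNPACK_BUFFER, pbo[1 - writeIdx]);
glTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, 640, 480,
                GL_BGRA, GL_UNSIGNED_INT_8_8_8_8_REV, (void *)0);
glBindBuffer(GL_PIXEL_UNPACK_BUFFER, 0);
writeIdx = 1 - writeIdx;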
### #3mhagain Members
Posted 04 April 2013 - 02:33 AM
At 640x480 you can probably do this in realtime without needing a PBO. Even if not, I'd advise that you go about it in the following order:
1) Write the basic version that just updates and displays the texture without anything extra.
2) Add double-buffering to it, using two textures; update texture 0/display texture 1, then swap for the next frame.
Only go to the next step here if the step you're currently on proves too slow for your needs.
GPUs have an annoying tendency to prefer texture data in 1, 2 or 4 byte formats, whereas content deliverers have an annoying tendency to prefer texture data in 3 byte formats so there is no fast way to get 24-bit RGB data into a texture. You should expand (and swap) it to BGRA at some stage in the process, yes, and this should preferably happen during decompression from source.
glTexStorage2D (
GL_TEXTURE_2D,
1,
GL_RGBA8,
640,
480
);
You'll see that I'm using glTexStorage2D here rather than glTexImage2D - at this stage we just want to allocate storage for the texture and we're not yet concerned about what data is actually going into it.
Each time you get new data in, update the texture as follows:
glTexSubImage2D (
GL_TEXTURE_2D,
0,
0,
0,
640,
480,
GL_BGRA,
GL_UNSIGNED_INT_8_8_8_8_REV,
data
);
The key parameters here are the third and second last ones. You can write a small program to verify this yourself, but the basic summary is that it is absolutely essential that you match these with what the OpenGL driver is going to prefer, otherwise you're going to get nasty slowdowns and this will be irrespective of whether you use a PBO or double-buffer. I've benchmarked upload performance increases of over 25x (versus GL_RGB/GL_UNSIGNED_BYTE) on some hardware from these two parameters alone. So get these correct first, then use other methods to make it faster, but only if you need them.
If your data is coming in at 24-bit RGB and if you don't have control over this, then you should expand and swap yourself; don't rely on the driver to do it for you (by e.g. using GL_RGB/GL_UNSIGNED_BYTE). Expand and swap to a pre-allocated (or static) buffer and then glTexSubImage it and it will still be faster.
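Such an expand-and-swap loop is short; a minimal sketch (tightly packed input and an opaque alpha channel are assumptions):

// Expand tightly packed 24-bit RGB to 32-bit BGRA into a preallocated buffer.
void rgb_to_bgra(const unsigned char *rgb, unsigned char *bgra, int numPixels)
{
    for (int i = 0; i < numPixels; ++i)
    {
        bgra[0] = rgb[2];   // B
        bgra[1] = rgb[1];   // G
        bgra[2] = rgb[0];   // R
        bgra[3] = 255;      // A (opaque)
        rgb += 3;
        bgra += 4;
    }
}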
For RGB(A) data, this particular combination of format and type parameters should be the fastest on most systems but may not be the fastest on all. For traditional desktop GL (which I'm assuming you're targetting based on your mention of 4.2) you should be safe enough, but as always, benchmark and find out for yourself.
For greyscale data you should be good enough just using a greyscale format for your texture; if you don't want to create a second texture then you can also expand it, but greyscale formats are still fine and fast - the only tricky cases are around 24-bit RGB data formats.
Finally, and if the video is also displayed at 640x480, don't underestimate the power of glDrawPixels. The same trickiness around 24-bit RGB also applies here, but it may well work just fine for you.
It appears that the gentleman thought C++ was extremely difficult and he was overjoyed that the machine was absorbing it; he understood that good C++ is difficult but the best C++ is well-nigh unintelligible.
### #4Ender1618 Members
Posted 04 April 2013 - 09:41 AM
mhagain, you mention using glTexStorage2D with GL_RGBA8 and glTexSubImage2D with GL_BGRA. What does it mean that these are different?
What is the difference between glTexStorage2D and glTexImage2D? The docs mostly mention things about mipmap generation, and I don't need mipmaps.
I also notice that
glTexSubImage2D(GL_TEXTURE_2D,0,0,0,width,height,GL_BGR,GL_UNSIGNED_INT_8_8_8_8_REV,data);
will fail, so I would have to do:
glTexSubImage2D(GL_TEXTURE_2D,0,0,0,width,height,GL_BGR,GL_UNSIGNED_BYTE,data);
for it to work, so is that to say that I must convert my RGB to BGRA so that I could use GL_UNSIGNED_INT_8_8_8_8_REV? Since there is no GL_UNSIGNED_INT_8_8_8_REV.
Edited by Ender1618, 04 April 2013 - 11:18 AM.
### #5mhagain Members
Posted 04 April 2013 - 01:06 PM
TexImage vs TexStorage
TexStorage will just allocate storage for the texture, and can allocate storage for multiple miplevels in one go (ensuring that things are set up correctly for submips). The miplevels part is not relevant for you here, but using TexStorage is the more correct modern OpenGL way of doing this.
Think of it in terms of malloc, memcpy and free - it's a rough (and not entirely accurate) analogy but should work for the purpose of helping you understand. TexImage needs to check if the texture storage already exists, delete it if so, allocate new storage, then (if the data pointer is non-NULL) copy in the supplied data. TexStorage just needs to allocate storage. Since that's all you need for your initial texture creation, TexStorage is sufficient.
TexStorage vs TexSubImage
The difference here is simple enough - the internal format parameter to TexStorage describes how the texture is represented internally by OpenGL. The format and type parameters to TexSubImage desribe how the data you're feeding it is laid out. And now unfortunately we need to get into a hangover from legacy OpenGL. That internal format parameter - it's not prescriptive. It just means "give me something that I can read data in this format from". OpenGL itself is allowed to give you more colour channels and more bits-per-channel than you ask for; this point is going to become important shortly, so remember it.
RGB vs BGR vs RGBA vs BGRA
First off, I need to re-emphasise this: don't use GL_BGR/GL_UNSIGNED_BYTE - that's going to punt you right onto the slow path and you'll end up implementing double-buffering, PBOs, and still wondering why you're not getting good performance from it. The fact that you've mentioned it shows that you're still thinking along the lines of "saving memory" and avoiding what looks like extra CPU-side work in your own code. This is important - CPU-side work in your own code is not the only CPU-side work you have to deal with; you've also got CPU-side work in the driver, latency, synchronization, format conversion, etc (all in the driver) to worry about and you have no control over those if you get things wrong. Burn the extra memory, do the extra work in your own code, it's a tradeoff that will allow you to get in and out of that TexSubImage call as fast as possible and that's where the real key to performance is here.
Remember that bit I said was important? Here's why. There's no such thing as a 24-bit texture format on the vast majority of hardware. Ask for a 24-bit format and you'll instead get a 32-bit one, with the extra 8 bits either ignored or set to 255 (as appropriate). So you gain nothing by trying to transfer 24-bit data, but you lose a lot because that 24-bit data isn't going to match what's actually being stored internally, and the driver will need to convert it. It's another unfortunate hangover from legacy OpenGL that these options still exist, because they can lead to much confusion and wrong thinking. See here for further discussion of this: http://www.opengl.org/wiki/Common_Mistakes#Texture_upload_and_pixel_reads
So match those parameters and the driver will look at them and say "yes! I can just suck this data straight in without needing to do anything else, hey!, I can even read it in 32-bit chunks too, thanks very much, I'm done, here's control handed back to your program as quickly as possible". Get them wrong and the driver will say "oops, you've given something I don't like, now I need to go allocating extra buffers, rummaging around in the data, converting it to something I do like, and by the way - do I need to read the original texture back into system memory first?" You don't have control over what the driver does when you feed it something it doesn't like, and some drivers can do absolutely awful things. What you do have control over is feeding it something it does like, and because any conversion you need to do is in your own code, you can optimize it to your heart's content.
So yes, convert your RGB source to BGRA; it's a nice fast simple loop that you do have control over (that you can even unwind some). That's what most GPUs/drivers are going to prefer, so give them that and you'll get the fast transfer. 8_8_8_8_REV is optional but will put you on the absolute fastest path with even crappy low-quality Intels
### #6Ender1618 Members
Posted 04 April 2013 - 06:02 PM
So the glTexStorage2D internal format is just a high-level representation of what is going to be in the texture and the depth per channel? The actual order of the R, G, B, A bytes is up to the OpenGL driver.
Guest, the last post of this topic is over 60 days old and at this point you may not reply in this topic. If you wish to continue this conversation start a new topic. | 2017-02-21 07:30:25 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.1886424422264099, "perplexity": 1761.1786234804792}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501170696.61/warc/CC-MAIN-20170219104610-00341-ip-10-171-10-108.ec2.internal.warc.gz"} |
http://piepieninja.github.io/2020/04/29/point-normal-estimation/ | ## Estimation of Point Normals in Point Clouds
The estimation of point normals is vital for accurate 3D meshing of point clouds, as normals give additional information on curvature and enable smoother non-linear mesh interpolation. Thus, a critical step in the computer vision pipeline of the MOCI satellite is to estimate point normals. Though point cloud meshing is not currently implemented int the MOCI computer vision software, it should be considered reasonable future work.
Aside from the point cloud coordinates, the only information needed in the point normal calculation is the camera position $$(C_x, C_y, C_z)$$ which generated each point. The final result will be the same list of input point coordinates along with the computed normal vector of each point. The problem of determining the normal to a point on the surface is approximated by estimating the tangent plane of the point, and then taking the normal vector to the plane. However, there are two valid normals for estimated tangent plane but only one is suitable for reconstruction. The correct orientation of the normal vector cannot be directly inferred, so an additional subroutine is needed to choose the correct normal vector.
Let a given point cloud be referenced as $$PC = \{ p_1 , p_2 , p_3 , ... , p_n \}$$ where a given point is $$p_i = (x_i , y_i , z_i)$$ and for each point $$p_i \in PC$$ we seek to find the correct normal vector $$n_i = (n_x, n_y, n_z)$$. Also note that each point has an associated camera of the form $$C_i = (C_x, C_y, C_z)$$.
First, the $$k$$ nearest neighbors of point $$p_i$$ must be retrieved; let these points be defined as $$Q_{i,k} = \{ q_1 , q_2, q_3, ... , q_k \}$$ where any neighbor $$q_i \in PC$$. Then the centroid of the subset $$Q_{i,k}$$ is calculated with the following equation:
$$m = \frac{1}{k} \sum_{q \in Q} q$$
Next we seek to produce an approximation of a plane by calculating two vectors $$v_1$$ and $$v_2$$ from the given subset of $$k$$ points. First, let $$A$$ be a k x 3 matrix built from the centroid being subtracted from each point in the nearest neighbor subset. To find the desired vectors we must perform a singular value decomposition (SVD), seen in the equation below, and notice that the covariance matrix $$A^T A$$ can be diagonalized so that the eigenvectors of the covariance matrix are the columns of vector V (or the rows of vector $$V^T$$).
$$\begin{split} A &= U \Sigma V^T \\ A^T A &= ( V \Sigma^T U^T ) ( U \Sigma V^T ) = V ( \Sigma^T \Sigma ) V^T \end{split}$$
In general, the best r-rank approximation of an (n x n) matrix, $$r < n$$, is found by diagonalizing the matrix as above, only keeping the first r columns of V (similarly only the first r rows of $$V^T$$), and only the first r diagonal elements of $$\Sigma^T \Sigma$$ (or only first r rows and columns), assuming that the values on the diagonal of $$\Sigma$$ were in descending order. More precisely, for randomly ordered diagonal elements $$(\sigma_i)^2 \in \Sigma^T \Sigma$$ we keep only the maximum r many of them, along with their corresponding eigenvectors in matrix V. The reason for choosing the maximum valued eigenvalues is that it minimizes the amount of information lost in moving to a lower rank approximation matrix. Therefore, to produce the best approximation of a plane in $$\mathbb{R}^{3}$$ we would take the two eigenvectors, $$v_1$$ and $$v_2$$, of the covariance matrix (which are exactly the columns of V), with the highest corresponding eigenvalues. Those two eigenvectors span the plane we are looking for. Thus, the normal vector $$n_i$$ is simply the cross product of these eigenvectors: $$n_i = v_1 \times v_2$$.
The reason for introducing the SVD is because in computing the covariance matrix $$A^TA$$ we may lose some level of precision in the calculation. By simply factoring matrix A into its singular value decomposition and taking the cross product of the first two rows of $$V^T$$, we can avoid this problem.
As previously mentioned, there are two viable normals that could be computed with this method, but only one normal is the desired normal. To solve this issue we could simply compute the vector from the camera position $$C$$ to point $$p_i$$ such that $$(C - p_i) \cdot n_i < 0$$ holds. If this does not hold then the vector can be flipped by changing the signs of its components. However, because there are likely to be many camera locations, say $$C = { C_{i,1} , C_{i,2} , C_{i,3}, ... , C_{i,N} }$$ for all $$N$$ cameras of a given point $$p_i$$, a point’s normal can be considered ambiguous if the following is true:
• There exists a $$\bar{C_1} \in C$$ such that $$(\bar{C_1} - p_i) \cdot n_i < 0$$
• There exists a $$\bar{C_2} \in C$$ such that $$(\bar{C_2} - p_i) \cdot n_i > 0$$
Such points cannot easily be oriented and thus additional computation is needed; fortunately, in most cases there are very few such normals. When these normals are discovered they are added to a queue of unfinished normals while the rest are placed in a list of correct normals. The algorithm iterates through the queue of ambiguous normals and tries to determine the orientation by looking at the neighboring points of $$p_i$$. If the neighboring points of $$p_i$$ have already finished normals, then $$n_i$$ is oriented such that it is consistent with the neighboring normals $$m_i$$ by setting $$n_i \cdot m_i > 0$$ . If the neighboring points do not have already finished normals, then we move $$p_i$$ to the back of the queue, and continue until all normals are finalized.
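The per-point computation described above fits in a few lines. The sketch below assumes the Eigen linear-algebra library (an assumption on my part; the text does not name a backend) and omits the queue-based handling of ambiguous normals:

#include <Eigen/Dense>
#include <vector>

// Per-point normal estimation; orientation follows the text's convention
// that (C - p) . n < 0 must hold for the chosen camera.
Eigen::Vector3d estimate_normal(const Eigen::Vector3d &p,
                                const std::vector<Eigen::Vector3d> &neighbors,
                                const Eigen::Vector3d &camera)
{
    // centroid m = (1/k) * sum over the k nearest neighbors
    Eigen::Vector3d m = Eigen::Vector3d::Zero();
    for (const Eigen::Vector3d &q : neighbors) m += q;
    m /= static_cast<double>(neighbors.size());

    // k x 3 matrix A of centered neighbors
    Eigen::MatrixXd A(neighbors.size(), 3);
    for (std::size_t i = 0; i < neighbors.size(); ++i)
        A.row(i) = (neighbors[i] - m).transpose();

    // SVD of A: the columns of V are the eigenvectors of A^T A,
    // ordered by decreasing singular value
    Eigen::JacobiSVD<Eigen::MatrixXd> svd(A, Eigen::ComputeThinV);
    Eigen::Vector3d v1 = svd.matrixV().col(0);
    Eigen::Vector3d v2 = svd.matrixV().col(1);

    Eigen::Vector3d n = v1.cross(v2).normalized();
    if ((camera - p).dot(n) >= 0.0) n = -n;   // flip to satisfy (C - p) . n < 0
    return n;
}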
This is copied from a section of my thesis. If you found this useful to your research please consider using the following bibtex: | 2022-07-07 15:45:20 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 2, "x-ck12": 0, "texerror": 0, "math_score": 0.957827091217041, "perplexity": 775.7339956150993}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-27/segments/1656104495692.77/warc/CC-MAIN-20220707154329-20220707184329-00440.warc.gz"} |
http://mathhelpforum.com/business-math/180186-now-value-summation-problem.html | # Math Help - Now-value? Summation problem.
1. ## Now-value? Summation problem.
This is the problem from the book:
A winner in a lottery can receive 50,000 kr (Swedish currency) every month for 25 years. The winner wants to know what the prize money is worth today.
a) What monthly interest rate is equivalent to a yearly interest rate of 4%?
b) What is the now-value of the whole prize if we count with a yearly interest rate of 4%?
a) was easy; it's just to take $\sqrt[12]{1.04}\approx 1.0033$, which gives a monthly interest rate of 0.33%.
b) on the other hand was difficult. I did this calculation $\frac{50000(\sqrt[12]{1.04}^{300}-1)}{\sqrt[12]{1.04}-1}\approx 25442406$ to get the now-value.
The correct answer however, according to the answers section, is that the now-value is approximately 9,600,000 kr.
What is a now-value? I must obviously have a gross misunderstanding of the word.
2. Hasn't your teacher defined these terms before giving you questions on them? The definition varies depending on your area and level of study.
One definition is: The present value of some payments is the amount of money that would need to be set aside now to meet the payments in future.
Your formula is for the accumulated value, not the present value. The formula should be:
$50000 \frac{1 - v^{300}}{i}$
where
$i = 1.04^{1/12} - 1$
$v = 1.04^{-1/12}$
I assume that payments are made at the end of each month. I got an answer of around 9,544,000.
If I assume payments are made at the start of each month (and adjust the formula appropriately) my answer is more like 9,575,000.
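As a numerical cross-check, a short C++ sketch with the values straight from the formulas above (the program structure is my own):

#include <cmath>
#include <cstdio>

// Present value of 300 end-of-month payments of 50,000 at 4% yearly interest.
int main() {
    double i = std::pow(1.04, 1.0 / 12.0) - 1.0;    // monthly effective rate
    double v = std::pow(1.04, -1.0 / 12.0);         // monthly discount factor
    double pv = 50000.0 * (1.0 - std::pow(v, 300)) / i;
    std::printf("i = %.6f, PV = %.0f kr\n", i, pv); // prints roughly 9,544,000
}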
3. Yes, I also got the right answer after inverting the monthly interest, but I don't understand why the interest rate should be inverted. And our teacher hasn't gone through the "now-value"-definition for this partcular problem, it's just something I found in my book that I found a bit weird. | 2014-03-12 10:37:39 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 5, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8152222633361816, "perplexity": 685.1111465925536}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-10/segments/1394021719026/warc/CC-MAIN-20140305121519-00063-ip-10-183-142-35.ec2.internal.warc.gz"} |
https://tex.stackexchange.com/questions/286255/newdocumentenvironment-causes-extra-row?noredirect=1 | NewDocumentEnvironment causes extra row [duplicate]
When I use \NewDocumentEnvironment of xparse and place a table inside this environment, I get an extra row at the end. The same code does not cause an extra row with the standard \newenvironment. Is there a solution to this problem? I have tried using \ignorespaces and made the entire thing a single line without spaces, still the same result. I have compiled a MWE below:
\documentclass{article}
\usepackage{xparse}
\NewDocumentEnvironment{test}{}{%
\begin{tabular}{|l|l|l|}\hline%
A & B & C \\\hline%
}{%
\end{tabular}%
}
\newenvironment{testtwo}{%
\begin{tabular}{|l|l|l|}\hline%
A & B & C \\\hline%
}{%
\end{tabular}%
}
\begin{document}
\begin{test}
This & does not & work\\\hline
\end{test}
~\\~\\
\begin{testtwo}
This & does & work\\\hline
\end{testtwo}
\end{document}
Result:
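One possible workaround, suggested here as an assumption rather than taken from this thread, is to grab the environment body with xparse's b argument type, so that the tabular is assembled in one piece and no end-of-environment tokens land inside it:

% grab the body as #1 and place it inside the tabular explicitly
\NewDocumentEnvironment{testthree}{+b}{%
  \begin{tabular}{|l|l|l|}\hline
  A & B & C \\\hline
  #1%
  \end{tabular}%
}{}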
marked as duplicate by Maarten Dhondt, user13907, Romain Picot, AboAmmar, Joseph Wright♦ Jan 6 '16 at 10:11
• Thank I now understand the problem better, I thought about extra space causing new row. But that does not help to solve it though, unless I would change \NewDocumentEnvironment. Is there a way around it? – Cem Kalyoncu Jan 6 '16 at 8:48 | 2019-08-24 06:44:57 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6799629330635071, "perplexity": 1933.0308106202256}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-35/segments/1566027319915.98/warc/CC-MAIN-20190824063359-20190824085359-00404.warc.gz"} |
http://mathhelpforum.com/calculus/115421-series-convergent-divergent.html | # Math Help - series convergent or divergent
1. ## series convergent or divergent
2. Originally Posted by Jessica11
Eventually $\ln(n)\le\sqrt{n}$....so...
Or think about a test initialed IT. | 2015-06-02 08:11:38 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 1, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9700232744216919, "perplexity": 14265.532057057502}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-22/segments/1433195035525.0/warc/CC-MAIN-20150601214355-00088-ip-10-180-206-219.ec2.internal.warc.gz"} |
https://lexique.netmath.ca/en/symmetric-relation/ | # Symmetric Relation
Relation defined in a set E so that, for every ordered pair of elements (x, y) of $$E\times E$$, if x is in relation with y, then y is in relation with x.
• The arrow diagram of a symmetric relation in a set E includes a return arrow every time that there is an arrow going between two elements.
• A relation defined in a set E so that, for every ordered pair (x, y) of E $$\times$$ E, with x ≠ y, (y, x) is not an ordered pair of the relation, is called an antisymmetric relation.
• A relation defined in a set E that is neither symmetric nor antisymmetric is a non-symmetric relation.
• A relation defined in a set E so that, for all pairs of elements {x, y}, either one of the ordered pairs (x, y) or (y, x) belongs to the relation, but never both at the same time, is an asymmetric relation. (A small code check of these properties follows this list.)
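These definitions are easy to test mechanically. The following minimal C++ sketch checks symmetry and antisymmetry; the set {1, …, 6} and the relation "x divides y" are illustrative assumptions matching the examples below:

#include <cstdio>

// The relation "x divides y" on E = {1, ..., 6}.
bool rel(int x, int y) { return y % x == 0; }

int main() {
    bool symmetric = true, antisymmetric = true;
    for (int x = 1; x <= 6; ++x)
        for (int y = 1; y <= 6; ++y) {
            if (rel(x, y) && !rel(y, x)) symmetric = false;              // converse missing
            if (x != y && rel(x, y) && rel(y, x)) antisymmetric = false; // both directions
        }
    std::printf("symmetric: %d, antisymmetric: %d\n", symmetric, antisymmetric);
}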
### Examples
• In a set of lines of a plane, the relation “…is perpendicular to…” is a symmetric relation.
• In a set of numbers, the relation “…divides…” is an antisymmetric relation.
• In a set of numbers, the relation “…is less than…” is an asymmetric relation. | 2019-10-22 09:12:00 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9305433034896851, "perplexity": 490.45093879839527}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-43/segments/1570987813307.73/warc/CC-MAIN-20191022081307-20191022104807-00248.warc.gz"} |
http://www-sop.inria.fr/ariana/DEMOS/demosar/node1.html | suivant: Functional: monter: demosar précédent: demosar
# Position of the problem
Rudin-Osher-Fatemi's model:
(Physica D. 1992)
Definition 1.1. $BV(\Omega)$ is the subspace of functions $u \in L^1(\Omega)$ such that the following quantity is finite:
$$J(u) = \sup\left\{ \int_\Omega u(x)\,\operatorname{div}(\xi(x))\,dx \,:\, \xi \in C_c^1(\Omega;\mathbb{R}^2),\ \|\xi\|_{L^\infty(\Omega)} \le 1 \right\} \qquad (1.1)$$
$BV(\Omega)$ embedded with the norm $\|u\|_{BV} = \|u\|_{L^1(\Omega)} + J(u)$ is a Banach space.
Remark: if $u \in W^{1,1}(\Omega)$, then $J(u) = \int_\Omega |\nabla u(x)|\,dx$.
In the ROF model, one seeks to minimize:
$$\inf_{u \in BV(\Omega)} \left( J(u) + \frac{1}{2\lambda} \|f - u\|_{L^2(\Omega)}^2 \right) \qquad (1.2)$$
Chambolle's model: A. Chambolle has proposed a projection algorithm to minimize the total variation (MIA 2002).
Proposition 1.1. The solution of (1.2) is given by:
$$u = f - P_{\lambda K}(f) \qquad (1.3)$$
where $P_{\lambda K}$ is the orthogonal projection on $\lambda K$, and where $K$ is the closure in $L^2(\Omega)$ of the set:
$$\left\{ \operatorname{div}(\xi) \,:\, \xi \in C_c^1(\Omega;\mathbb{R}^2),\ \|\xi\|_{L^\infty(\Omega)} \le 1 \right\} \qquad (1.4)$$
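In the discrete setting the projection $P_{\lambda K}$ can be computed with Chambolle's fixed-point iteration. The following plain C++ sketch transcribes that scheme for a row-major grayscale image; the iteration count and the step size tau = 1/8 are conventional choices, not taken from this page:

#include <cmath>
#include <vector>

// Chambolle's projection algorithm: u = f - lambda * div p, where p is the
// fixed point of p <- (p + tau*grad(div p - f/lambda)) / (1 + tau*|grad(...)|).
std::vector<double> rof_chambolle(const std::vector<double> &f,
                                  int w, int h, double lambda, int iters = 100)
{
    const double tau = 0.125;   // tau <= 1/8 guarantees convergence
    std::vector<double> px(w * h, 0.0), py(w * h, 0.0), divp(w * h, 0.0);
    auto at = [w](int x, int y) { return y * w + x; };

    // divergence of the dual field p (adjoint of the forward-difference gradient)
    auto divergence = [&]() {
        for (int y = 0; y < h; ++y)
            for (int x = 0; x < w; ++x) {
                double dx = (x == 0) ? px[at(0, y)]
                          : (x == w - 1) ? -px[at(x - 1, y)]
                                         : px[at(x, y)] - px[at(x - 1, y)];
                double dy = (y == 0) ? py[at(x, 0)]
                          : (y == h - 1) ? -py[at(x, y - 1)]
                                         : py[at(x, y)] - py[at(x, y - 1)];
                divp[at(x, y)] = dx + dy;
            }
    };

    for (int it = 0; it < iters; ++it) {
        divergence();
        for (int y = 0; y < h; ++y)
            for (int x = 0; x < w; ++x) {
                int i = at(x, y);
                double s  = divp[i] - f[i] / lambda;
                double gx = (x < w - 1)
                    ? (divp[at(x + 1, y)] - f[at(x + 1, y)] / lambda) - s : 0.0;
                double gy = (y < h - 1)
                    ? (divp[at(x, y + 1)] - f[at(x, y + 1)] / lambda) - s : 0.0;
                double mag = std::sqrt(gx * gx + gy * gy);
                px[i] = (px[i] + tau * gx) / (1.0 + tau * mag);
                py[i] = (py[i] + tau * gy) / (1.0 + tau * mag);
            }
    }

    divergence();
    std::vector<double> u(w * h);
    for (int i = 0; i < w * h; ++i) u[i] = f[i] - lambda * divp[i];
    return u;
}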
Meyer's model :
Y. Meyer (2001) has proposed the following model:
$$\inf_{(u,v) \in BV(\Omega)\times G(\Omega),\; f = u + v} \left( J(u) + \lambda \|v\|_{G(\Omega)} \right) \qquad (1.5)$$
The Banach space $G(\Omega)$ contains signals with strong oscillations, and thus in particular textures and noise.
Definition 1.2. $G(\Omega)$ is the Banach space composed of the distributions $v$ which can be written
$$v = \operatorname{div}(g) = \partial_1 g_1 + \partial_2 g_2 \qquad (1.6)$$
with $g_1$ and $g_2$ in $L^\infty(\Omega)$. On $G(\Omega)$, the following norm is defined:
$$\|v\|_{G(\Omega)} = \inf\left\{ \left\| \sqrt{g_1^2 + g_2^2} \right\|_{L^\infty(\Omega)} \,:\, v = \operatorname{div}(g),\ g = (g_1, g_2) \right\} \qquad (1.7)$$
Example:
textured image: 1 000 000, 9 500, 360; geometric image: 64 600, 9 500, 2000 (note the much smaller last value for the textured image, consistent with oscillating signals having a small $G$-norm).
Remarks:
Lemma 1.1. $J$ and $\|\cdot\|_{G(\Omega)}$ are dual (in the sense of the Legendre-Fenchel duality).
Proposition 1.2. In the discrete case, the space $G$ identifies with the following subspace:
$$\left\{ v \,:\, \sum_{i,j} v_{i,j} = 0 \right\} \qquad (1.8)$$
Jean-Francois.Aujol 2003-06-30 | 2017-10-21 21:29:59 | {"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8076162934303284, "perplexity": 3758.6487152465083}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-43/segments/1508187824899.43/warc/CC-MAIN-20171021205648-20171021225648-00137.warc.gz"} |
https://gmstat.com/category/intermediate/sequence-series/ | # Category: Sequence & Series
## Sequence and Series-2
This post is about an Online Quiz of sequence and series:
A sequence is an ordered set of numbers formed according to some definite rule.
1. The $n$th A.M. between $a$ and $b$ is
2. The arithmetic mean between $2+\sqrt{2}$ and $2-\sqrt{2}$ is
3. A sequence is a function whose domain is
4. Which of the following cannot be the term of the sequence 17, 13, 9, …
5. A sequence in which every term after the first can be obtained by adding a fixed number to the preceding term is called
6. If $S_2, S_3, S_5$ are the sums of $2n, 3n, 5n$ terms of an A.P. then which one is true
7. The symbol used to represent the sequence $a$ is
8. If 5, 8 are two A.M. between $a$ and $b$ then $a$ and $b$ are
9. If the domain of a sequence is finite then the sequence is called
10. The number of terms of the series $-7+(-5)+(-3)+\cdots$ that amount to 65 is
11. If $\frac{1}{a}, \frac{1}{b}$ and $\frac{1}{c}$ are in A.P. then which one is true:
12. If in an A.P. $a_5=13$ and $a_{17}=49$, then $a_{15}=?$
13. Find the number of terms in an A.P. in which $a=3, d=7$, and $a_n=59$
14. For the sequence $1, \frac{3}{2}, \frac{5}{4}, \frac{7}{8}, \cdots$, $a_7=?$
15. The sum of the series $-3+(-1)+(1) +3+5 +\cdots+ a_{16}$ is
16. If $a_{n-2}=3n-11$ then the $n$th term will be
17. The general term $a_n$ of an A.P. is
18. The A.M. between $1-x+x^2$ and $1+x-x^2$ is
19. Sequence is also called
20. If all the members of a sequence are real numbers then the sequence is called
A sequence can be defined as a function whose domain is a subset of the natural numbers. Mathematically, a sequence is denoted by $\{a_n\}$ where $n\in N$.
Let us try an Online Quiz about sequence and series:
Some examples of sequence are:
• $1,2,3,\cdots$
• $2, 4, 6, 8, \cdots$
• $\frac{1}{3}, \frac{1}{5}, \frac{1}{7}, \cdots$
The term $a_n$ is called the general term or $n$th term of a sequence. If all members of a sequence are real numbers, then it is called a real sequence. If the domain of a sequence is a finite set, then the sequence is finite; otherwise it is an infinite sequence. An infinite sequence has no last term.
If the terms of a sequence follow a certain pattern, then it is called a progression (a short code sketch after the list below generates the first terms of each kind):
• Arithmetic Progression (AP)
A sequence $\{a_n\}$ is an Arithmetic Sequence or Arithmetic Progression if the difference $a_n – a_{n-1}$ is the same for all $n \in N$ and $n>1$.
• Geometric Progression (GP)
A sequence $\{a_n\}$ in which $\frac{a_n}{a_{n-1}}$ is the same non-zero number for all $n\in N$ and $n>1$ is called a Geometric Sequence or Geometric Progression.
• Harmonic Progression (HP)
A Harmonic Progression is a sequence of numbers whose reciprocals form an Arithmetic Progression. A general form of Harmonic Progression is $\frac{1}{a_1}, \frac{1}{a_1+d}, \frac{1}{a_1+2d}, \cdots$, where $a_n=\frac{1}{a_1+(n-1)d}$
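A minimal C++ sketch (the values of $a_1$, $d$ and $r$ are arbitrary demo choices) prints the first terms of each kind of progression:

#include <cmath>
#include <cstdio>

// First five terms of an AP, a GP, and the HP of the AP's reciprocals.
int main() {
    double a1 = 1.0, d = 2.0, r = 3.0;
    for (int n = 1; n <= 5; ++n) std::printf("%g ", a1 + (n - 1) * d);          // AP
    std::printf("\n");
    for (int n = 1; n <= 5; ++n) std::printf("%g ", a1 * std::pow(r, n - 1));   // GP
    std::printf("\n");
    for (int n = 1; n <= 5; ++n) std::printf("%g ", 1.0 / (a1 + (n - 1) * d));  // HP
    std::printf("\n");
}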
## Sequence and Series-1
Please go to Sequence and Series-1 to view the test
| 2022-12-05 18:03:42 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9414860606193542, "perplexity": 392.91348873441206}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-49/segments/1669446711042.33/warc/CC-MAIN-20221205164659-20221205194659-00451.warc.gz"}
http://newvillagegirlsacademy.org/math/?page_id=216 | # 1.11 – Performance Task: Problem Solving with Equations
Objectives
• Explore a series of problems involving solving and simplifying mathematical expressions using multiplication and division.
• Employ various problem-solving strategies to arrive at solutions to these problems. | 2017-07-25 12:29:41 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 2, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7737056612968445, "perplexity": 1883.054496465572}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-30/segments/1500549425193.20/warc/CC-MAIN-20170725122451-20170725142451-00710.warc.gz"} |
http://www.finedictionary.com/opera.html | # opera
## Definitions
• WordNet 3.6
• n opera a building where musical dramas are performed
• n Opera a commercial browser
• n opera a drama set to music; consists of singing with orchestral accompaniment and an orchestral overture and interludes
• ***
Webster's Revised Unabridged Dictionary
• Interesting fact: Daytime dramas are called Soap Operas because they were originally used to advertise soap powder. In America in the early days of TV, advertisers would write stories around the use of their soap powder
• Opera A drama, either tragic or comic, of which music forms an essential part; a drama wholly or mostly sung, consisting of recitative, arias, choruses, duets, trios, etc., with orchestral accompaniment, preludes, and interludes, together with appropriate costumes, scenery, and action; a lyric drama.
• Opera The house where operas are exhibited.
• Opera The score of a musical drama, either written or in print; a play set to music.
• ***
Century Dictionary and Cyclopedia
• Interesting fact: Melba toast is named after an Australian opera singer Dame Nellie Melba
n opera A form of extended dramatic composition in which music is an essential and predominant factor; a musical drama, or a drama in music. The opera is one of the chief forms of musical art; on many grounds it is claimed to be the culminating musical form. At least it affords opportunity for the application of nearly every known resource of musical effect. Its historical beginning was doubtless in the musical declamation of the Greeks, especially in connection with their dramatic representations. The idea of a musical drama was perpetuated during the middle ages under the humble guise of mysteries or miracle-plays, in which singing was an accessory. The modern development began in Italy near the close of the sixteenth century, when an attempt was made to revive the ancient melodic declamation, an attempt which led directly to the discovery and establishment of monody and harmony in the place of the medieval counterpoint, of the recitative and the aria as definite methods of composition, and of instrumentation as an independent element in musical works. The modern opera involves the following distinct musical constituents, combined in various ways: recitatives, musical declamations, mainly epic or dramatic in character, with or without extended accompaniment
• n opera The score or words of a musical drama, either printed or in manuscript; a libretto.
• n opera A theater where operas are performed; an opera-house.
• n opera The administration, revenue, and property of an Italian church or parish.
• n opera Specifically, a ballad-opera (see def. 1).
• n opera Plural of opus.
• ***
Chambers's Twentieth Century Dictionary
• Interesting fact: In Chicago, it is illegal to take a french poodle to the opera.
• n Opera op′ėr-a a musical drama: a place where operas are performed
• adj Opera used in or for an opera, as an opera-glass, &c
• ***
## Quotations
• Boy George
“If you have to be in a soap opera try not to get the worst role.”
• Italian Proverb
“Bed is the poor man's opera.”
• Irvin S. Cobb
“You couldn't tell if she was dressed for an opera or an operation.”
• Sir Edward Appleton
“I do not mind what language an opera is sung in so long as it is an language I do not understand.”
• Edward Gardner
“Opera is when a guy gets stabbed in the back and instead of bleeding he sings.”
• W. H. Auden
“No good opera plot can be sensible, for people do not sing when they are feeling sensible.”
## Etymology
Webster's Revised Unabridged Dictionary
It., fr. opera, work, composition, opposed to an improvisation, fr. L. opera, pains, work, fr. opus, operis, work, labor: cf. F. opéra,. See Operate
## Usage
### In literature:
I am Miss Donne; I am studying to be an opera-singer, and I came here for advice.
"Fair Margaret" by Francis Marion Crawford
I must be present at a rehearsal of that opera.
"The Life of Charles Dickens, Vol. I-III, Complete" by John Forster
We have a box at the opera, which, is close by (for nothing), and sit there when we please, as in our own drawing-room.
"The Letters of Charles Dickens" by Charles Dickens
Aunty Van is having an opera party to-morrow night, and she wants you to go.
"Patty's Social Season" by Carolyn Wells
It is a shame for you to sacrifice it just to hear grand opera, Miss Bonner.
"Beatrice Leigh at College" by Julia Augusta Schwartz
The overcoat is an Inverness of black cheviot, lined with satin and without sleeves, and the hat a crush opera.
"The Complete Bachelor" by Walter Germain
Bellini wrote operas; Mozart wrote superoperas.
"The Merry-Go-Round" by Carl Van Vechten
Coon songs are almost as popular with the best of them as grand opera, and more readily appreciated.
"A Pirate of Parts" by Richard Neville
Two years later his first opera, "The Marriage of Camecho," was given at the Berlin Opera.
"A History of the Nineteenth Century, Year by Year" by Edwin Emerson
It seems to matter less, in the case of this opera than of Wagner's other operas, that one should be able to distinguish the motifs.
"The Wagnerian Romances" by Gertrude Hall
From the dinner I went to the opera, from the opera to a ball, on to somebody else's.
"The Smart Set" by Clyde Fitch
Wagner made both the libretti and the music of his operas, while Verdi took his opera stories from other authors.
"Operas Every Child Should Know" by Mary Schell Hoke Bacon
The thing is always opera, and it is always Italy.
"Imaginary Interviews" by W. D. Howells
Theater and Opera Parties.
"Social Life" by Maud C. Cooke
Rather tell me about the latest comic opera.
"Secret Memoirs: The Story of Louise, Crown Princess" by Henry W. Fischer
I was not in the least aware that my first opera was to be a different one from that of most English girls.
"The First Violin" by Jessie Fothergill
With this object he went to Hamburg, where he obtained a place as second violin in the Opera-house.
"Great Men and Famous Women, Vol. 8 (of 8)" by Various
Father has taken a box at the opera for this evening.
"The Automobile Girls at Chicago" by Laura Dent Crane
The report is that the reason why this lady's talent is so much cultivated is that she is engaged to sing at the Opera-house.
"Black Diamonds" by Mór Jókai
It was whispered that he knew the leading opera-singers, even taking supper with them sometimes after the opera.
"Anne" by Constance Fenimore Woolson
***
### In poetry:
Contains no songs for me,—
I want the vibrant breezes,
The anthems of the sea.
"Texas" by William Lawrence Chittenden
The brilliant candle dazed the moth well:
One day she sang to her Papa
The air that MARIE sings with BOTHWELL
In NEIDERMEYER'S opera.
"Little Oliver" by William Schwenck Gilbert
When I was a boy at college,
Filling up with classic knowledge,
Frequently I wondered why
Old Professor Demas Bently
Used to praise so eloquently
"Opera Horatii."
"Lydia Dick" by Eugene Field
See, TINTORETTA to the opera goes!
Haste, or the crowd will not permit our bows ;
In her the glory of the heav'ns we view,
Her eyes are star-like, and her mantle blue.
"Tuesday, St. James's Coffee-House" by Mary Wortley Montagu
A very young flounder, the flattest of flats,
(And they 're none of them thicker than opera hats,)
Was speaking more freely than charity taught
Of a friend and relation that just had been caught.
"Verses For After-Dinner" by Oliver Wendell Holmes
A rainy night in camp! with the blazing logs before us,
Let the wolf howl in the forest and the loon scream on the lake,
Turn them loose, the wild performers of Nature's Opera Chorus
And ask if Civilization can sweeter music make.
"A Rainy Day In Camp" by William Henry Drummond
### In news:
Think you know your Verdi operas.
After 45 years, Marta Becket is preparing to close her show at Amargosa Opera House.
Bela Fleck and the Marcus Roberts Trio at Napa Valley Opera House.
What's cooking in the Napa Valley Opera House Jazz Kitchen this week.
Parable Play Opens Opera Contemporary Series.
More 'Ado' For Opera By Berlioz .
Opera Idaho singers perform at Beside Bardenay Oct 18.
The best of all possible operas.
Les Misérables continues through January 1 at the Winspear Opera House.
Opera Review Opera Theater of St Louis.
The University of Kentucky Opera Theatre is one of the few chosen schools with the rights to put on The Phantom of the Opera in its entirety.
Kentucky Opera opens its 2010-2011 season with two one-act operas: Cavalleria Rusticana and I Pagliacci.
Take Cecilia Bartoli, who sold out Carnegie Hall last week with a concert that more or less duplicated her new Decca recital disc, Opera Proibita ("Forbidden Opera").
Harvey's compositions have been featured at the BBC Proms in London, English National Opera, and the Dutch Opera.
Joseph McClain, left, artistic director of Opera San Miguel, coaches Lorena Flores Ruiz, one of the finalists in the Opera San Miguel contest on March 3 in San Miguel de Allende, Mexico.
***
### In science:
According to the CERN PS and SPS upgrade studies , the CNGS beam intensity could be improved by a factor 1.5, allowing for more sensitive neutrino oscillation searches for the OPERA experiment.
Future neutrino oscillation facilities
Figure 3: Expected sensitivity on θ13 mixing angle (matter effects and CP violation effects not included) for MINOS, OPERA and for the next T2K experiment, compared to the Chooz exclusion plot, from .
Future neutrino oscillation facilities
By scanning only ”wrong sign” muons as those coming from τ decays, the scanning load is expected to be comparable with that in OPERA.
Future neutrino oscillation facilities
Guler et al. [OPERA Collaboration], “OPERA: An appearance experiment to search for νµ → ντ oscillations in the CNGS beam.
Future neutrino oscillation facilities
The main goal of OPERA is to find the ντ appearance by direct detection of the τ lepton from ντ CC interactions.
The Opera Experiment
The OPERA detector, Fig. 3, is made of two identical super-modules, each consisting of a target section with 31 target planes followed by a muon spectrometer.
The Opera Experiment
The first subdetector is an anticoincidence wall to better separate muon events coming from interactions in OPERA and in the material before.
The Opera Experiment
The pro ject is supported for 6 years, and the fund is being used for building the T2K and Opera experiments, the J-PARC kaon experiment, B physics at CDF, and Belle.
Round Table Discussion at the Final Session of FPCP 2008: The Future of Flavor Physics and CP
[Jab02] O. Jaba – Gestiunea producţiei şi operaţiilor (Production and Operations Management).
Informatics Issues Used in the Production Dashboard
The OPERA [57,58] neutrino oscillation experiment has been designed to prove the appearance of ντ in a nearly pure νµ beam produced at CERN and detected in the underground Gran Sasso Laboratory, 730 km away from the source.
Experimental Efforts on Very High-Energy Cosmic Rays and their Interactions - Conference Summary
Consistent results were obtained by another deeper magnetic spectrometer, OPERA, that intercepts the CNGS neutrino beam from CERN at the national laboratory of Gran Sasso.
Rapporteur Summary of Sessions HE 2.2-2.4 and OG 2.5-2.7
While OPERA is dedicated to ντ appearance discovery, at this conference they proved their cosmic ray capability by measuring the muon charge ratio for single and multiple muons.
Rapporteur Summary of Sessions HE 2.2-2.4 and OG 2.5-2.7
The telescope itself is also heavy and was operated in wind speeds up to 60 km/h. This is an important feature, as seeing was shown to have a dependence on wind speed, as shown in [3].
Lessons learned from the TMT site testing campaign
Along with technology scaling, the increase in the operating frequency and the increase in the functional density of today's digital designs have led to new challenges for designers and test engineers.
Power Management during Scan Based Sequential Circuit Testing
The corrections 5–13 have also been made by Alexander Liapounoff for the re-edition in 1920 of the article in volume XVIII of the series 1 of the Opera Omnia.
An easy method for finding the integral of the formula $\int (x^{n+p} - 2 x^n\cos\zeta + x^{n-p})/(x^{2n} - 2 x^n\cos\theta + 1) dx/x$ when the upper limit of integration is $x=1$ or $x=\infty$
*** | 2019-09-22 13:00:23 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.4235597550868988, "perplexity": 14900.320018100687}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-39/segments/1568514575513.97/warc/CC-MAIN-20190922114839-20190922140839-00106.warc.gz"} |
http://jeffteza.com/2013/12/ | # Apple's Down Market Strategy
Apple is way out-thinking Wall Street as they move down market. "Talking-heads" at Bloomberg, NBC, and securities analysts have argued they need cheaper phones to stem the market share growth of Android. Apple has kept its cool and devised the best of all compromising strategies... BOTH.
Both a) more affordable phones and b) maintaining revenue and margins for the premium product/ecosystem. They did this by taking a strategy lesson from BMW and Mercedes. Enter lower price markets by selling used phones at Android pricing, and replace the high-end with enhanced functionality (i.e. AC7). Would you rather have a used BMW 3 series or a Ford Focus?
Steve Jobs long ago recognized how to use Moore's law to fill out product portfolios. Rather than designing high, medium and low end products, he would ride Moore's law cost reductions to sell yesterday's functionality at lower prices. The Apple software and manufacturing ecosystem rides along for free. Once customer loyalty is captured, it is easier to up-sell the high end. Engineered cost reductions can focus on the products with the most bang for the buck.
Now Apple's marketing strategists have convinced the organization to recycle 4, 4s and 5 phones by providing trade-ins. It extracts the residual value of Apple phones into Apple's pocket, helps the high- and low-end customer, and is ecological to boot.
This is an example of a key idea in good strategy. Good strategy increases efficiency somewhere in the value chain. In Apple's case it is extracting value from the oligopolistic carrier distribution channels by leveraging its stores. Retail became a necessary evil when telcos had to sell phones to people (mobile) instead of buildings (landline). Apple uses retail to build customer relationships (genius bars), not a once-every-two-years visit to renew a contract.
Apple knows these machines are "pocket computers" first and phones second 🙂
# Asymptotic Thinking
What if... a single machine could provide all of the goods and services that a country's economy currently produces? Would human productivity (GDP per labor hour) be infinite? Who should own the machine? Google? The government? Goldman Sachs? Would the end state look the same if we built such a machine over 100, 50, 10 or 2 years?
This is not rambling; this is "asymptotic thinking," and I believe it is a prerequisite to good strategic thinking. In calculus it is
$\lim_{p \to \infty} \mathrm{GDP}(p)$
where $p$ = human productivity and GDP = Gross Domestic Product.
This thinking is important because we approach limits incrementally from a DIRECTION, and in strategy, as in calculus, the direction matters. More specifically, there can be discontinuities along the path, much like crossing a chasm in product marketing.
# How many strategies are there?
Twenty? As many as there are companies?
At the highest level of competitive differentiation there are 3 (but it is still hard to get it right).
Technology changes... strategy doesn't. Change happens faster... strategy doesn't.
THE WORLD'S MOST FAMOUS BUSINESS-SCHOOL PROFESSOR IS FED UP WITH CEOS WHO CLAIM THAT THE WORLD CHANGES TOO FAST FOR THEIR COMPANIES TO HAVE A LONG-TERM STRATEGY. IF YOU WANT TO MAKE A DIFFERENCE AS A LEADER, YOU'VE GOT TO MAKE TIME FOR STRATEGY.
http://www.fastcompany.com/42485/michael-porters-big-ideas
# "Plan" vs "Strategic Plan"
What's the difference between a plan and a strategic plan? Most companies, departments and people have a plan. So what's unique about adding the word "Strategic". It is not just an adjective, added to appear more sophisticated or smarter. It should change the content and mold objectives as well as articulate action steps.
Here's why:
1) Critical Environmental variables WILL change over the "planning horizon"
2) Actual outcomes will MOST LIKELY be different (sometimes a lot, sometimes a little)
3) It should FORBID certain resource allocations to enhance the CONCENTRATION of them
4) Plans are NEVER FULLY ACHIEVED but assume a next interval of time sequence
A "Plan" can assume the critical environment it is operating in will remain relatively stable over the "planning horizon". A "Strategic Plan" cannot and should not. Liddell Hart in his military strategy classic has a chapter on "The Concentrated Essence of Strategy and Tactics". Out of 8 items he includes "Take a line of operation which offers alternative objectives" which is rarely understood among business strategic planners.
http://bootmath.com/pythagoras-theorem-for-l_p-spaces.html | # pythagoras theorem for $L_p$ spaces
Let’s consider $L_2(\mathbb{R}^n)$. Let $Y$ be a non empty closed subspace of $L_2(\mathbb{R}^n)$.
Let $x\notin Y$. Let $y^*$ be the best approximation of $x$ on $Y$, i.e., $\|x-y^*\|_2=\inf_{y\in Y}\|x-y\|_2$.
We know then that, $x-y^*$ would be orthogonal to $Y$ and hence from parallelogram law, one can deduce the pythagoras theorem: $$\|x-y\|_2^2=\|x-y^*\|_2^2+\|y^*-y\|_2^2 \text{ for } y\in Y$$
I’m wondering whether the same kind of result would be true for $L_p(\mathbb{R}^n)$, $p\ge 1$, $p\neq 2$ also, i.e., whether $$\|x-y\|_p^p=\|x-y^*\|_p^p+\|y^*-y\|_p^p$$
I think it's not possible to deduce this from the parallelogram law, as we have only an inequality in the parallelogram law in $L_p(\mathbb{R}^n)$, and there's no notion of orthogonality in $L_p(\mathbb{R}^n)$ for $p\neq 2$. But I think there may be some other way to get the result.
At least mentioning some reference is appreciated.
#### Solutions Collecting From Web of "pythagoras theorem for $L_p$ spaces"
The statement is false.
First note that $\mathbb{R}^n$ with the $p$-norm embeds in $\mathcal{L}^p(\mathbb{R}^n)$: just map the unit vectors to indicator functions of any $n$ disjoint sets of unit mass. Also, finite-dimensional subspaces are always closed. Thus a necessary condition for the statement to hold is that it holds for subspaces $Y$ of $\mathbb{R}^n$ with the $p$-norm.
Take $Y=\{(x,x),x\in\mathbb{R}\}\subset\mathbb{R}^2$, $x=(0,2)$, and $1<p<\infty$. For any $y\in Y$ define $\hat{y} = (2,2) - y$. Then $\lVert x-\hat{y}\rVert_p = \lVert x-y\rVert_p$. Thus $\frac{y+\hat{y}}{2}=(1,1)$ is the average of two points on the boundary of the $p$-ball of radius $\lVert x-y\rVert_p>0$ centered at $x$. All nontrivial $p$-balls are strictly convex, so $(1,1)$ is strictly closer to $x$ than $y$ is unless $y = \hat{y} = (1,1)$. Therefore $y^* = (1,1)$. At $y=0\in Y$ the equation you have written reduces to $\frac{4}{2^p}=1$, so it can only hold for $p=2$.
This example is somewhat problematic at $p=1$ because $p$-balls are not strictly convex and there is not a unique minimizer $y^*$. However, changing to $Y=\{(x,2x)\vert x\in\mathbb{R}\}$ gives a unique minimizer and the desired equation again fails to hold. | 2018-06-18 15:40:49 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9818926453590393, "perplexity": 113.79186877537607}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-26/segments/1529267860570.57/warc/CC-MAIN-20180618144750-20180618164750-00547.warc.gz"} |
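As a quick numerical companion to this answer (my addition; it just re-checks the counterexample's arithmetic):

```python
import numpy as np

# Counterexample from above: x = (0, 2), Y = {(t, t)}, minimizer y* = (1, 1),
# tested at y = (0, 0). The claimed identity ||x-y||_p^p = ||x-y*||_p^p + ||y*-y||_p^p
# reduces to 2^p = 4, which holds only for p = 2.
x, ystar, y = np.array([0.0, 2.0]), np.array([1.0, 1.0]), np.array([0.0, 0.0])
for p in (1.5, 2.0, 3.0):
    lhs = np.sum(np.abs(x - y) ** p)
    rhs = np.sum(np.abs(x - ystar) ** p) + np.sum(np.abs(ystar - y) ** p)
    print(f"p = {p}: lhs = {lhs:.4f}, rhs = {rhs:.4f}")
```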
https://www.grassmannian.info/E8/3 | # Grassmannian.info
A periodic table of (generalised) Grassmannians.
## Generalised Grassmannian of type E8/P3
Basic information
dimension
98
index
13
Euler characteristic
69120
Betti numbers
$\mathrm{b}_{ 0 } = 1$, $\mathrm{b}_{ 2 } = 1$, $\mathrm{b}_{ 4 } = 2$, $\mathrm{b}_{ 6 } = 3$, $\mathrm{b}_{ 8 } = 5$, $\mathrm{b}_{ 10 } = 7$, $\mathrm{b}_{ 12 } = 11$, $\mathrm{b}_{ 14 } = 15$, $\mathrm{b}_{ 16 } = 20$, $\mathrm{b}_{ 18 } = 27$, $\mathrm{b}_{ 20 } = 36$, $\mathrm{b}_{ 22 } = 46$, $\mathrm{b}_{ 24 } = 59$, $\mathrm{b}_{ 26 } = 74$, $\mathrm{b}_{ 28 } = 91$, $\mathrm{b}_{ 30 } = 112$, $\mathrm{b}_{ 32 } = 136$, $\mathrm{b}_{ 34 } = 163$, $\mathrm{b}_{ 36 } = 193$, $\mathrm{b}_{ 38 } = 228$, $\mathrm{b}_{ 40 } = 265$, $\mathrm{b}_{ 42 } = 308$, $\mathrm{b}_{ 44 } = 354$, $\mathrm{b}_{ 46 } = 404$, $\mathrm{b}_{ 48 } = 456$, $\mathrm{b}_{ 50 } = 515$, $\mathrm{b}_{ 52 } = 575$, $\mathrm{b}_{ 54 } = 640$, $\mathrm{b}_{ 56 } = 707$, $\mathrm{b}_{ 58 } = 777$, $\mathrm{b}_{ 60 } = 847$, $\mathrm{b}_{ 62 } = 922$, $\mathrm{b}_{ 64 } = 997$, $\mathrm{b}_{ 66 } = 1072$, $\mathrm{b}_{ 68 } = 1147$, $\mathrm{b}_{ 70 } = 1222$, $\mathrm{b}_{ 72 } = 1294$, $\mathrm{b}_{ 74 } = 1366$, $\mathrm{b}_{ 76 } = 1436$, $\mathrm{b}_{ 78 } = 1500$, $\mathrm{b}_{ 80 } = 1561$, $\mathrm{b}_{ 82 } = 1618$, $\mathrm{b}_{ 84 } = 1670$, $\mathrm{b}_{ 86 } = 1715$, $\mathrm{b}_{ 88 } = 1757$, $\mathrm{b}_{ 90 } = 1788$, $\mathrm{b}_{ 92 } = 1814$, $\mathrm{b}_{ 94 } = 1833$, $\mathrm{b}_{ 96 } = 1846$, $\mathrm{b}_{ 98 } = 1848$, $\mathrm{b}_{ 100 } = 1846$, $\mathrm{b}_{ 102 } = 1833$, $\mathrm{b}_{ 104 } = 1814$, $\mathrm{b}_{ 106 } = 1788$, $\mathrm{b}_{ 108 } = 1757$, $\mathrm{b}_{ 110 } = 1715$, $\mathrm{b}_{ 112 } = 1670$, $\mathrm{b}_{ 114 } = 1618$, $\mathrm{b}_{ 116 } = 1561$, $\mathrm{b}_{ 118 } = 1500$, $\mathrm{b}_{ 120 } = 1436$, $\mathrm{b}_{ 122 } = 1366$, $\mathrm{b}_{ 124 } = 1294$, $\mathrm{b}_{ 126 } = 1222$, $\mathrm{b}_{ 128 } = 1147$, $\mathrm{b}_{ 130 } = 1072$, $\mathrm{b}_{ 132 } = 997$, $\mathrm{b}_{ 134 } = 922$, $\mathrm{b}_{ 136 } = 847$, $\mathrm{b}_{ 138 } = 777$, $\mathrm{b}_{ 140 } = 707$, $\mathrm{b}_{ 142 } = 640$, $\mathrm{b}_{ 144 } = 575$, $\mathrm{b}_{ 146 } = 515$, $\mathrm{b}_{ 148 } = 456$, $\mathrm{b}_{ 150 } = 404$, $\mathrm{b}_{ 152 } = 354$, $\mathrm{b}_{ 154 } = 308$, $\mathrm{b}_{ 156 } = 265$, $\mathrm{b}_{ 158 } = 228$, $\mathrm{b}_{ 160 } = 193$, $\mathrm{b}_{ 162 } = 163$, $\mathrm{b}_{ 164 } = 136$, $\mathrm{b}_{ 166 } = 112$, $\mathrm{b}_{ 168 } = 91$, $\mathrm{b}_{ 170 } = 74$, $\mathrm{b}_{ 172 } = 59$, $\mathrm{b}_{ 174 } = 46$, $\mathrm{b}_{ 176 } = 36$, $\mathrm{b}_{ 178 } = 27$, $\mathrm{b}_{ 180 } = 20$, $\mathrm{b}_{ 182 } = 15$, $\mathrm{b}_{ 184 } = 11$, $\mathrm{b}_{ 186 } = 7$, $\mathrm{b}_{ 188 } = 5$, $\mathrm{b}_{ 190 } = 3$, $\mathrm{b}_{ 192 } = 2$, $\mathrm{b}_{ 194 } = 1$, $\mathrm{b}_{ 196 } = 1$
$\mathrm{Aut}^0(\mathrm{E}_{8}/\mathrm{P}_{3})$
adjoint group of type $\mathrm{E}_{ 8 }$
$\pi_0\mathrm{Aut}(\mathrm{E}_{8}/\mathrm{P}_{3})$
$1$
$\dim\mathrm{Aut}^0(\mathrm{E}_{8}/\mathrm{P}_{3})$
248
Projective geometry
minimal embedding
$\mathrm{E}_{8}/\mathrm{P}_{3}\hookrightarrow\mathbb{P}^{ 6695999 }$
degree
50977565117072727424953142274814106015982560278817610815084161391134769152000
Hilbert series
1, 6696000, 3754721200320, 381685932161088750, 10736931073672203345000, 109749414417460376126568000, 492857856408576376994810625000, 1117446409714022991737184491883000, 1421471992649065400418340532083920000, 1101475531614828226176100972502384640000, 555117888165451513731968763477876112097280, 191881555354969234862289927082276765783824000, 47522345316896591579201674566318193209956827752, 8745758901567026733656095134195705920608246695000, 1233284529098589196497616200440126735909933655840000, 136785542814570908676045353258124057098191018434066400, 12203002848388693229611643211411984515345851688093062500, 892826408680706909922092309733699304691677168692615000000, 54485283872355489956051960674271359127333503517578125000000, 2814722355713562257611652433995186567978593302743865966796875, ...
Exceptional collections
No full exceptional collection is known for $\mathbf{D}^{\mathrm{b}}(\mathrm{E}_{8}/\mathrm{P}_{3})$. Will you be the first to construct one? Let us know if you do!
Quantum cohomology
The small quantum cohomology is not generically semisimple.
The big quantum cohomology is not known yet to be generically semisimple.
Homological projective duality | 2021-09-22 02:00:11 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 1, "x-ck12": 0, "texerror": 0, "math_score": 0.9996311664581299, "perplexity": 205.96024791088385}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-39/segments/1631780057303.94/warc/CC-MAIN-20210922011746-20210922041746-00317.warc.gz"} |
https://blender.stackexchange.com/questions/187963/open-blend-file-like-txt-file | # Open blend file like txt file
Easy question with probably no easy answer: How can I open a blend file just like e.g. a txt or a python file?
I'm not simply interested in the objects of the scene (like here), but in the full, let's say, architecture of the file. I had a look at blender-aid and the DNA Exporter, but I couldn't figure out how to use them, or rather I don't think they are what I'm looking for.
Please be patient, I'm completely new to Blender.
What kind of data exactly are you looking for? Let's assume you are looking for the shape of the objects inside a blend file. Then you could write a Python script that goes through all objects in bpy.data.objects, say as ob. You could then find the geometry the object is composed of by using ob.data, which for a mesh object will be an entry from bpy.data.meshes.
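A minimal sketch along those lines (my addition; run it from Blender's scripting console, and note the mesh-only branch is an assumption about what you want to inspect):

```python
import bpy

# Walk every object in the opened .blend file and report its datablock.
for ob in bpy.data.objects:
    print(ob.name, ob.type)
    if ob.type == 'MESH':            # ob.data is then a Mesh from bpy.data.meshes
        mesh = ob.data
        print("  verts:", len(mesh.vertices), "polys:", len(mesh.polygons))
```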
https://www.physicsforums.com/threads/maxwell-boltzman-distribution.708206/ | # Maxwell - Boltzman distribution
1. ### mattT1227
I made a spreadsheet to plot the 3d speed distribution from the MB probability function. It matches the peak and fall-off of published graphs. I then tried to integrate it by summing over interval widths times probability. I thought the area should be 1. My result is around 5 or 6. I've tried really small intervals, around 0.25 m/s; that didn't help. Must be I don't understand what the area under the probability curve represents.
2. ### SteamKing
4. ### mattT1227
Thanks all. Problem solved. Area under curve is 0.98 by numerical integration, close enough. | 2015-11-27 02:49:31 | {"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8800392150878906, "perplexity": 1270.974290355093}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-48/segments/1448398447906.82/warc/CC-MAIN-20151124205407-00006-ip-10-71-132-137.ec2.internal.warc.gz"} |
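For reference, a minimal numerical check of the normalization (my sketch, not from the thread; nitrogen at 300 K is an assumed example). Summing probability times interval width over a fine speed grid, as in the original attempt, should give an area close to 1:

```python
import numpy as np

k = 1.380649e-23        # Boltzmann constant (J/K)
m = 28 * 1.66054e-27    # mass of one N2 molecule (kg)
T = 300.0               # temperature (K)

dv = 0.25                                 # interval width (m/s)
v = np.arange(0.0, 3000.0, dv)            # tail beyond 3000 m/s is negligible for N2 at 300 K
f = 4*np.pi * (m / (2*np.pi*k*T))**1.5 * v**2 * np.exp(-m * v**2 / (2*k*T))
print(np.sum(f) * dv)                     # Riemann sum of the 3D speed pdf: ~1.0
```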
http://math.stackexchange.com/questions/329419/local-truncation-error-and-convergence | # Local truncation error and convergence
I am trying to find the local truncation error and the order of convergence of the finite difference scheme $$\frac{3U^m_n -4U^{m-1}_n + U^{m-2}_n}{2 \Delta t} - \frac{a}{ h^2} \lbrace U^m_{n+1} -2 U^m_n + U^m_{n-1} \rbrace = f(x_n, t_{m})$$ Using Taylor expansion I found the truncation error to be $$\tau(x,t)= f_{t}(x,t) - \frac{\Delta t^2}{3}f_{ttt}(x,t) - \frac{a}{2} f_{xx}(x,t) - \frac{h^2}{24}f_{xxxx}(x,t) + O(\Delta t^3 + h^4)$$ My first question is: Is this the correct way to display the truncation error?
My second question is: Does the error converge with order 3 in time and 4 in space since I have $O(\Delta t^3 + h^4)$?
I assume that you study the equation $v_t-av_{xx}=f(t,x)$ (at least your scheme suggests it) with a solution $u(t,x)$.
You define a discrete function $g(m,n) = u(m\Delta t,n\Delta x)$ and plug it into your scheme. After a Taylor expansion about a well-chosen point (hint: $(m\Delta t,n\Delta x)$), you obtain
$$u_t(m\Delta t,n\Delta x)+\mathcal O(\Delta t^2) - a(u_{xx}(m\Delta t,n\Delta x)+\mathcal O(\Delta x^2)).$$ By taking into account that the differential equation is satisfied, we arrive at a truncation error $$\mathcal O(\Delta t^2+\Delta x^2).$$
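A quick symbolic check of these orders (my addition; it expands each difference quotient of an undefined function with sympy):

```python
import sympy as sp

t, x, dt, dx = sp.symbols('t x dt dx', positive=True)
u = sp.Function('u')

# BDF2 time difference: second order, leading error -(dt^2/3) u_ttt
bdf2 = (3*u(t) - 4*u(t - dt) + u(t - 2*dt)) / (2*dt)
print(sp.series(bdf2, dt, 0, 3))

# Centered space difference: second order, leading error (dx^2/12) u_xxxx
centered = (u(x + dx) - 2*u(x) + u(x - dx)) / dx**2
print(sp.series(centered, dx, 0, 3))
```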
https://www.esaral.com/continuity-jee-advanced-previous-year-questions-with-solutions/ | Continuity – JEE Advanced Previous Year Questions with Solutions
JEE Advanced Previous Year Questions of Math with Solutions are available at eSaral. Practicing JEE Advanced Previous Year Papers Questions of mathematics will help the JEE aspirants in recognizing the question pattern as well as help in analyzing weak & strong areas.
Q. For every integer $n$, let $a_{n}$ and $b_{n}$ be real numbers. Let the function $f: \mathbb{R} \rightarrow \mathbb{R}$ be given by $f(x)=\begin{cases}a_{n}+\sin \pi x, & \text{for } x \in[2n, 2n+1] \\ b_{n}+\cos \pi x, & \text{for } x \in(2n-1,2n)\end{cases}$ for all integers $n$. If $f$ is continuous, then which of the following hold(s) for all $n$? (A) $a_{n-1}-b_{n-1}=0$ (B) $a_{n}-b_{n}=1$ (C) $a_{n}-b_{n+1}=1$ (D) $a_{n-1}-b_{n}=-1$ [JEE 2012, 4M]
Sol. (B,D)
Q. For every pair of continuous functions $f, g:[0,1] \rightarrow \mathbb{R}$ such that $\max \{f(x): x \in[0,1]\}=\max \{g(x): x \in[0,1]\}$, the correct statement(s) is (are): (A) $(f(c))^{2}+3 f(c)=(g(c))^{2}+3 g(c)$ for some $c \in[0,1]$ (B) $(f(c))^{2}+f(c)=(g(c))^{2}+3 g(c)$ for some $c \in[0,1]$ (C) $(f(c))^{2}+3 f(c)=(g(c))^{2}+g(c)$ for some $c \in[0,1]$ (D) $(f(c))^{2}=(g(c))^{2}$ for some $c \in[0,1]$ [JEE(Advanced)-2014, 3]
Sol. (A,D) Let $f, g:[0,1] \rightarrow \mathbb{R}$ attain the common maximum value $M$. If they attain it at the same point $p\in[0,1]$, then $f(p)=g(p)$. Otherwise let $f(a)=M$ and $g(b)=M$ with $a\neq b$, so that $f(a)-g(a)>0$ and $f(b)-g(b)<0$; since $f-g$ is continuous, the intermediate value theorem gives $f(c)-g(c)=0$ for some $c\in[0,1]$. So in all cases $f(c)=g(c)$ for some $c \in[0,1]$. $\ldots(1)$ Option (A) $\Rightarrow f^{2}(c)-g^{2}(c)+3(f(c)-g(c))=0$, which is true from (1). Option (D) $\Rightarrow f^{2}(c)-g^{2}(c)=0$, which is true from (1). Now, if we take $f(x)=1$ and $g(x)=1$ for all $x\in[0,1]$, options (B) and (C) do not hold. Hence options (A) and (D) are correct.
Q. Let $[x]$ be the greatest integer less than or equal to $x$. Then, at which of the following point(s) is the function $f(x)=x \cos (\pi(x+[x]))$ discontinuous? (A) x = -1 (B) x = 0 (C) x = 2 (D) x = 1 [JEE(Advanced)-2017, 4]
Sol. (A,C,D) $f(x)=x \cos (\pi x+[x] \pi) \Rightarrow f(x)=(-1)^{[x]}\,x \cos \pi x$. The sign factor $(-1)^{[x]}$ flips at every integer, so $f$ is discontinuous at all integers except zero, where the factor $x \to 0$ forces both one-sided limits to be $0$.
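A quick numerical check of this solution (my addition):

```python
import numpy as np

f = lambda x: x * np.cos(np.pi * (x + np.floor(x)))
for n in (-1, 0, 1, 2):
    left, right = f(n - 1e-9), f(n + 1e-9)   # one-sided values near each integer
    print(n, round(float(left), 6), round(float(right), 6))
# jumps of size 2|n| at x = -1, 1, 2; both sides agree (~0) at x = 0
```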
http://www.cs.cmu.edu/~yandongl/svm.html | ### Support Vector Machine, Kernel, Etc.
Formulate the primal
• Start with a linear method: $h(x) = w^Tx+b$, and classify via $\mathrm{sign}(w^Tx+b)$.
• We try to solve the following optimization objective:
$min_{\gamma,w,b} \frac{1}{2} ||w||^2$
s.t. $y^{(i)}(w^Tx^{(i)} + b) \ge 1, i = 1,...,m$
• Intuition: maximize the margin (smallest distance between the decision boundary and the samples). If we don't use kernels, then the problem is solved. But in order to use kernels, we need to convert it to its dual form.
• Feel free to read more on VC Dimension and statistical learning theory.
Lagrangian multipliers
• if we have the following primal optimization problem:
$min_w f(w)$
s.t. $g_i(w) \le 0, i = 1,...,k$
$h_i(w) = 0, i = 1,...,l.$
• construct the generalized Lagrangian $L(w,\alpha,\beta) = f(w) + \sum_1^k\alpha_ig_i(w) + \sum_1^l\beta_ih_i(w)$ and define:
$\theta_p(w) = \max_{\alpha,\beta:\alpha_i\ge0} L(w,\alpha,\beta)$
$\theta_p(w)$ goes to infinity if any constraint is violated. Therefore we want:
$\min_w\theta_p(w) = \min_w \max_{\alpha,\beta:\alpha_i\ge0}L(w,\alpha,\beta)$
Dual of Max-margin
• constructing the Lagrangian for the primal we get
$L(w,b,\alpha) = \frac{1}{2}||w||^2 - \sum_1^m\alpha_i[y^{(i)}(w^Tx^{(i)} + b)-1]$
take the derivatives with respect to $w$ and $b$, set them to zero, and plug the results back in; we get:
$L(w,b,\alpha) = \sum_i^m\alpha_i - \frac{1}{2}\sum_{i,j=1}^my^{(i)}y^{(j)}\alpha_i\alpha_j(x^{(i)})^Tx^{(j)}$
with the constraints we finally get the dual form:
$\max_\alpha W(\alpha) = \sum_i^m\alpha_i - \frac{1}{2}\sum_{i,j=1}^my^{(i)}y^{(j)}\alpha_i\alpha_j\langle x^{(i)}x^{(j)}\rangle$
s.t. $\alpha_i\ge0, i = 1,...,m$
$\sum_i^m \alpha_iy^{(i)} = 0$
• Prediction on a new $x$: $h(x) = \mathrm{sign}\big(\sum_{i=1}^m \alpha_i y^{(i)} \langle x^{(i)}, x\rangle + b\big)$, which depends on the data only through inner products; this is exactly what lets us kernelize.
• The two problems are equivalent. If we can solve the dual we can also solve the primal (see the numerical sketch below). For more on duality you should read https://en.wikipedia.org/wiki/Duality_(optimization)
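To make the dual concrete, here is a minimal numpy sketch (my addition, not part of the notes). It runs projected gradient ascent on $W(\alpha)$ with a linear kernel, with one simplification worth flagging: the bias $b$ is dropped, so the equality constraint $\sum_i \alpha_i y^{(i)} = 0$ disappears and only $\alpha_i \ge 0$ needs projecting.

```python
import numpy as np

# Two separable Gaussian blobs, labels in {-1, +1}.
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(-2.0, 1.0, (20, 2)), rng.normal(2.0, 1.0, (20, 2))])
y = np.hstack([-np.ones(20), np.ones(20)])

K = X @ X.T                                      # linear kernel <x_i, x_j>
alpha = np.zeros(len(y))
for _ in range(5000):
    grad = 1.0 - y * (K @ (alpha * y))           # dW/da_i = 1 - y_i sum_j y_j a_j K_ij
    alpha = np.maximum(0.0, alpha + 1e-3 * grad) # ascent step, then project onto a_i >= 0

w = X.T @ (alpha * y)                            # recover w = sum_i a_i y_i x_i
print("support vectors:", int(np.sum(alpha > 1e-6)),
      "| train acc:", np.mean(np.sign(X @ w) == y))
```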
From a loss perspective:
• It can also be regarded as Hinge loss + L2 regularization: $L(X;Y) = \sum_i l(y^{(i)}, \hat{y}^{(i)};x^{(i)}) + \lambda||w||^2$
• $l$ is defined as the hinge loss: $l(y,\hat{y}) = \max(0, 1 - y \cdot \hat{y})$, where $y$ is the label and $\hat{y}$ is the prediction.
• Hinge loss is convex but not smooth; it's a surrogate for the 0-1 loss, since the 0-1 loss isn't convex. We can optimize it by using a sub-gradient (see the sketch after this list).
• How is this different from the max-margin formulation? Instead of appearing as hard constraints, the margin requirements become part of the optimization objective, so they act more like soft constraints.
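A minimal sub-gradient descent sketch for this loss (my addition; the data, step size, and $\lambda$ are arbitrary choices, and the hinge term is averaged over the samples):

```python
import numpy as np

rng = np.random.default_rng(0)
X = np.vstack([rng.normal(-2.0, 1.0, (20, 2)), rng.normal(2.0, 1.0, (20, 2))])
y = np.hstack([-np.ones(20), np.ones(20)])

def svm_subgradient(X, y, lam=0.01, lr=0.1, epochs=500):
    m, d = X.shape
    w = np.zeros(d)
    for _ in range(epochs):
        viol = y * (X @ w) < 1                              # margin violators
        # sub-gradient of (1/m) sum max(0, 1 - y w.x) + lam ||w||^2
        grad = -(y[viol, None] * X[viol]).sum(axis=0) / m + 2 * lam * w
        w -= lr * grad
    return w

w = svm_subgradient(X, y)
print("train acc:", np.mean(np.sign(X @ w) == y))
```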
Kernels | 2023-01-28 00:16:37 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9069042801856995, "perplexity": 1643.8512879018735}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-06/segments/1674764499468.22/warc/CC-MAIN-20230127231443-20230128021443-00400.warc.gz"} |
https://math.stackexchange.com/questions/2097660/integral-with-respect-to-a-martingale/2098220 | Integral with respect to a martingale
In my survival analysis course we use integrals that integrate with respect to some finite variation process. One of these is the counting process, which is straightforward to understand since you just sum over jump times. But I have no clue what it means to integrate with respect to a martingale. So something like (Lebesgue–Stieltjes notation):
$$\int_0^Tf(t)dM(t)$$
If $X:\Omega\times\mathbb{R}\rightarrow\mathbb{R}$ is a stochastic process which is a semimartingale (that is, one can write $X$ as the sum of a local martingale starting at zero and a process of finite variation; so every martingale starting at zero is a semimartingale), the stochastic integral $$\int_0^Tf(t)dX(t)$$ (sometimes also written as $(f\cdot X)_T$) is first defined for simple predictable processes $f$ of the form $f=1_{\{(\omega,t)|r<t\leq s\}}$ ($r,s\in\mathbb{R}$ and $r<s$) as $$\int_0^Tf(t)dX(t)=X(\min(s,T))-X(\min(r,T))$$ for $T\geq 0$. The mapping $f\mapsto f\cdot X$ from the simple predictable processes is then extended to the set of predictable processes in such a way that $$(\omega,t)\mapsto (f\cdot X)_t(\omega)$$ is adapted, $f\mapsto f\cdot X$ is linear, and if $f_n$ converges to $f$ pointwise and the $f_n$ are bounded, then $(f_n\cdot X)_t\rightarrow (f\cdot X)_t$ for $t\geq 0$.
If $X$ is a process of finite variation (hence it is a semimartingale), then the stochastic integral $(f\cdot X)_T$ is just the Lebesgue–Stieltjes integral.
Under certain conditions, if $(X_t)_{t\geq 0}$ is a martingale, then $((f\cdot X)_T)_{T\geq 0}=(\int_0^Tf(t)\,dX_t)_{T\geq 0}$ is also a martingale.
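To build some intuition, here is a small simulation sketch (my addition): a discrete analogue of $\int_0^T f(t)\,dM(t)$ against a scaled random walk. The integrand is sampled at left endpoints (predictability), the result has mean about zero, and its variance matches the discrete Itô isometry $\sum_i f(t_i)^2\,\Delta t$.

```python
import numpy as np

rng = np.random.default_rng(1)
n_paths, n_steps, dt = 20_000, 200, 0.01
dM = rng.choice([-1.0, 1.0], size=(n_paths, n_steps)) * np.sqrt(dt)  # martingale increments
t_left = np.arange(n_steps) * dt        # evaluate f at left endpoints (predictable integrand)
f = np.sin(t_left)

I = (f * dM).sum(axis=1)                # (f . M)_T = sum_i f(t_i) (M_{t_{i+1}} - M_{t_i})
print("mean:", I.mean())                # ~ 0
print("var :", I.var(), "vs", np.sum(f**2) * dt)  # matches the discrete Ito isometry
```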
https://www.gradesaver.com/textbooks/science/chemistry/chemistry-9th-edition/chapter-1-chemical-foundations-exercises-page-38/84 | ## Chemistry 9th Edition
Before heating, the magnesium filings and sulfur powder were separate, as they did not combine when they were simply mixed together at room temperature and could be separated by physical means. But when heated, the atoms of both combined at an atomic level, with strong ionic bonds between them, and the mixture thus turned into a compound. ($Mg + S \longrightarrow MgS$)
When sulfur (S) powder and magnesium (Mg) filings are mixed together, they form a mixture. This is because the sulfur and magnesium do not react with each other at room temperature. But when they are heated, they form a compound called magnesium sulfide (MgS) ($Mg + S \longrightarrow MgS$). The mixture, when heated, turned into a compound, as the atoms of magnesium and sulfur combined with each other at an atomic level. Previously, the magnesium filings and the sulfur powder could be separated by physical means. But in compounds the atoms are bound by very strong forces, and this can no longer be done.
http://raweb.inria.fr/rapportsactivite/RA2014/cidre/uid47.html | Overall Objectives
Research Program
Application Domains
New Software and Platforms
Bilateral Contracts and Grants with Industry
Partnerships and Cooperations
Bibliography
PDF e-Pub
## Section: New Results
### Other topics related to security and distributed computing
#### Network monitoring and fault detection
Monitoring a system consists in collecting and analyzing relevant information provided by the monitored devices, so as to be continuously aware of the system state (situational awareness). However, the ever-growing complexity and scale of systems makes both real time monitoring and fault detection a quite tedious task. Thus the usually adopted option is to focus solely on a subset of information states, so as to provide coarse-grained indicators. As a consequence, detecting isolated failures or anomalies is a quite challenging issue. We propose in [23], [42] to address this issue by pushing the monitoring task at the edge of the network. We present a peer-to-peer based architecture, which enables nodes to adaptively and efficiently self-organize according to their "health" indicators. By exploiting both temporal and spatial correlations that exist between a device and its vicinity, our approach guarantees that only isolated anomalies (an anomaly is isolated if it impacts solely a monitored device) are reported on the fly to the network operator. We show that the end-to-end detection process, i.e., from the local detection to the management operator reporting, requires a logarithmic number of messages in the size of the network.
#### Secure data deduplication scheme
Data grows at the impressive rate of 50% per year, and 75% of the digital world is a copy (The digital universe decade. Are you ready? John Gantz and David Reinsel, IDC information, May 2010.). Although keeping multiple copies of data is necessary to guarantee their availability and long term durability, in many situations the amount of data redundancy is immoderate. By keeping a single copy of repeated data, data deduplication is considered as one of the most promising solutions to reduce the storage costs, and improve users' experience by saving network bandwidth and reducing backup time. However, this solution must now solve many security issues to be completely satisfying. In this paper we target the attacks from malicious clients that are based on the manipulation of data identifiers and those based on backup time and network traffic observation. In [43], we have presented a deduplication scheme mixing an intra- and an inter-user deduplication in order to build a storage system that is secure against the aforementioned types of attacks by controlling the correspondence between files and their identifiers, and making the inter-user deduplication unnoticeable to clients using deduplication proxies. Our method provides global storage space savings, per-client bandwidth network savings between clients and deduplication proxies, and global network bandwidth savings between deduplication proxies and the storage server. The evaluation of our solution compared to a classic system shows that the overhead introduced by our scheme is mostly due to data encryption, which is necessary to ensure confidentiality. This work relies on Mistore [44], [45], a distributed storage system aiming at guaranteeing data availability, durability, and low access latency by leveraging the Digital Subscriber Line infrastructure of an ISP. Mistore uses the available storage resources of a large number of home gateways and points of presence for content storage and caching facilities, reducing the role of the data center to a load balancer. Mistore also targets data consistency by providing multiple types of consistency criteria on content and a versioning system allowing users to get access to any prior versions of their contents.
#### Metrics estimation on very large data streams
In [12], we consider the setting of large scale distributed systems, in which each node needs to quickly process a huge amount of data received in the form of a stream that may have been tampered with by an adversary (i.e., the ordering of data items can be manipulated by an oblivious adversary). In this situation, a fundamental problem is how to detect and quantify the amount of work performed by the adversary. To address this issue, we propose AnKLe (for Attack-tolerant eNhanced Kullback-Leibler divergence Estimator), a novel algorithm for estimating the KL divergence of an observed stream compared to the expected one. AnKLe combines sampling techniques and information-theoretic methods. It is very efficient, both in terms of space and time complexities, and requires only a single pass over the data stream. Experimental results show that the estimation provided by AnKLe remains accurate even for different adversarial settings for which the quality of other methods dramatically decreases. Considering $n$ as the number of distinct data items in a stream, we show that AnKLe is an $(\epsilon, \delta)$-approximation algorithm with a space complexity sublinear in the size of the domain value from which data items are drawn and the maximal stream length.
We go a step further by proposing in [22] a metric, called codeviation, that allows one to evaluate the correlation between distributed streams. This metric is inspired by classical metrics in statistics and probability theory, and as such allows us to understand how observed quantities change together, and in which proportion. We then propose to estimate the codeviation in the data stream model. In this model, functions are estimated on a huge sequence of data items, in an online fashion, and with a very small amount of memory with respect to both the size of the input stream and the values domain from which data items are drawn. We give upper and lower bounds on the quality of the codeviation, and provide both local and distributed algorithms that additively approximate the codeviation among $n$ data streams by using a sublinear number of bits of space in the size of the domain value from which data items are drawn and the maximal stream length. To the best of our knowledge, such a metric has never been proposed so far.
#### Robustness analysis of large scale distributed systems
In the continuation of [59], which proposed an in-depth study of the dynamicity and robustness properties of large-scale distributed systems, we analyze in [13] the behavior of a stochastic system composed of several identically distributed, but non-independent, discrete-time absorbing Markov chains competing at each instant for a transition. The competition consists in determining at each instant, using a given probability distribution, the only Markov chain allowed to make a transition. We analyze the first time at which one of the Markov chains reaches its absorbing state. When the number of Markov chains goes to infinity, we analyze the asymptotic behavior of the system for an arbitrary probability mass function governing the competition. We give conditions for the existence of the asymptotic distribution and we show how these results apply to cluster-based distributed systems when the competition between the Markov chains is handled by using a geometric distribution.
#### Randomized message-passing test-and-set
In [56], we have presented a solution to the well-known Test&Set operation in an asynchronous system prone to process crashes. Test&Set is a synchronization operation that, when invoked by a set of processes, returns yes to a unique process and returns no to all the others. Recently many advances in implementing Test&Set objects have been achieved; however, all of them target the shared memory model. In this paper we propose an implementation of a Test&Set object in the message passing model. This implementation can be invoked by any number $p \le n$ of processes, where $n$ is the total number of processes in the system. It has an expected individual step complexity in $O(\log p)$ against an oblivious adversary, and an expected individual message complexity in $O(n)$. The proposed Test&Set object is built atop a new basic building block, called a selector, that allows selecting a winning group among two groups of processes. We propose a message-passing implementation of the selector whose step complexity is constant. We are not aware of any other implementation of the Test&Set operation in the message passing model.
#### Agreement problems in unreliable systems
In [18], we consider the problem of approximate consensus in mobile ad-hoc networks in the presence of Byzantine nodes. Each node begins to participate by providing a real number called its initial value. Eventually all correct nodes must obtain final values that are different from each other within a maximum value previously defined (convergence property) and must be in the range of initial values proposed by the correct nodes (validity property). Due to nodes' mobility, the topology is dynamic and unpredictable. We propose an approximate Byzantine consensus protocol which is based on the linear iteration method. Each node repeatedly executes rounds. During a round, a node moves to a new location, broadcasts its current value, gathers values from its neighbors, and possibly updates its value. In our protocol, nodes are allowed to collect information during several consecutive rounds: thus moving gives them the opportunity to gather progressively enough values. An integer parameter $R_c$ is used to define the maximal number of rounds during which values can be gathered and stored while waiting to be used. A novel sufficient and necessary condition guarantees the final convergence of the consensus protocol. At each stage of the computation, a single correct node is concerned by the requirement expressed by this new condition (the condition is not universal as is the case in all previous related works). Moreover the condition considers both the topology and the values proposed by correct nodes. If less than one third of the nodes are faulty, the condition can be satisfied. We are working on mobility scenarios (random trajectories, predefined trajectories, meeting points) to assert that the condition can be satisfied for reasonable values of $R_c$. In [41], we extend the above protocol to solve the problem of clock synchronization in mobile ad-hoc networks.
In [20], we investigate the use of agreement protocols to develop transactional mobile agents. Mobile devices are now equipped with multiple sensors and networking capabilities. They can gather information about their surrounding environment and interact both with nearby nodes, using a dynamic and self-configurable ad-hoc network, and with distant nodes via the Internet. While the concept of mobile agent is appropriate to explore the ad-hoc network and autonomously discover service providers, it is not suitable for the implementation of strong distributed synchronization mechanisms. Moreover, the termination of a task assigned to an agent may be compromised if the persistence of the agent itself is not ensured. In the case of a transactional mobile agent, we identify two services, Availability of the Sources and Atomic Commit, that can be supplied by more powerful entities located in a cloud. We propose a solution in which these two services are provided in a reliable and homogeneous way. To guarantee reliability, the proposed solution relies on a single agreement protocol that continuously orders all new actions, whatever the related transaction and service.
http://math.emory.edu/events/seminars/all/index.php?PAGE=6 | # All Seminars
Seminar: Computational Math
Speaker: Alessandro Veneziani of Emory University
Contact: Yuanzhe Xi, [email protected]
Date: 2019-12-06 at 2:00PM
Venue: MSC W303
Abstract:
The efficient numerical solution of the Steady Incompressible Navier-Stokes equations has been receiving more attention recently, driven by some applications where the steady problem is solved as a surrogate for a time average (see, e.g., [Tang, Chun Xiang, et al., JACC: Cardiov Imag (2019)]). Efficient solution is challenged by the absence of the time derivative, which makes the algebraic structure of the problem more problematic. In this talk, we cover some recent advances considering smart algebraic factorizations to mimic splitting strategies popular in the unsteady case [A. Viguerie, A. Veneziani, CMAME 330 (2018)], new stabilization techniques inspired by turbulence modeling [A. Viguerie, A. Veneziani, JCP 391 (2019)] and the treatment of nonstandard boundary conditions emerging in computational hemodynamics [A. Veneziani, A. Viguerie, in preparation (2019)], which inspired this research. In the latter case, the focus will be on the so-called backflow and inflow instabilities [H. Xu et al., to appear in JCP 2020] occurring in defective problems (i.e., problems where the available data are insufficient to make the mathematical formulation well-posed). Dedicated to the memory of Dr. G. Zanetti (1959-2019). The NSF Project DMS-1620406 supported this research.
Title: One trick with two applications
Seminar: Combinatorics
Speaker: Mathias Schacht of The University of Hamburg and Yale University
Contact: Dwight Duffus, [email protected]
Date: 2019-12-06 at 4:00PM
Venue: MSC W303
Abstract:
We discuss a recent key lemma of Alweiss, Lovett, Wu and Zhang which led to a big improvement on the Erdos-Rado sunflower problem. Essentially the same lemma was also crucial in the recent work of Frankston, Kahn, Narayanan, and Park showing that thresholds of increasing properties of binomial random discrete structures are at most a log-factor away from the so-called (fractional) expectation threshold. This fairly general result gives a new proof of the Johansson-Kahn-Vu theorem for perfect matchings in random hypergraphs.
Title: Relative local-global principles
Seminar: Algebra
Speaker: Danny Krashen of Rutgers University
Contact: David Zureick-Brown, [email protected]
Date: 2019-12-03 at 4:00PM
Venue: MSC W303
Abstract:
In various contexts, the Hasse principle can be used to transfer questions of rational points and triviality of Galois cohomology classes from global fields to local fields. Some such results have been extended, for example, in the work of Kato, Bayer-Flukiger-Parimala, Parimala-Preeti, Parimala-Sujatha, to apply to function fields over global fields. In this talk, I will discuss recent joint work with David Harbater and Alena Pirutka in which we examine to what extent local-global principles for one field extend to local-global principles for a function field over this field. We focus particularly on the case where one starts with a semiglobal field (a function field over a discretely valued field).
Title: Swimming bacteria: Mathematical modelling and applications
Seminar: Computational Math
Speaker: Christian Esparza-Lopez of University of Cambridge
Contact: Irving Martinez, [email protected]
Date: 2019-11-22 at 1:00PM
Venue: MSC W201
Abstract:
Miniaturisation of actuators and power sources are two of the biggest technical challenges in the design and fabrication of microscopic robots. As it is often the case, Nature can offer insight into overcoming some of these challenges. Swimming bacteria, such as the well-studied flagellated E. coli, are known to be efficient swimmers with intricate sensing capabilities. They have thus inspired scientists to mimic them to improve the design of artificial micro-robots, often with biomedical purposes such as targeted drug delivery. This talk will consist of two parts. After a brief introduction to the study of swimming bacteria I will review the random walk model for bacterial diffusion and chemotaxis, and will show how to use it to describe the diffusive behaviour of artificial micro-swimmers propelled by swimming bacteria. In the second part of the talk I will address the problem of non-flagellated swimming bacteria. Specifically, we will study a minimal model to describe the dynamics of S. melliferum, a helical bacterium that swims by progressively changing the handedness of its body.
Title: Statistical Data Assimilation for Hurricane Storm Surge Modeling
Seminar: Numerical Analysis and Scientific Computing
Speaker: Talea L. Mayo of University of Central Florida
Contact: James Nagy, [email protected]
Date: 2019-11-22 at 2:00PM
Venue: MSC W303
Abstract:
Coastal ocean models are used for a variety of applications, including the simulation of tides and hurricane storm surges. As is true for many numerical models, coastal ocean models are plagued with uncertainty, due to factors including but not limited to the approximation of meteorological conditions and hydrodynamics, the numerical discretization of continuous processes, uncertainties in specified boundary and initial conditions, and unknown model parameters. Quantifying and reducing these uncertainties is essential for developing reliable and robust storm surge models. Statistical data assimilation methods are often used to estimate uncertain model states (e.g. storm surge heights) by combining model output with uncertain observations. We have used these methods in storm surge modeling applications to reduce uncertainties resulting from coarse spatial resolution. While state estimation is beneficial for accurately simulating the surge resulting from a single, observed storm, larger contributions can be made with the estimation of uncertain model parameters. In this talk, I will discuss applications of statistical data assimilation methods for both state and parameter estimation in coastal ocean modeling.
Title: Connected Fair Detachments of Hypergraphs
Seminar: Combinatorics
Speaker: Amin Bahmanian of Illinois State University
Contact: Dwight Duffus, [email protected]
Date: 2019-11-22 at 4:00PM
Venue: MSC W303
Abstract:
Let $G$ be a hypergraph whose edges are colored. A $(u,n)$-detachment of $G$ is a hypergraph obtained by splitting a vertex $u$ into $n$ vertices, say $u_1,\dots, u_n$, and sharing the incident edges among the subvertices. A detachment is fair if the degree of vertices and multiplicity of edges are shared as evenly as possible among the subvertices within the whole hypergraph as well as within each color class. In this talk we solve an open problem from the 1970s by finding necessary and sufficient conditions under which a $k$-edge-colored hypergraph $G$ has a fair detachment in which each color class is connected. Previously, this was not even known for the case when $G$ is an arbitrary graph. We exhibit the usefulness of our theorem by proving a variety of new results on hypergraph decompositions, and completing partial regular combinatorial structures.
Title: (-1)-homogeneous solutions of stationary incompressible Navier-Stokes equations with singular rays
Seminar: Analysis and PDEs
Speaker: Xukai Yan of Georgia Institute of Technology
Date: 2019-11-21 at 3:00PM
Venue: MSC E308A
Abstract:
In 1944, L.D. Landau first discovered explicit (-1)-homogeneous solutions of the 3-d stationary incompressible Navier-Stokes equations (NSE) with precisely one singularity at the origin, which are axisymmetric with no swirl. These solutions are now called Landau solutions. In 1998 G. Tian and Z. Xin proved that all solutions which are (-1)-homogeneous and axisymmetric with one singularity are Landau solutions. In 2006 V. Sverak proved that, with just the (-1)-homogeneous assumption, Landau solutions are the only solutions with one singularity. Our work focuses on the (-1)-homogeneous solutions of 3-d incompressible stationary NSE with finitely many singularities on the unit sphere. In this talk we will first classify all (-1)-homogeneous axisymmetric no-swirl solutions of 3-d stationary incompressible NSE with one singularity at the south pole on the unit sphere as a two-dimensional solution surface. We will then present our results on the existence of a one-parameter family of (-1)-homogeneous axisymmetric solutions with non-zero swirl, smooth on the unit sphere away from the south pole, emanating from the two-dimensional surface of axisymmetric no-swirl solutions. We will also present the asymptotic behavior of general (-1)-homogeneous axisymmetric solutions in a cone containing the south pole, with a singularity at the south pole on the unit sphere. We have also constructed families of solutions smooth on the unit sphere away from the north and south poles, and have obtained asymptotic stability results for these solutions. This is a joint work with Professor Yanyan Li and Li Li.
Title: Derived Equivalences from Compactifications
Seminar: Algebra
Speaker: Robert Vandermolen of University of South Carolina
Contact: David Zureick-Brown, [email protected]
Date: 2019-11-19 at 4:00PM
Venue: MSC W303
Abstract:
In this talk we will examine a new generalization of a wonderful construction of Drinfeld, producing a new class of kernels which often induce Fourier-Mukai functors that realize the derived equivalences from wall-crossings in Variations of Geometric Invariant Theory. This new class of functors is parameterized by rational polyhedra in the group of equivariant ample line bundles. This program is inspired by recent work of Ballard, Diemer, Favero (2017) and work of Ballard, Chidambaram, Favero, McFaddin, and myself (2019); these papers provide a new class of kernels for realizing the derived equivalence for many interesting birational transformations.
Title: Turning Cancer Discoveries into Effective Targeted Treatments with the Aid of Mathematical Modeling
Colloquium: Applied Mathematics
Speaker: Dr. Trachette Jackson of University of Michigan
Contact: Jim Nagy, [email protected]
Date: 2019-11-13 at 4:15PM
Venue: Oxford Road Building, 3rd Floor, Room 305 and Room 311
Abstract:
The Department of Mathematics is pleased to announce that Dr. Trachette Jackson, Professor of Mathematics at the University of Michigan, will give a general STEM audience talk titled Turning Cancer Discoveries into Effective Targeted Treatments with the Aid of Mathematical Modeling.
Title: Generalized Brauer dimension of semi-global fields
Seminar: Algebra
Speaker: Saurabh Gosavi of Rutgers University
Contact: David Zureick-Brown, [email protected]
Date: 2019-11-12 at 4:00PM
Venue: MSC W303
Abstract:
Given a finite set of Brauer classes $B$ of a fixed period $\ell$, we define $ind(B)$ to be the minimum of degrees of field extensions $L/F$ such that $\alpha \otimes_F L = 0$ for every $\alpha$ in $B$. When $F$ is a semi-global field (i.e., a field of transcendence degree one over a complete discretely valued field), we will provide an upper bound for $ind(B)$ which depends on invariants of fields of lower arithmetic complexity. As a simple application of our result, we will obtain an upper bound for the splitting index of quadratic forms and the finiteness of symbol length for function fields of curves over higher-local fields.
https://learn.careers360.com/ncert/question-abc-is-a-triangle-right-angled-at-c-a-line-through-the-mid-point-m-of-hypotenuse-ab-and-parallel-to-bc-intersects-ac-at-d-show-that-i-d-is-the-mid-point-of-ac/ | Q
# ABC is a triangle right angled at C. A line through the mid-point M of hypotenuse AB and parallel to BC intersects AC at D. Show that (i) D is the mid-point of AC
Given: ABC is a triangle right angled at C. A line through the mid-point M of hypotenuse AB and parallel to BC intersects AC at D.
To prove: D is the mid-point of AC.
Proof: In $\triangle$ABC,
M is the mid-point of AB. (Given)
DM || BC (Given)
By the converse of the mid-point theorem (the line drawn through the mid-point of one side of a triangle, parallel to another side, bisects the third side),
D is the mid-point of AC.
https://www.studypool.com/discuss/538793/angle-of-drepression?free | ##### angle of drepression
A swimming pool is 50 meters long. It is 1 meter deep at one end and slopes gradually downward to a depth of 4 meters at the other end. What is the angle of depression made by the bottom of the pool?
The drop in depth is $4 - 1 = 3$ m over a horizontal run of $50$ m, so

$\tan\theta = \dfrac{\text{opposite side}}{\text{adjacent side}} = \dfrac{3}{50}, \qquad \theta = \tan^{-1}\!\left(\dfrac{3}{50}\right) \approx 3.43^{\circ}$
So, the angle of depression is $\theta \approx 3.43^{\circ}$.
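A quick numerical check of the above (a minimal C++ sketch, not part of the original answer):

```cpp
#include <cmath>
#include <cstdio>

int main() {
    double rise = 4.0 - 1.0;              // change in depth, metres
    double run  = 50.0;                   // pool length, metres
    double theta = std::atan(rise / run); // radians
    std::printf("angle of depression = %.2f degrees\n",
                theta * 180.0 / M_PI);    // ~3.43
}
```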
http://ounininin-blog.logdown.com/posts/7413556 | 5 months ago
## Probability And Queueing Theory Balaji Book Free
Queueing theory is a fascinating subject in Applied Probability for two contradictory reasons: it sometimes requires the most sophisticated tools of stochastic processes, and it often
March 26, 2015. Contents: 3. Queueing models and some fundamental relations (p. 23) ... and results from probability theory that we will use.
Amazon.com: Fundamentals of Queueing Theory (Wiley Series in Probability and Statistics) (9781118943526): John F. Shortle, James M. Thompson, Donald Gross, Carl M.
https://www.physicsforums.com/threads/mass-spring-system.90662/ | # Mass Spring System
In a vertical mass-spring system, a spring with spring constant 250 N/m vibrates with an amplitude of 12 cm when a 0.38 kg mass hangs from it.
What is the equation describing this motion as a function of time? (Assume the mass passes through the equilibrium point, moving in the positive x direction (upward), at t = 0.110 s.)
I do it in this way:
Hooke's law: $F = kx$
$(0.38)(9.8) = 250\ \text{N/m} \cdot x$
$x = 0.015\ \text{m}$
$\omega = \sqrt{k/m}$
$\omega = 25.65\ \text{rad/s}$
$x = A\cos(\omega t + \phi)$
$0.015 = 0.12\cos(25.65t + \phi)$
$\phi = -1.38$
So, the equation: $x = 0.12\cos(25.65t - 1.38)$
Is it correct?
mukundpa
Homework Helper
1. Here the force is not mg. Actually, mg is balanced by the initial stretch of the spring when it comes to the equilibrium position after the mass is attached.
2. F represents the restoring force trying to bring the mass back in the equilibrium position.
mukundpa said:
2. F represents the restoring force trying to bring the mass back in the equilibrium position.
The spring extends because of mg. Then why not F = mg?
How should I find the phi?
mukundpa
Homework Helper
What is the resultant force when the system is in equilibrium?
Actually the force F in S.H.M. is the restoring force on the mass when it has displacement x from equilibrium position.
$x = A \cos(\omega t + \phi)$
is correct.
When x = 0, t = 0.110 s;
put these values in to get $\phi$.
Ya, I did it in this way before also, but the value I get is $\phi$ equal to -1.25.
But it actually should be about +1.89 (from the given answer).
mukundpa
Homework Helper
$\cos \theta = \cos \alpha$ has the general solution
$2n \pi \pm \alpha$
which sign is to be taken?
where was the mass at t = 0 ?
when t = 0
x = A
Any more clues?
mukundpa
Homework Helper
The same solution can be written as $3\pi/2 - \omega t = 4.712 - 2.821 = 1.891$; this is the value given in your textbook.
mukundpa
Homework Helper
The solutions of the equation are
$\pi/2 - 2.821$ and $3\pi/2 - 2.821$,
i.e. $-1.25$ and $1.891$ radians, respectively.
Now think: when t = 0.11 s, x = 0 means that at t = 0 the particle is yet to reach the equilibrium position [where the phase angle $(\omega t + \phi)$ is $\pi/2$] and x is negative, because the time period is $2\pi/25.65 \approx 0.245$ s and 0.11 s is a bit less than half the time period. For that, the phase angle at t = 0 should be more than $\pi/2$, which is given by the value $\phi = 1.891$.
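For completeness, a quick numerical check of these values (a minimal C++ sketch, not from the original thread):

```cpp
#include <cmath>
#include <cstdio>

int main() {
    const double k = 250.0, m = 0.38, A = 0.12, t0 = 0.110;
    double omega = std::sqrt(k / m);       // 25.65 rad/s
    double T = 2.0 * M_PI / omega;         // 0.245 s
    // x(t0) = 0 moving upward => omega*t0 + phi = 3*pi/2 (mod 2*pi)
    double phi = 1.5 * M_PI - omega * t0;  // 1.891 rad
    std::printf("omega = %.2f rad/s, T = %.3f s, phi = %.3f rad\n",
                omega, T, phi);
    std::printf("x(t0) = %.4f m, v(t0) = %.3f m/s (expect 0 and > 0)\n",
                A * std::cos(omega * t0 + phi),
                -A * omega * std::sin(omega * t0 + phi));
}
```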
OK...Thanks...:) | 2022-01-18 10:58:41 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6691243052482605, "perplexity": 1426.4509629453003}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-05/segments/1642320300810.66/warc/CC-MAIN-20220118092443-20220118122443-00412.warc.gz"} |
https://space.stackexchange.com/tags/the-moon/hot | Podcast #128: We chat with Kent C Dodds about why he loves React and discuss what life was like in the dark days before Git. Listen now.
# Tag Info
191
This is the Apollo 11 photo designated AS11-40-5925, a popular shot with moon landing deniers. The camera is facing generally north-north-west. The sun is low in the sky, about 10º-15º above the horizon on the east. The silver pole in the upper right of the photograph is pretty much straight up, casting shadow in the expected direction. The landing leg in ...
109
(A) who (if anyone) was the first human to actually be asleep (that is to say, presumably inside the lander) - not just a scheduled sleep time - on the Moon? It depends on your definition of sleep. Maybe Buzz Aldrin, curled on the floor of the LM cabin: The best Aldrin managed was a “couple hours of mentally fitful drowsing.” Armstrong simply stayed ...
24
Power: The Moon has a night that lasts for 14 Earth days, and it gets really cold during the night. This makes it difficult/expensive to design a rover that can last through the night: you really need a radioisotope power source (RTG) to provide at least heat overnight. Mars, on the other hand has a day/night cycle of slightly more than 24 hours, so solar-...
10
The degree of orbital shadowing experienced by an orbiting object with small orbital altitude is determined by its beta angle (normally used in reference to LEO objects but the concept applies to lunar orbiters as well). The angle is taken between the satellite's orbital plane and the vector to the Sun. Depending on the value of the beta angle, a satellite ...
8
This may belong to Astronomy SE, but the $29.5$ Earth day figure, or more accurately the time in the third reference, is what you should be planning on when you or at least your instruments go to the Moon. This represents the actual cycle between daylight and darkness, the solar day. When one clicks on the references cited in the question, the first and ...
7
There is no circular orbit that has a 50:50 share between night and day. The possible splits range from a bit less than 50% night (and correspondingly a bit more than 50% day) down to 0% night (100% day). The two extreme cases are: an orbit that is aligned with the terminator (the border between night and day on the surface) is in perpetual daylight; an orbit that passes ...
7
Day length: As stated above, the moon has a night which lasts 14 days, while Mars has a day/night cycle of about 24 hours, 37 minutes. This has several implications. Power supply: although the Mars day is more similar to that of Earth, and solar power is more of an option, Mars is also further away, meaning the irradiance on the martian surface is less ...
5
It all depends on how you define "dayside" and "nightside", and how you define "entering" or "exiting" either one of them for a satellite. I suppose a big part of the confusion comes from this statement: Being in a polar orbit, Chandrayaan-2 enters the dayside of the Moon crossing the north pole, traverses through the dayside and enters the nightside ...
4
Spacecraft don't rot, nor do they rust (since there is not enough free oxygen anywhere but Earth), but they do degrade in various ways: The most obvious is that chemical and electrical equipment like batteries and on-board computers are severely degraded by the extreme cold and variations of temperature that happen. Electrical equipment is also damaged by ...
4
In air, on Earth, we can talk about the temperature of the air 2 meters above the surface. Of course if it's the top of your hat, then the temperature of that is affected both by the air temperature, and by the speed of the wind and how much sunlight is hitting it and other things like how much infrared your hat is radiating up into the sky. But 2 meters ...
4
Using the diameter 3476 km and the Moon's solar day of 29.53 Earth days I calculated a speed of 15.4 km/h for the day-night line at the Moon's equator. $$\frac{3476\ \text{km}\cdot\pi}{29.53\cdot 24\ \text{h}} = 15.4\ \text{km/h} = 4.28\ \text{m/s}$$ That is the necessary average speed at the lunar equator to stay in eternal daylight. A rod is 5.0292 m, so a rod per moon-hour is 0.0473 mm/s. A ...
3
A more generic answer could be: the angular speed (the same at any point on the surface) is $2\pi$ radians every 29.53·24 hours, which gives 0.00887 rad/h. The linear speed depends on the local distance from the rotation axis, which is $R\cos(\text{latitude})$ with the lunar radius $R \approx 1738$ km. Hence the linear speed at any point on the Moon's surface is: v = 0.00887 · R · cos(lat) [rad/h]·[km] Of course for lat=0° it ...
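A small C++ sketch of that formula (R = 1738 km and the 29.53-day solar day are the assumed inputs):

```cpp
#include <cmath>
#include <cstdio>

int main() {
    const double R = 1738.0;                 // lunar radius, km
    const double day_h = 29.53 * 24.0;       // solar day in hours
    const double omega = 2.0 * M_PI / day_h; // angular speed, rad/h
    for (double lat = 0.0; lat <= 80.0; lat += 20.0) {
        double v = omega * R * std::cos(lat * M_PI / 180.0); // km/h
        std::printf("lat %4.0f deg: terminator speed %.1f km/h\n", lat, v);
    }
}
```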
3
Maybe not directly related to the question, anyway this site allows calculating "what time it is" in a specific location on the Moon: http://win98.altervista.org/space/exploration/moon/moontime.html In this page, the moon day duration (29.53 days) is divided into 24 moon-hours; when sunrise terminator reaches specified point, local time will be 06:00; when ...
3
Google Maps Moon likely uses a Simple Cylindrical projection for storing their map data. This is fine for the majority of the globe, but there are problems at the poles. Here are a few reasons why imagery of the poles is problematic: The data is prone to discontinuities because it has the entire top or bottom edge of the rectangular projection converging on ...
3
It was measured during the Apollo 14 mission. The Apollo Lunar Surface Experiments Package (ALSEP) placed on the lunar surface by the astronauts had a gas concentration sensor. During the last depressurization of the LM some oxygen was released. But the oxygen was gone in less than 3 minutes. So the oxygen left the landing spot on the lunar surface very ...
3
There is a lot in development to meet the 2024 Human Lunar return goal set by the National Space Council. At IAC this year, Blue Origin unveiled their national team consisting of themselves, Lockheed Martin, Northrop Grumman, and Draper Labs. This team will collaboratively build a Lunar Lander and submitted to NASA's design request on November 5th. Their ...
2
One question I really wanna ask; do spaceship's rot? Sort of, yes. In terms of "rotting", in the sense that a spacecraft will lose material or undergo degradation of its material components, then spacecraft do encounter this problem in the space environment for a variety of reasons. One big reason is atomic oxygen (that is, O, not O2), which is present in ...
2
Although this question has many answers already, I thought I'd add a more general answer How far could a human fall in a pressurised environment on various solar system bodies? I'm imagining that there are multi-story habitats on various bodies in the solar system, all pressurised to 1 atm. I'm also imagining that these habitats have a 'lift shaft' of ...
2
The strong distortions and star-like stripes are an artifact of Googles' image processing. For comparison, here is a screenshot of our own South Pole : I increased the contrast to make the artifacts more visible - the ice itself just has less contrast than the rocky features of the Moon.
2
The lighting is different at the poles. The sun is always very close to the horizon. There are some crater floors at the poles that never see sunlight. These crater floors are always inky black. Likewise there are polar plateaus and mountain tops that enjoy nearly constant sunlight. Shadows cast across these plateaus are always long though. And these long ...
1
An abstract for the "An Overview of the Volatiles Investigating Polar Exploration Rover (VIPER) Mission" talk, scheduled to be given in December at AGU, states VIPER is a solar and battery powered rover mission designed to operate over multiple lunar days, traversing several kilometers as it continuously monitors for subsurface hydrogen and other surface ...
1
I'm not aware of any measurements made more than a couple of meters from the surface, but with some assumptions to be described below, there should be no observable difference over the altitude range you mention. There are many factors that can influence a temperature measurement on an airless body. Yeah, yeah, some people talk about the moon's atmosphere. ...
1
The concern was not that the lunar dust will rise and cover the solar panels or instruments. The concern, as pointed out in the comments, is that dust particles rising up and hitting the lower bay of the lander could damage it or punch a hole. The simulation was done to study plume interaction with the soil; the four plume cones, when they interact with the ground ...
1
If you think about it, the Earth is at the same distance from the Moon as the Moon is from the Earth (of course), so the Earth seen from the lunar surface should appear about 4 times bigger.
http://www.algonotes.com/en/knapsacks/ | # Survey of knapsack problems
The knapsack problem is one of the classic examples of a problem in combinatorial optimization. Since it is discussed in many textbooks about algorithms, one can say that each competitive programmer should know it pretty well. However, most of these books deal only with one or two of the most common variants of the knapsack problem, and it turns out that there are plenty of such variants. In this article we will discuss a number of them. Most of the variants will have dynamic programming solutions, plus one or two clever observations. However, certain variants will use other algorithms.
This survey collects various tricks and ideas which I encountered when reading solutions to various problems. My original input here is to present them all in a hopefully logical way, and sometimes to propose my own solutions to some variants, in order for this survey to be as complete as possible. This article is a significantly extended version of a lecture about the knapsack problem I gave at the Brazilian ICPC Summer School in 2018.
## Standard knapsacks
We begin with probably the simplest formulation. We have a set of $n$ items, each of them is described by its weight (sometimes called size or something similar); the $i$-th item has weight $w_i$. We also have a knapsack of weight limit $M$ and we are asked to find a subset of items of total weight as large as possible, but less than or equal to the limit (so that the selected items can fit into the knapsack). This is called 0-1 knapsack problem.
The dynamic programming is rather straightforward here. We denote by $d[i,j]$ a Boolean value which says whether exists a subset of total weight exactly $j$ when we are allowed to use items number $\{ 1, 2, \ldots, i\}$. Thus variables satisfy $0 \leq i \leq n$, $0 \leq j \leq M$. Of course $d[0,j] = [j = 0]$, since without any items, we can only obtain an empty subset.
For $i\geq 1$ we consider two cases: either we do not take the $i$-th item to the subset (so the weight $j$ can be realized only if $d[i-1,j]$ is true), or we take it, but only if its weight is no bigger than $j$ and $d[i-1,j-w[i]]$ is true:
$d[i,j] = d[i-1,j] \text{ or } d[i-1,j-w[i]] \cdot [j \geq w[i]]$
The answer is the biggest value $j \leq M$ for which $d[n,j]$ is true. The code is quite simple:
```cpp
#include <cstdio>
using namespace std;

#define REP(i,n) for(int i=0;i<(n);++i)

const int N = 1000, MAXM = 1000;
int n, M, w[N];
bool d[N+1][MAXM+1];

// Helper: iterator to the last occurrence of the maximum element
// (for a bool range: the largest index holding true).
template<class It>
It last_max_element(It first, It last) {
  It best = first;
  for (It it = first; it != last; ++it)
    if (!(*it < *best)) best = it;
  return best;
}

int knapsack() {
  REP(j, M+1) d[0][j] = j==0;
  REP(i, n) {
    REP(j, M+1) {
      d[i+1][j] = d[i][j]; /* rewrite */
      if (j >= w[i]) {
        d[i+1][j] |= d[i][j - w[i]];
      }
    }
  }
  return last_max_element(d[n], d[n]+M+1) - d[n];
}
```
The time complexity is $O(nM)$. The space complexity is the same, but can be easily reduced to $O(M)$ using the standard DP trick of storing only the last two rows of array $d$ (since the calculations in a row refer only to values in the previous row). But here we can do even better. Observe that the line with a comment just rewrites row $i$ to row $i+1$, and then the next two lines improve some cells. Moreover, during the improvement we use only cells with lower $j$, so we can store everything in one row, as long as we iterate over it backwards:
```cpp
bool d[MAXM+1];

int knapsack() {
  REP(j, M+1) d[j] = j==0;
  REP(i, n) {
    for (int j = M; j >= w[i]; --j) {
      d[j] |= d[j - w[i]];
    }
  }
  return last_max_element(d, d+M+1) - d;
}
```
If we want, we can utilize the above optimization to write the recurrence formula in a slightly more readable way, by removing row indices:
$d_{\mathrm{new}}[j] = d[j] \text{ or } d[j-w[i]] \cdot [j \geq w[i]]$
Note also that we solve more general problem: our array $d[j]$ is true if we can choose a subset of items such that its total weight is exactly $j$. We will call it a knapsack structure. It will allow us to answer in constant time queries of form: is there a subset of total weight exactly $j$, where $0 \leq j \leq M$. The space complexity of the structure is $O(M)$ and we can build it in $O(nM)$.
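For completeness, a query against this structure is just a lookup; a minimal sketch (the bounds check is an addition for safety, not part of the structure itself):

```cpp
// Is there a subset of total weight exactly j, for 0 <= j <= M?
bool query(int j) { return 0 <= j && j <= M && d[j]; }
```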
### Unlimited amount of items of each kind
We can also think about items not as single entities, but as various kinds, from which we can take multiple items. The simplest variant is when we can take unlimited amount of items of each kind. This is called unbounded knapsack problem. It is easy to adapt the previous algorithm. Just observe that if we consider taking an item of the $i$-th type, the remaining place $j-w[i]$ can still contain this kind of item, therefore the main recurrence is replaced by
$d[i,j] = d[i-1,j] \text{ or } d[i,j-w[i]] \cdot [j \geq w[i]]$
It is also easy to change code which uses one row. Now we can immediately use the values of improved cells, so we only need to change the direction of the inner loop:
```cpp
REP(i, n) {
  for (int j = w[i]; j <= M; ++j) {
    d[j] |= d[j - w[i]];
  }
}
```
This gives us $O(nM)$ algorithm for the unbounded knapsack problem.
### Limited amount of items of each kind
A little bit harder variant is when we limit number of items we can take from each kind. Denote by $b_i$ number of items of the $i$-th kind. This is called bounded knapsack problem. Of course the naive approach would be to copy each item $b_i$ times and use the first algorithm for 0-1 knapsack. If $b_i \leq B$, then this would use time $O(nMB)$, which is not very fast.
But copying each item $b_i$ times is quite wasteful. We do not need to examine each item separately, since they are indistinguishable anyway, but we need some method to allow us to take any number of such items between $0$ and $b_i$, or in other words a bunch of items of total weight from the set $\{0, w_i, 2w_i, \ldots, b_i w_i\}$.
In order to do it more efficiently, we will group the items in “packets”. It is clearer to see how to do it if we assume that $b_i = 2^k-1$ for some $k$. Then we can form $k$ packets of sizes $w_i, 2w_i, 4w_i, \ldots, 2^{k-1} w_i$. Now if we want to take some packets of total weight $jw_i$, we just take packets based on the set bits in the binary representation of $j$. In the general case when $2^k \leq b_i < 2^{k+1}$ we add an additional packet of size $r = (b_i - 2^k + 1)w_i$. If we take this packet, the remaining weight $b_i w_i - r = (2^k - 1)w_i$ is smaller than $2^k w_i$, so it can be covered by choosing from the first $k$ packets.
Therefore for the elements of the $i$-th type we will have $O(\log b_i)$ packets, so the algorithm (0-1 knapsack on these packets) will run in time $O(n M \log B)$.
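A sketch of this binary splitting (the resulting packet weights are then fed to the 0-1 knapsack routine above):

```cpp
#include <vector>
using namespace std;

// Split b copies of an item of weight wi into O(log b) packets so that
// every count in {0, 1, ..., b} is representable by a subset of packets.
vector<int> make_packets(int wi, int b) {
  vector<int> packets;
  for (int p = 1; p <= b; p *= 2) {        // packets of 1, 2, 4, ... items
    packets.push_back(p * wi);
    b -= p;
  }
  if (b > 0) packets.push_back(b * wi);    // the remainder packet
  return packets;
}
```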
But we can do faster. Let's consider adding the $i$-th type. Observe that the recurrence formula is as follows:
$d_{\mathrm{new}}[j] = \mathrm{or}\big(d[j], d[j-w_i], d[j-2w_i], \ldots, d[j-b_i w_i]\big),$
where for notational simplicity we assume that $d[\cdot]$ is false for negative indices. That means that the new value is true if there is at least one true value among the $b_i+1$ values which we get starting from index $j$ and jumping backwards every other $w_i$-th index. Note that for $w_i=1$ we could just say that “there is at least one true value among the last $b_i+1$ values”. Such a condition would be very easy to check, by maintaining the index $jr$ of the most recent true value; then the condition is true if $j - jr \leq b_i$.
But we can extend this idea for bigger values of $w_i$, by maintaining an array $jr$ of size $w_i$, where different indices keep track of most recent true values for cells which indices have the same remainder when divided by $w_i$. Maybe it is easier to explain it by providing the code:
```cpp
const int INFTY = 1000000000;
int b[N];           // b[i] = number of available items of the i-th kind
int jr[MAXM+1];

REP(i, n) {
  const int wi = w[i], bi = b[i];
  REP(j, wi) jr[j] = -INFTY;
  REP(j, M+1) {
    bool old = d[j];                 // value before adding this kind
    d[j] |= j - jr[j % wi] <= bi * wi;
    if (old) jr[j % wi] = j;         // track trues of the OLD row only,
  }                                  // otherwise copies of this kind could
}                                    // be reused more than b[i] times
```
The time complexity of this algorithm is $O(nM)$. The algorithm highlights a quite important idea we will use when creating algorithms for more complicated variants of bounded knapsacks. The idea helps us work with recursive formulas which, when calculating $d[j]$, depend on values $d[j-kw]$ for $0\leq k\leq b$. The other way of thinking about this is as follows: we partition array $d$ into $w$ arrays $d^0, d^1, \ldots, d^{w-1}$ by taking into array $d^J$ every $w$-th element from array $d$ starting from index $J$, i.e. $d^J[j'] = d[J + j'w]$. Then the recursive formula for $j = J + j'w$ can be written as follows:
$d^J_{\mathrm{new}}[j'] = \mathrm{or}\big(d^J[j'], d^J[j'-1], d^J[j'-2], \ldots, d^J[j'-b]\big),$
thus it depends on a consecutive range of indices $d^J[j'-k]$ for $0 \leq k\leq b$. And often we can propose a faster algorithm which uses the fact that the range of indices is consecutive.
But for the basic bounded knapsack problem we can show a different algorithm of $O(nM)$ complexity, which does not need additional memory. Once again let's consider adding the $i$-th type. If $d[j]$ is true, we can generate subsets $j + w_i, j + 2w_i, \ldots, j + b_i \cdot w_i$, thus we should set all these indices of array $d$ to true. Doing it naively would need time $O(nMB)$.
But let's iterate over $j$ decreasingly and take a closer look at what happens. When examining value $j$, we can assume that for all values $j' > j$: if $d[j']$ is true, then $d[j' + k'\cdot w_i]$ is also true for all $k' \leq b_i$. So if $d[j]$ is true, we iterate over $k$ and set $d[j + k\cdot w_i]$ to true, until we hit a cell which was already set. Of course it couldn't have been set in this iteration (since then $d[j + (k-1)\cdot w_i]$ would also be set), thus the size $j+k\cdot w_i$ can be obtained using elements of types smaller than $i$. But this means that when the algorithm previously examined $j+k\cdot w_i$, the value $d[j+k\cdot w_i]$ was true, so it already set to true all values $d[j+(k+k')\cdot w_i]$ for all $k' \leq b_i$. Therefore we can stop our iteration over $k$. The code is as follows:
```cpp
REP(i, n) {
  for (int j = M - w[i]; j >= 0; --j) {
    if (d[j]) {
      int k = b[i], x = j + w[i];
      while (k > 0 && x <= M && !d[x]) {
        d[x] = true;
        --k;
        x += w[i];
      }
    }
  }
}
```
Observe that the body of the inner while loop runs only when some cell $d[x]$ changes its value to true (the loop fills cells and stops when it finds a cell filled before). Each cell can change to true only once, so this happens at most $M$ times in total, and the total time complexity is $O(nM)$.
### Sum of weights of items is limited
This algorithm can also help us in a slight variant of 0-1 knapsack problem in which we have a limit on the sum of weights of all items, i.e. $\sum w_i \leq S$. The idea is that if $S$ is small, then there cannot be many items of different weights (weights of items have to repeat). More precisely, if we have $k$ items of different weights, their total weight must be equal to at least $1 + 2 + \ldots + k = O(k^2)$, so the number of different weights is $O(\sqrt S)$. Thus we can utilize this fact and group items of the same weight and use algorithm for bounded knapsack problem. Thanks to that we add a bunch of items of the same weight in linear time, which results in time complexity $O(\sqrt S M)$.
Of course this is after a preprocessing phase in which we form the groups. It will run in time $O(n \log n)$ or $O(n + W)$, where $W$ is an upper bound on the item weight, depending on how we sort the weights. An important distinction between this and all previous algorithms is that in the previous ones we added items one by one, each in time $O(M)$, and now we must enforce some order by cleverly grouping them.
Note that exactly the same algorithm works for bounded knapsack with limit on the sum of weights.
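A sketch of the grouping preprocessing (counting sort over the weights; add_bounded is a placeholder name standing for the jr-based $O(M)$ bounded-knapsack routine above):

```cpp
#include <vector>
using namespace std;

void add_bounded(int w, int b);  // assumed: the jr-based routine above

// Group equal weights, then add each group as one bounded item kind;
// at most O(sqrt(S)) distinct weights exist, giving O(sqrt(S) * M) total.
void build(const vector<int>& weights, int W) {
  vector<int> cnt(W + 1, 0);
  for (int wi : weights) cnt[wi]++;        // counting sort, O(n + W)
  for (int wi = 1; wi <= W; ++wi)
    if (cnt[wi] > 0)
      add_bounded(wi, cnt[wi]);            // one group = one item kind
}
```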
### Big knapsack
Most of the variants we discuss here have a linear dependency on the knapsack size $M$. That is no surprise, since the knapsack problem is NP-complete, and for big limits we know only exponential algorithms, for instance the meet-in-the-middle one, which after a building phase of time $O(n2^{n/2})$ and memory $O(2^{n/2})$ can answer queries in time $O(2^{n/2})$; a sketch follows below.
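A sketch of that approach (helper names are ours, not from the article): precompute all subset sums of each half of the items, sort the second list once, and answer a query by scanning one list while binary-searching the other (so a query costs $O(2^{n/2})$ up to a logarithmic factor):

```cpp
#include <algorithm>
#include <vector>
using namespace std;

// All 2^(|w|) subset sums of the given items.
vector<long long> subset_sums(const vector<long long>& w) {
  vector<long long> s = {0};
  for (long long x : w) {
    size_t m = s.size();
    for (size_t i = 0; i < m; ++i) s.push_back(s[i] + x);  // with/without x
  }
  return s;
}

// Is there a subset of total weight exactly j?  A = sums of the first half,
// B = sums of the second half, sorted ascending.
bool query(const vector<long long>& A, const vector<long long>& B,
           long long j) {
  for (long long a : A)
    if (a <= j && binary_search(B.begin(), B.end(), j - a))
      return true;
  return false;
}
```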
But there is one more interesting variant for big values of $M$ in which the weights $w_i \leq W$ are small. Let's take any item weight $w$ and build an array $d[j]$ for $0\leq j < w$ in which $d[j]$ is the smallest weight among all obtainable subsets whose weight gives remainder $j$ when divided by $w$. Having this array, answering queries in constant time is easy, since we can obtain weights $d[j] + kw$ for any $k\geq 0$:
```cpp
// j may exceed the 32-bit range when M is big, hence long long
bool query(long long j) { return d[j % w] <= j; }
```
The array can be calculated using shortest paths algorithm for directed graphs. We consider a graph with $w$ vertices $v_0, \ldots, v_{w-1}$. For every vertex $v_j$ and every item of weight $w_i$ we add a directed edge from $v_j$ to $v_{(j + w_i) \bmod w}$ with length $w_i$. Then $d[j]$ is the length of the shortest path from $v_0$ to $v_j$.
The overall time complexity depends on the size of this graph. Of course, it is best to use the smallest weight as the value $w$. Moreover, the number of edges from each vertex is bounded by $\min(n,w)$, since we need to store only the cheapest edge between each pair of vertices. So depending on whether the graph is dense or sparse, it makes sense to use different implementations of Dijkstra's algorithm: a bucket one, $O(n + w^{3/2} + w\min(n,w))$, for a dense graph, or a binary heap one, $O(wn\log w)$, for a sparse graph; a heap-based sketch follows below.
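A heap-based sketch of this construction (assuming $w$ is the smallest item weight and items holds all the weights):

```cpp
#include <queue>
#include <vector>
using namespace std;

// d[j] = smallest obtainable subset weight congruent to j (mod w),
// via Dijkstra from vertex 0 on the w-vertex residue graph.
vector<long long> build_residues(const vector<int>& items, int w) {
  const long long INF = 4e18;
  vector<long long> d(w, INF);
  priority_queue<pair<long long,int>, vector<pair<long long,int>>,
                 greater<>> pq;
  d[0] = 0;
  pq.push({0, 0});
  while (!pq.empty()) {
    auto [dist, v] = pq.top(); pq.pop();
    if (dist != d[v]) continue;            // stale heap entry
    for (int wi : items) {
      int u = (v + wi) % w;
      if (dist + wi < d[u]) {
        d[u] = dist + wi;
        pq.push({d[u], u});
      }
    }
  }
  return d;                                // then: query(j) = d[j % w] <= j
}
```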
To be continued… | 2019-01-20 01:23:35 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 2, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7659841179847717, "perplexity": 328.335936098809}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-04/segments/1547583688396.58/warc/CC-MAIN-20190120001524-20190120023524-00436.warc.gz"} |
https://deepai.org/publication/uniform-convergence-and-generalization-for-nonconvex-stochastic-minimax-problems | # Uniform Convergence and Generalization for Nonconvex Stochastic Minimax Problems
This paper studies the uniform convergence and generalization bounds for nonconvex-(strongly)-concave (NC-SC/NC-C) stochastic minimax optimization. We first establish the uniform convergence between the empirical minimax problem and the population minimax problem and show the $\tilde{\mathcal{O}}(d\kappa^2\epsilon^{-2})$ and $\tilde{\mathcal{O}}(d\epsilon^{-4})$ sample complexities respectively for the NC-SC and NC-C settings, where $d$ is the dimension number and $\kappa$ is the condition number. To the best of our knowledge, this is the first uniform convergence measured by the first-order stationarity in stochastic minimax optimization. Based on the uniform convergence, we shed light on the sample and gradient complexities required for finding an approximate stationary point for stochastic minimax optimization in the NC-SC and NC-C settings.
## 1 Introduction
In this paper, we consider nonconvex stochastic minimax problems:
$$\min_{x\in\mathcal{X}}\max_{y\in\mathcal{Y}}\ F(x,y)\triangleq \mathbb{E}_{\xi}[f(x,y;\xi)], \qquad (1)$$
where $\mathcal{X}$ and $\mathcal{Y}$ (subsets of Euclidean spaces, with $d$ denoting the dimension of $x$) are two nonempty closed convex sets, $\xi$ is a random variable following an unknown distribution $\mathcal{P}$, and $f$ is continuously differentiable and Lipschitz smooth jointly in $x$ and $y$ for any $\xi$. We denote the objective (1) as the population minimax problem. Throughout the paper, we focus on the case where $F$ is nonconvex in $x$ and (strongly) concave in $y$, i.e., nonconvex-(strongly)-concave (NC-SC/NC-C). Such problems widely appear in practical applications like adversarial training (madry2018towards; wang2019convergence), generative adversarial networks (goodfellow2014generative; sanjabi2018convergence; lei2020sgd), reinforcement learning (dai2017learning; dai2018sbeed; huang2020convergence) and robust training (sinha2018certifying). The distribution $\mathcal{P}$ is often unknown, and one generally only has access to a dataset $S=\{\xi_1,\ldots,\xi_n\}$ of $n$ i.i.d. samples from $\mathcal{P}$; one instead solves the following empirical minimax problem:
$$\min_{x\in\mathcal{X}}\max_{y\in\mathcal{Y}}\ F_S(x,y)\triangleq \frac{1}{n}\sum_{i=1}^{n} f(x,y;\xi_i). \qquad (2)$$
Since the functions $F$ and $F_S$ are nonconvex in $x$, and pursuing their global optimal solutions is intractable in general, one instead aims to design an algorithm $A$ that finds an $\epsilon$-stationary point,
$$\|\nabla\Phi(A_x(S))\|\le\epsilon \quad\text{or}\quad \mathrm{dist}\big(0,\partial\Phi(A_x(S))\big)\le\epsilon, \qquad (3)$$
where $\Phi(x)\triangleq\max_{y\in\mathcal{Y}}F(x,y)$ and $\Phi_S(x)\triangleq\max_{y\in\mathcal{Y}}F_S(x,y)$ are the primal functions, $A_x(S)$ is the $x$-component of the output of an algorithm $A$ for solving (2), and $\partial\Phi$ is the (Fréchet) subdifferential of $\Phi$. When $\Phi$ is nonsmooth, we resort to the gradient norm of its Moreau envelope to measure the first-order stationarity, as it certifies proximity to a near-stationary point (davis2019stochastic).
Take the NC-SC setting as an example. The optimization error for solving the population minimax problem (1) consists of two terms (here, for simplicity of illustration, we assume there is no constraint and the primal functions are differentiable; the detailed setting will be formally introduced in Section 2):

$$\|\nabla\Phi(A_x(S))\| \;\le\; \underbrace{\|\nabla\Phi_S(A_x(S))\|}_{\text{optimization error}} \;+\; \underbrace{\|\nabla\Phi(A_x(S))-\nabla\Phi_S(A_x(S))\|}_{\text{generalization error}}, \qquad (4)$$

where the first term on the right-hand side corresponds to the optimization error of solving the empirical minimax problem (2) and the second term corresponds to the generalization error. Such a decomposition on the gradient norm has been studied recently in nonconvex minimization, e.g., foster2018uniform; mei2018landscape; davis2022graphical. There is a line of work that develops efficient algorithms for solving the empirical minimax problems, which gives a hint on the optimization error; see e.g. (luo2020stochastic; yang2020catalyst), just to list a few. However, a full characterization of the generalization error is still lacking.
Characterizing the generalization error is not easy, as both $A_x(S)$ and $\Phi_S$ depend on the dataset $S$, which induces correlation between them. One way to address such dependence issues in generalization bounds is to establish stability arguments for specific algorithms in stochastic optimization (bousquet2002stability; shalev2010learnability; hardt2016train) and stochastic minimax optimization (farnia2021train; lei2021stability; boob2021optimal; yang2022differentially). However, these stability-based generalization bounds have several drawbacks:
1. Generally they require a case-by-case analysis for different algorithms, i.e., these bounds are algorithm-dependent.
2. Existing stability analysis only applies to simple gradient-based algorithms for minimization and minimax problems (note that for minimax optimization, simple algorithms such as stochastic gradient descent ascent often turn out to be suboptimal), yet such analysis can be difficult to generalize to more sophisticated state-of-the-art algorithms.
3. Existing stability analysis generally requires specific parameters (e.g., stepsizes), which may misalign with those required for convergence analysis, thus making the generalization bounds less informative.
4. Existing stability-based generalization bounds generally use function value-based gap as the measurement of the algorithm, which may not be suitable concerning the nonconvex landscape.
To the best of our knowledge, there are no generalization bound results measured by the first-order stationarity in nonconvex minimax optimization.
To overcome these difficulties, we aim to derive generalization bounds via establishing the uniform convergence between the empirical minimax problem and the population minimax problem, i.e., bounding $\max_{x\in\mathcal{X}}\|\nabla\Phi(x)-\nabla\Phi_S(x)\|$. Note that uniform convergence is invariant to the choice of algorithms and provides an upper bound on the generalization error for any algorithm output $A_x(S)$; thus the derived generalization bound is algorithm-agnostic. Although uniform convergence has been extensively studied in the literature of stochastic optimization (kleywegt2002sample; mei2018landscape; davis2022graphical), a key difference in uniform convergence for minimax optimization is that the primal function $\Phi$ cannot be written as the average over i.i.d. random functions, and one needs to additionally characterize the difference between the inner maximizers of the population and empirical problems. Thus techniques in uniform convergence for classical stochastic optimization are not directly applicable.
We are interested in both the sample complexity and the gradient complexity for achieving stationarity convergence of the population minimax problem (1). Here the sample complexity refers to the number of samples $n$, and the gradient complexity refers to the number of gradient evaluations of $f$. Combining the derived generalization error with the optimization error of existing algorithms for finite-sum nonconvex minimax optimization, e.g., luo2020stochastic; yang2020catalyst; zhang2021complexity, one automatically obtains sample and gradient complexity bounds of these algorithms for solving the population minimax problem.
### 1.1 Contributions
Our contributions are two-fold:
• We establish the first uniform convergence results between the population and the empirical nonconvex minimax problems in the NC-SC and NC-C settings, measured by the gradients of the primal functions (or their Moreau envelopes). This provides an algorithm-agnostic generalization bound for any algorithm that solves the empirical minimax problem. Specifically, the sample complexities to achieve $\epsilon$-uniform convergence and an $\epsilon$-generalization error are $\tilde{\mathcal{O}}(d\kappa^2\epsilon^{-2})$ and $\tilde{\mathcal{O}}(d\epsilon^{-4})$ for the NC-SC and NC-C settings, respectively.
• Combined with algorithms for nonconvex finite-sum minimax optimization, the generalization results further imply gradient complexities for solving NC-SC and NC-C stochastic minimax problems, respectively. See Table 1 for a summary. In terms of the dependence on the accuracy $\epsilon$ and the condition number $\kappa$, the achieved sample complexities significantly improve over the sample complexities of SOTA SGD-type algorithms in the literature (luo2020stochastic; yang2022faster; rafique2021weakly; lin2020gradient; boct2020alternating), and the achieved gradient complexities match existing SOTA results. The dependence on the dimension $d$ may be avoided if one directly analyzes SGD-type algorithms, as shown in the literature on stochastic optimization (kleywegt2002sample; nemirovski2009robust; davis2022graphical; hu2020sample) and evidenced for both NC-SC and NC-C minimax problems in our paper.
### 1.2 Literature Review
#### Nonconvex Minimax Optimization
In the NC-SC setting, many algorithms have been proposed, e.g., nouiehed2019solving; lin2020gradient; lin2020near; luo2020stochastic; yang2020global; boct2020alternating; xu2020unified; lu2020hybrid; yan2020optimal; guo2021novel; sharma2022federated. Among them, (zhang2021complexity) achieved the optimal complexity in the deterministic case by introducing the Catalyst acceleration scheme (lin2015universal; paquette2018catalyst) into minimax problems, and luo2020stochastic; zhang2021complexity achieved the best known complexities in the finite-sum case so far. For purely stochastic NC-SC minimax problems, yang2022faster introduced a stochastic Smoothed-AGDA algorithm which achieves the best complexity, while luo2020stochastic achieves the best complexity under an additional average smoothness assumption. The lower bounds for NC-SC problems in the deterministic, finite-sum, and stochastic settings have been extensively studied recently in zhang2021complexity; han2021lower; li2021complexity.
In general, NC-C problems are harder than NC-SC problems since their primal function can be both nonsmooth and nonconvex (thekumparampil2019efficient). Recent years witnessed a surge of algorithms for NC-C problems in deterministic, finite-sum, and stochastic settings, e.g., zhang2020single; ostrovskii2021efficient; thekumparampil2019efficient; zhao2020primal; nouiehed2019solving; yang2020catalyst; lin2020gradient; boct2020alternating, to name a few. To the best of our knowledge, thekumparampil2019efficient; yang2020catalyst; lin2020near achieved the best complexity in the deterministic case, while yang2020catalyst achieved the best complexity in the finite-sum case, and rafique2021weakly provided the best complexity in the purely stochastic case.
#### Uniform Convergence
A series of works from stochastic optimization and statistical learning theory studied uniform convergence on the worst-case differences between the population objective and its empirical objective constructed via sample average approximation (SAA, also known as empirical risk minimization). Interested readers may refer to prominent results in statistical learning (fisher1922mathematical; vapnik1999overview; van2000asymptotic). For finite-dimensional problems, kleywegt2002sample showed a sample complexity for achieving $\epsilon$-uniform convergence of function values in high probability, i.e., $\sup_{x}|F_S(x)-F(x)|\le\epsilon$. For nonconvex empirical objectives, mei2018landscape and davis2022graphical established sample complexities of uniform convergence measured by the stationarity for nonconvex smooth and weakly convex functions, respectively. For infinite-dimensional functional stochastic optimization with a finite VC-dimension, uniform convergence still holds (vapnik1999overview). In addition, wang2017differentially uses uniform convergence to demonstrate the generalization and the gradient complexity of differentially private algorithms for stochastic optimization.
#### Stability-Based Generalization Bounds
Another line of research focuses on generalization bounds of stochastic optimization via the uniform stability of specific algorithms, including SAA (bousquet2002stability; shalev2009stochastic), stochastic gradient descent (hardt2016train; bassily2020stability), and uniformly stable algorithms (klochkov2021stability). Recently, a series of works further extended the analysis to understand the generalization performances of various algorithms in minimax problems. farnia2021train gave the generalization bound for the outputs of gradient-descent-ascent (GDA) and proximal-point algorithm (PPA) in both (strongly)-convex-(strongly)-concave and nonconvex-nonconcave smooth minimax problems. lei2021stability focused on GDA and provided a comprehensive study for different settings of minimax problems with various generalization measures on function value gaps. boob2021optimal provided stability and generalization results of extragradient algorithm (EG) in the smooth convex-concave setting. On the other hand, zhang2021generalization studied stability and generalization of the empirical minimax problem under the (strongly)-convex-(strongly)-concave setting, assuming that one can find the optimal solution to the empirical minimax problem.
## 2 Problem Setting
#### Notations
Throughout the paper, we use $\|\cdot\|$ to denote the Euclidean norm and $\nabla g$ the gradient of a function $g$; for nonnegative functions $g$ and $h$, we say $g = \mathcal{O}(h)$ if $g \le C h$ for some constant $C > 0$ (and $\tilde{\mathcal{O}}$ further hides logarithmic factors). We denote by $\mathcal{P}_{\mathcal{X}}$ the projection operator onto $\mathcal{X}$. Let $A(S)$ denote the output of an algorithm $A$ on the empirical minimax problem (2) with dataset $S$. Given $\mu\ge 0$, we say a function $g$ is $\mu$-strongly convex if $g - \frac{\mu}{2}\|\cdot\|^2$ is convex, and it is $\mu$-strongly concave if $-g$ is $\mu$-strongly convex. A function $g$ is $\rho$-weakly convex if $g + \frac{\rho}{2}\|\cdot\|^2$ is convex (see more notations and standard definitions in Appendix A).
###### Definition 2.1 (Smooth Function)
We say a function $g$ is $L$-smooth jointly in $(x,y)$ if $g$ is continuously differentiable and there exists a constant $L>0$ such that for any $(x,y)$ and $(x',y')$, we have $\|\nabla_x g(x,y)-\nabla_x g(x',y')\|\le L\|(x,y)-(x',y')\|$ and $\|\nabla_y g(x,y)-\nabla_y g(x',y')\|\le L\|(x,y)-(x',y')\|$.
By definition, it is easy to see that an $L$-smooth function is also $L$-weakly convex, as sketched below.
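This follows from the quadratic lower bound implied by $L$-smoothness; a routine one-line check, included here for completeness: for $h\triangleq g+\frac{L}{2}\|\cdot\|^2$,

$$g(y)\ \ge\ g(x)+\langle\nabla g(x),\,y-x\rangle-\frac{L}{2}\|y-x\|^2 \quad\Longleftrightarrow\quad h(y)\ \ge\ h(x)+\langle\nabla h(x),\,y-x\rangle,$$

so $h$ lies above all of its tangents and is therefore convex, which is exactly $L$-weak convexity of $g$. Next we introduce the main assumptions used throughout the paper.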
###### Assumption 2.1 (Main Settings)
We assume the following:
• The function $f(\cdot,\cdot;\xi)$ is $L$-smooth jointly in $(x,y)$ for any $\xi$.
• The function $f(x,\cdot;\xi)$ is $\mu$-strongly concave in $y$ for any $x\in\mathcal{X}$ and any $\xi$, where $\mu\ge 0$.
• The gradient norms of $\nabla_x f$ and $\nabla_y f$ are bounded by constants $G_x$ and $G_y$, respectively, for any $(x,y)$ and $\xi$.
• The domains $\mathcal{X}$ and $\mathcal{Y}$ are compact convex sets, i.e., there exist constants $D_{\mathcal{X}},D_{\mathcal{Y}}>0$ such that $\|x\|\le D_{\mathcal{X}}$ for any $x\in\mathcal{X}$ and $\|y\|\le D_{\mathcal{Y}}$ for any $y\in\mathcal{Y}$, respectively.
Note that the compact domain assumption is widely used in the uniform convergence literature (kleywegt2002sample; davis2022graphical).
Under Assumption 2.1, the objective function $F$ is $L$-smooth in $(x,y)$ and $\mu$-strongly concave in $y$ for any $x$. When $\mu>0$, we call the population minimax problem (1) a nonconvex-strongly-concave (NC-SC) minimax problem; when $\mu=0$, we call it a nonconvex-concave (NC-C) minimax problem.
###### Definition 2.2 (Moreau Envelope)
For a $\rho$-weakly convex function $\Phi$ and $0<\lambda<\rho^{-1}$, we use $\Phi_\lambda(x)$ and $\mathrm{prox}_{\lambda\Phi}(x)$ to denote the Moreau envelope of $\Phi$ and the proximal point of $\Phi$ at a given point $x$, defined as follows:
$$\Phi_\lambda(x)\triangleq\min_{z\in\mathcal{X}}\Big\{\Phi(z)+\frac{1}{2\lambda}\|z-x\|^2\Big\},\qquad \mathrm{prox}_{\lambda\Phi}(x)\triangleq\operatorname*{argmin}_{z\in\mathcal{X}}\Big\{\Phi(z)+\frac{1}{2\lambda}\|z-x\|^2\Big\}. \qquad (5)$$
Below we recall some important properties of the primal function and its Moreau envelope presented in the literature (davis2019stochastic; thekumparampil2019efficient; lin2020gradient).
###### Lemma 2.1 (Properties of Φ and Φλ)
In the NC-SC setting ($\mu>0$), both $\Phi$ and $\Phi_S$ are $\mathcal{O}(\kappa L)$-smooth, with the condition number $\kappa\triangleq L/\mu$. In the NC-C setting ($\mu=0$), the primal function $\Phi$ is $L$-weakly convex, its Moreau envelope $\Phi_{1/(2L)}$ is differentiable and Lipschitz smooth, and moreover $\nabla\Phi_\lambda(x)=\lambda^{-1}\big(x-\mathrm{prox}_{\lambda\Phi}(x)\big)$.
#### Performance Measurement
In the NC-SC setting, the primal functions $\Phi$ and $\Phi_S$ are both Lipschitz smooth. To account for the constraint, we measure the difference between the population and empirical minimax problems using the generalized gradients of the population and empirical primal functions, $\mathcal{G}_\Phi$ and $\mathcal{G}_{\Phi_S}$, where the generalized (projected) gradient of a smooth function $g$ is $\mathcal{G}_g(x)\triangleq\lambda^{-1}\big(x-\mathcal{P}_{\mathcal{X}}(x-\lambda\nabla g(x))\big)$ for a suitable stepsize $\lambda>0$. The following inequality summarizes the relationship between the measurement in terms of the generalized gradient and the measurement in terms of the gradient used in Section 1.
$$\underbrace{\mathbb{E}\big\|\mathcal{G}_{\Phi}(A_x(S))-\mathcal{G}_{\Phi_S}(A_x(S))\big\|}_{\text{generalization error of Algorithm }A}\;\le\;\mathbb{E}\big\|\nabla\Phi(A_x(S))-\nabla\Phi_S(A_x(S))\big\|\;\le\;\underbrace{\mathbb{E}\Big[\max_{x\in\mathcal{X}}\|\nabla\Phi(x)-\nabla\Phi_S(x)\|\Big]}_{\text{algorithm-agnostic uniform convergence}},\qquad (6)$$
where the first inequality holds as projection is a non-expansive operator. The term in the left-hand side (LHS) above is the generalization error of an algorithm we desire in the NC-SC case.
For the NC-C case, the primal function $\Phi$ is $L$-weakly convex, and we use the gradient of its Moreau envelope to characterize (near-)stationarity (davis2019stochastic). We measure the proximity between the population and empirical problems using the difference between the gradients of their respective Moreau envelopes. The generalization error and the uniform convergence in the NC-C case are given as follows:
$$\underbrace{\mathbb{E}\big\|\nabla\Phi_{1/(2L)}(A_x(S))-\nabla\Phi_{S,1/(2L)}(A_x(S))\big\|}_{\text{generalization error of Algorithm }A}\;\le\;\underbrace{\mathbb{E}\Big[\max_{x\in\mathcal{X}}\big\|\nabla\Phi_{1/(2L)}(x)-\nabla\Phi_{S,1/(2L)}(x)\big\|\Big]}_{\text{algorithm-agnostic uniform convergence}},\qquad (7)$$

where $\Phi_{S,1/(2L)}$ denotes the Moreau envelope of the empirical primal function $\Phi_S$.
The term in the LHS above is the generalization error of an algorithm we desire in the NC-C case.
## 3 Uniform Convergence and Generalization Bounds
In this section, we discuss the sample complexity for achieving $\epsilon$-uniform convergence and $\epsilon$-generalization error for NC-SC and NC-C stochastic minimax optimization.
### 3.1 NC-SC Stochastic Minimax Optimization
Under the NC-SC setting, we demonstrate in the following theorem the uniform convergence between the gradients of the primal functions of the population and empirical minimax problems, which provides an upper bound on the generalization error for any algorithm $A$. We defer the proof to Appendix B.
###### Theorem 3.1 (Uniform Convergence and Generalization Error, NC-SC)
Under Assumption 2.1 with $\mu>0$, we have

$$\mathbb{E}\Big[\max_{x\in\mathcal{X}}\|\nabla\Phi(x)-\nabla\Phi_S(x)\|\Big]=\tilde{\mathcal{O}}\big(d^{1/2}\kappa n^{-1/2}\big). \qquad (8)$$

Furthermore, to achieve $\epsilon$-uniform convergence, and hence an $\epsilon$-generalization error for any algorithm $A$, it suffices to have

$$n=n^*_{\mathrm{NCSC}}\triangleq\tilde{\mathcal{O}}\big(d\kappa^2\epsilon^{-2}\big). \qquad (9)$$
To the best of our knowledge, this is the first uniform convergence and algorithm-agnostic generalization error bound for NC-SC stochastic minimax problems. In comparison, existing works on generalization error analysis (farnia2021train; lei2021stability) utilize stability arguments for certain algorithms and thus are algorithm-specific. zhang2021generalization establish algorithm-agnostic stability and generalization in the strongly-convex-strongly-concave regime, yet their analysis does not extend to the nonconvex regime. Our generalization results apply to any algorithm for solving the finite-sum problem, including SOTA algorithms like Catalyst-SVRG (zhang2021complexity) and the finite-sum version of SREDA (luo2020stochastic). These algorithms are generally very complicated, and they lack stability-based generalization bound analyses.
The achieved sample complexity further implies that for any algorithm that achieves an $\epsilon$-stationary point of the empirical minimax problem, its sample complexity for finding an $\epsilon$-stationary point of the population minimax problem is $\tilde{O}(d\kappa^2\epsilon^{-2})$. In terms of the dependence on the accuracy $\epsilon$ and the condition number $\kappa$, such sample complexity is better than the SOTA sample complexity results achieved via directly applying gradient-based methods to the population minimax problem, i.e., $O(\kappa^2\epsilon^{-4})$ by Stochastic Smoothed-AGDA (yang2022faster) and $O(\kappa^3\epsilon^{-3})$ by SREDA (luo2020stochastic).
### 3.2 NC-C Stochastic Minimax Optimization
In this subsection, we derive the uniform convergence and algorithm-agnostic generalization bounds for NC-C stochastic minimax problems in the following theorem. Recall that the primal function $\Phi$ is $L$-weakly convex (thekumparampil2019efficient) and $\nabla\Phi$ is not well-defined. We use the gradient of the Moreau envelope of the primal function as the measurement (davis2019stochastic).
###### Theorem 3.2 (Uniform Convergence and Generalization Error, NC-C)
Under Assumption 2.1 with $\mu = 0$, we have
$$\mathbb{E}\Big[\max_{x\in\mathcal{X}}\big\|\nabla\Phi_S^{1/(2L)}(x)-\nabla\Phi^{1/(2L)}(x)\big\|\Big]=\tilde{O}\big(d^{1/4}n^{-1/4}\big).\quad (10)$$
Furthermore, to achieve $\epsilon$-uniform convergence and $\epsilon$-generalization error for any algorithm $A$, i.e., to make the error in (7) at most $\epsilon$, it suffices to have
$$n = n^*_{\mathrm{NCC}} \triangleq \tilde{O}\big(d\epsilon^{-4}\big). \quad (11)$$
#### Proof Sketch
The analysis of Theorem 3.2 consists of three parts. By the expression of the gradient of the Moreau envelope, $\nabla\Phi^\lambda(x)=\lambda^{-1}\big(x-\mathrm{prox}_{\lambda\Phi}(x)\big)$, it holds when $\lambda<1/L$ that
$$\big\|\nabla\Phi_S^\lambda(x)-\nabla\Phi^\lambda(x)\big\|\le\frac{1}{\lambda}\big\|\mathrm{prox}_{\lambda\Phi}(x)-\mathrm{prox}_{\lambda\Phi_S}(x)\big\|.$$
We first use an $\bar\epsilon$-net (vapnik1999overview) to handle the dependence issue between the point $x$ and the sample $S$.
Then we build a connection between NC-C and NC-SC stochastic minimax optimization problems by adding an $\ell_2$-regularization $-\frac{\nu}{2}\|y\|^2$ and carefully choosing the regularization parameter $\nu$. The following lemma characterizes the distance between the proximal points of the primal functions of the original NC-C problem and of the regularized NC-SC problem. Note that the lemma may be of independent interest for the design and analysis of gradient-based methods for NC-C problems.
###### Lemma 3.1
For $\nu>0$, denote $\hat\Phi(x)\triangleq\max_{y\in\mathcal{Y}}\big\{F(x,y)-\frac{\nu}{2}\|y\|^2\big\}$ as the primal function of the regularized problem. It holds for $\lambda<\frac{1}{L+\nu}$ that
$$\big\|\mathrm{prox}_{\lambda\Phi}(x)-\mathrm{prox}_{\lambda\hat\Phi}(x)\big\|^2\le\frac{\nu D_{\mathcal{Y}}^2\,\lambda}{1-\lambda(L+\nu)}.$$
This lemma implies that for a small regularization parameter $\nu$, the difference between the proximal points of the primal function of the NC-C problem and of the primal function of the regularized NC-SC problem is small.
Proof. Since $F$ is $L$-smooth, it is obvious that $F(x,y)-\frac{\nu}{2}\|y\|^2$ is $(L+\nu)$-smooth. By (thekumparampil2019efficient, Lemma 3), $\hat\Phi$ is $(L+\nu)$-weakly convex in $x$. Therefore, $\hat\Phi(z)+\frac{1}{2\lambda}\|z-x\|^2$ is $\big(1/\lambda-(L+\nu)\big)$-strongly convex in $z$ for any fixed $x$. Denote
$$\hat y(x)\triangleq\mathop{\arg\max}_{y\in\mathcal{Y}}\Big\{F(x,y)-\frac{\nu}{2}\|y\|^2\Big\},\qquad y^*(x)\triangleq\mathop{\arg\max}_{y\in\mathcal{Y}}F(x,y).\quad (12)$$
Write $p\triangleq\mathrm{prox}_{\lambda\Phi}(x)$ and $\hat p\triangleq\mathrm{prox}_{\lambda\hat\Phi}(x)$ for brevity. It holds that
$$\begin{aligned}
\frac{1}{2}\Big(\frac{1}{\lambda}-(L+\nu)\Big)\|p-\hat p\|^2 &\le \hat\Phi(p)+\frac{1}{2\lambda}\|p-x\|^2-\hat\Phi(\hat p)-\frac{1}{2\lambda}\|\hat p-x\|^2\\
&= F\big(p,\hat y(p)\big)-\frac{\nu}{2}\|\hat y(p)\|^2+\frac{1}{2\lambda}\|p-x\|^2-F\big(\hat p,\hat y(\hat p)\big)+\frac{\nu}{2}\|\hat y(\hat p)\|^2-\frac{1}{2\lambda}\|\hat p-x\|^2\\
&\le F\big(p,y^*(p)\big)-\frac{\nu}{2}\|\hat y(p)\|^2+\frac{1}{2\lambda}\|p-x\|^2-F\big(\hat p,\hat y(\hat p)\big)+\frac{\nu}{2}\|\hat y(\hat p)\|^2-\frac{1}{2\lambda}\|\hat p-x\|^2\\
&\le F\big(p,y^*(p)\big)-\frac{\nu}{2}\|\hat y(p)\|^2+\frac{1}{2\lambda}\|p-x\|^2-F\big(\hat p,y^*(\hat p)\big)+\frac{\nu}{2}\|y^*(\hat p)\|^2-\frac{1}{2\lambda}\|\hat p-x\|^2\\
&= \Phi(p)+\frac{1}{2\lambda}\|p-x\|^2-\Phi(\hat p)-\frac{1}{2\lambda}\|\hat p-x\|^2+\frac{\nu}{2}\|y^*(\hat p)\|^2-\frac{\nu}{2}\|\hat y(p)\|^2\\
&\le \frac{\nu}{2}\|y^*(\hat p)\|^2-\frac{\nu}{2}\|\hat y(p)\|^2\\
&\le \frac{\nu D_{\mathcal{Y}}^2}{2},
\end{aligned}$$
where the first inequality holds by the strong convexity of $\hat\Phi(z)+\frac{1}{2\lambda}\|z-x\|^2$ in $z$ and the optimality of $\hat p$ for it, the first equality holds by the definition of $\hat\Phi$, the second inequality holds by the optimality of $y^*(p)$ for $\max_{y\in\mathcal{Y}}F(p,y)$, the third inequality holds by the optimality of $\hat y(\hat p)$ for the regularized inner problem, the second equality holds by the definition of $\Phi$, the fourth inequality holds by the optimality of $p$ for $\min_{z}\big\{\Phi(z)+\frac{1}{2\lambda}\|z-x\|^2\big\}$, and the last inequality holds by the compactness of the domain $\mathcal{Y}$, which concludes the proof.
It remains to characterize the distance between $\mathrm{prox}_{\lambda\hat\Phi}(x)$ and $\mathrm{prox}_{\lambda\hat\Phi_S}(x)$ and show that it is a sub-Gaussian random variable. For this distance, by definition, it is equivalent to the difference between the optimal solutions in $x$ of a strongly-convex-strongly-concave (SC-SC) population minimax problem and its empirical minimax problem. We utilize the existing stability-based results for SC-SC minimax optimization (zhang2021generalization) to build the upper bound for the distance and show the variable is sub-Gaussian. The proof of Theorem 3.2 is deferred to Appendix C.
To the best of our knowledge, this is the first algorithm-agnostic generalization error result in NC-C stochastic minimax optimization. Similar to the NC-SC setting, Theorem 3.2 indicates that the sample complexity to guarantee an $\epsilon$-generalization error in the NC-C case for any algorithm is $\tilde{O}(d\epsilon^{-4})$. In comparison, it is much better than the $O(\epsilon^{-6})$ sample complexity achieved by the SOTA stochastic approximation-based algorithms (rafique2021weakly) for NC-C stochastic minimax optimization for small accuracy $\epsilon$ and moderate dimension $d$.
###### Remark 3.1 (Comparison Between Minimization, NC-SC, and NC-C Settings)
For general stochastic nonconvex optimization, the sample complexity of achieving $\epsilon$-uniform convergence of gradients is $\tilde{O}(d\epsilon^{-2})$ (davis2022graphical; mei2018landscape). There are two key differences in minimax optimization.
1. The empirical primal function $\Phi_S$ is not in the form of an average over samples, and thus the existing analysis for minimization problems is not directly applicable. Instead, if we care about the uniform convergence in terms of the gradient of $F$, i.e., $\mathbb{E}\big[\max_{(x,y)\in\mathcal{X}\times\mathcal{Y}}\|\nabla F(x,y)-\nabla F_S(x,y)\|\big]$, the existing analysis in mei2018landscape directly gives a $\tilde{O}(d\epsilon^{-2})$ sample complexity.
2. For a given $x$, the optimal point $y_S^*(x)$ differs from $y^*(x)$, and such difference brings in an additional error term. In the NC-SC case, such error is upper bounded via the Lipschitz continuity of $y^*(\cdot)$, which is of the same scale as the error from the gradient difference of $F$. Thus the eventual uniform convergence bound is of the same order as that for the minimization problem (mei2018landscape; davis2022graphical). However, in the NC-C case, $y^*(x)$ may not be well defined. Instead, we bound the distance between
$$\hat y_S^*(x)\triangleq\mathop{\arg\max}_{y\in\mathcal{Y}}\Big\{F_S(x,y)-\frac{\nu}{2}\|y\|^2\Big\}\quad\text{and}\quad\hat y^*(x)\triangleq\mathop{\arg\max}_{y\in\mathcal{Y}}\Big\{F(x,y)-\frac{\nu}{2}\|y\|^2\Big\}$$
for a small regularization parameter $\nu>0$. Such error is controlled by the condition number of the regularized problem, which blows up as $\nu\to 0$. Thus the sample complexity for achieving $\epsilon$-uniform convergence in the NC-C case is larger than that of the NC-SC case.
We leave it for future investigation to see if one could achieve a smaller sample complexity in the NC-C case via a better characterization of the extra error brought in by the regularization in the NC-C setting.
### 3.3 Gradient Complexity for Solving Stochastic Nonconvex Minimax Optimization
The uniform convergence and the algorithm-agnostic generalization error shed light on the tightness of the complexity of algorithms for solving stochastic minimax optimization. We summarize related results in Table 1 and elaborate the details in this subsection.
Combining sample complexities for achieving $\epsilon$-generalization error and gradient complexities of existing algorithms for solving empirical minimax problems, we can directly obtain gradient complexities of these algorithms for solving population minimax problems. Note that the SOTA gradient complexity for solving NC-SC empirical problems is $\tilde{O}(\sqrt{n}\,\kappa^2\epsilon^{-2})$ (luo2020stochastic) (such gradient complexity holds when $n\gtrsim\kappa^2$, as mentioned in luo2020stochastic; our sample complexity result in Theorem 3.1 aligns with such requirements, and the results therein assume average smoothness, a weaker condition than the individual smoothness in our paper) and $\tilde{O}\big((n+n^{3/4}\sqrt{\kappa})\epsilon^{-2}\big)$ (zhang2021complexity), while for solving NC-C empirical problems the SOTA is achieved by Catalyst-SVRG (yang2020catalyst). We substitute the required sample size given by Theorem 3.1 and Theorem 3.2 to get the corresponding gradient complexity for solving the population minimax problem (1). Recalling the definition of (near)-stationarity, the next theorem shows the achieved gradient complexity; we defer the proof to Appendix D.
###### Theorem 3.3 (Gradient Complexity of Specific Algorithms)
Under Assumption 2.1, we have:
• In the NC-SC case, if we use the finite-sum version of SREDA proposed in luo2020stochastic for the empirical minimax problem (2) with $n=n^*_{\mathrm{NCSC}}=\tilde{O}(d\kappa^2\epsilon^{-2})$, the algorithm output is an $O(\epsilon)$-stationary point of the population minimax problem (1), and the corresponding gradient complexity is $\tilde{O}(\sqrt{d}\,\kappa^3\epsilon^{-3})$.
• In the NC-C case, if we use the Catalyst-SVRG algorithm proposed in yang2020catalyst for the empirical minimax problem (2) with size $n=n^*_{\mathrm{NCC}}=\tilde{O}(d\epsilon^{-4})$, the algorithm output is an $O(\epsilon)$-stationary point of the population minimax problem (1), and the corresponding gradient complexity is $\tilde{O}(d\epsilon^{-6})$.
#### Dependence on the Dimension d
The gradient complexities obtained in Theorem 3.3 come with a dependence on the dimension $d$, which stems from the uniform convergence argument, as it aims to bound the error at the worst-case $x\in\mathcal{X}$. On the other hand, to achieve a small optimization error on the population minimax problem, it only requires a small generalization error at the specific output $A_x(S)$. Thus the gradient complexity obtained from uniform convergence has its own limitation.
Nevertheless, the obtained sample and gradient complexities are still meaningful in terms of the dependence on the accuracy $\epsilon$. In addition, we point out that the dependence on $d$ can generally be avoided if one directly analyzes some SGD-type methods for the population minimax problem. We have witnessed in various settings that the complexity bound of SAA has a dependence on the dimension while there exist some SGD-type algorithms with dimension-free gradient complexities. See kleywegt2002sample and nemirovski2009robust for classical stochastic convex optimization, and davis2022graphical and davis2019stochastic for stochastic nonconvex optimization.
On the other hand, there are several structured machine learning models that enjoy dimension-free uniform convergence results
(davis2022graphical; foster2018uniform; mei2018landscape). We leave the investigation of dimension-free uniform convergence for specific applications with nonconvex minimax structure as a future direction.
#### Matching SGD-Type Algorithms in Stochastic Nonconvex Minimax Problems
In fact, the above argument that one can get rid of the dependence on $d$ in SGD-type algorithm analysis is already verified. In NC-SC stochastic minimax optimization, the stochastic version of the SREDA algorithm in luo2020stochastic achieves an $O(\kappa^3\epsilon^{-3})$ gradient complexity, which matches the first bullet point in Theorem 3.3 except for the dependence on the dimension. In the NC-C case, the PG-SMD algorithm proposed in rafique2021weakly achieves an $O(\epsilon^{-6})$ gradient complexity, which matches the second result in Theorem 3.3 while being free of the dimension dependence.
We point out that the discussion above relies on the gradient complexities of existing SOTA algorithms in NC-SC or NC-C finite-sum minimax optimization, which may not be sharp enough in terms of the dependence on the condition number $\kappa$ or the sample size $n$. It is still possible to further improve the gradient complexity if one could design faster algorithms for solving empirical nonconvex minimax optimization problems.
###### Remark 3.2 (Tightness of Lower and Upper Complexity Bounds)
In the NC-SC setting, zhang2021complexity provides a lower complexity bound for NC-SC finite-sum problems of $\Omega\big(n+\sqrt{n}\,\kappa\epsilon^{-2}\big)$ under the average smoothness assumption, which is strictly lower than the SOTA upper bounds in (luo2020stochastic; zhang2021complexity). If the lower bound is sharp in our setting, with the error decomposition (4), we can conjecture that there exists an algorithm for solving NC-SC stochastic minimax problems with a better gradient complexity of $\tilde{O}(\sqrt{d}\,\kappa^2\epsilon^{-3})$. For the NC-C setting, there is no specific lower complexity bound (to the best of our knowledge, currently there is no lower bound result specifically for NC-C minimax optimization; the existing lower bounds for nonconvex minimization (carmon2019lower; carmon2019lowerII; fang2018spider; zhou2019lower; arjevani2019lower) and NC-SC minimax problems (zhang2021complexity; han2021lower; li2021complexity) are trivial lower bounds for nonconvex minimax problems), so it is still an open problem whether the current SOTA complexity in yang2020catalyst is optimal. It remains an open question whether one can design an algorithm with improved complexity and what the lower complexity bound of the NC-C setting is.
On the other hand, the SOTA gradient complexity bound for NC-SC finite-sum problems is $\tilde{O}(\sqrt{n}\,\kappa^2\epsilon^{-2})$ (luo2020stochastic) and $\tilde{O}\big((n+n^{3/4}\sqrt{\kappa})\epsilon^{-2}\big)$ (zhang2021complexity): one has a better dependence on the sample size $n$ and the other has a better dependence on the condition number $\kappa$. With the latter upper bound, the error decomposition (4) implies a gradient complexity with an $\epsilon^{-4}$-type dependence on the accuracy, which is clearly sub-optimal in terms of the dependence on the accuracy $\epsilon$. Note that the $\epsilon$-dependence of the gradient complexity induced by the former upper bound, $\epsilon^{-3}$, has hit the lower bound in nonconvex smooth optimization (arjevani2019lower). We conjecture that the gradient complexity achieved in luo2020stochastic has an optimal dependence on the sample size $n$, as it provides a matching dependence on the accuracy with the lower bound when combined with our uniform convergence result.
## 4 Conclusion
In this paper, we take an initial step towards understanding the uniform convergence and the corresponding generalization performance of NC-SC and NC-C minimax problems measured by first-order stationarity. We hope that this work will shed light on the design of algorithms with improved complexities for solving stochastic nonconvex minimax optimization.
Several future directions are worth further investigation. It remains interesting to see whether we can improve the uniform convergence results under the NC-C setting, particularly the dependence on the accuracy $\epsilon$. In addition, it remains open whether one can design algorithms for the NC-SC finite-sum setting with better complexities and close the gap to the lower bound. In terms of generalization bounds, it remains open to derive algorithm-specific stability-based generalization bounds under the stationarity measurement.
## Appendix A Additional Definitions and Tools
For convenience, we summarize the notations commonly used throughout the paper.
• Population minimax problem and its primal function (footnote: another commonly used convergence criterion in minimax optimization is the first-order stationarity of $F$, i.e., $\|\nabla_x F(x,y)\|$ and $\|\nabla_y F(x,y)\|$ (or its corresponding gradient mapping) (lin2020gradient; xu2020unified); we refer readers to lin2020gradient, yang2022faster for a thorough comparison of these two measurements; in this paper, we always stick to the convergence measured by the stationarity of the primal function):
$$F(x,y)\triangleq\mathbb{E}_\xi f(x,y;\xi),\qquad \Phi(x)\triangleq\max_{y\in\mathcal{Y}}F(x,y),\qquad y^*(x)\triangleq\mathop{\arg\max}_{y\in\mathcal{Y}}F(x,y).$$
• Empirical minimax problem and its primal function
$$F_S(x,y)\triangleq\frac{1}{n}\sum_{i=1}^n f(x,y;\xi_i),\qquad \Phi_S(x)\triangleq\max_{y\in\mathcal{Y}}F_S(x,y),\qquad y_S^*(x)\triangleq\mathop{\arg\max}_{y\in\mathcal{Y}}F_S(x,y).$$
• Moreau envelope and corresponding proximal point (a one-dimensional example follows this list):
$$\Phi^\lambda(x)\triangleq\min_{z\in\mathcal{X}}\Big\{\Phi(z)+\frac{1}{2\lambda}\|z-x\|^2\Big\},\qquad \mathrm{prox}_{\lambda\Phi}(x)\triangleq\mathop{\arg\min}_{z\in\mathcal{X}}\Big\{\Phi(z)+\frac{1}{2\lambda}\|z-x\|^2\Big\},$$
$$\Phi_S^\lambda(x)\triangleq\min_{z\in\mathcal{X}}\Big\{\Phi_S(z)+\frac{1}{2\lambda}\|z-x\|^2\Big\},\qquad \mathrm{prox}_{\lambda\Phi_S}(x)\triangleq\mathop{\arg\min}_{z\in\mathcal{X}}\Big\{\Phi_S(z)+\frac{1}{2\lambda}\|z-x\|^2\Big\}.$$
• $\|\cdot\|$: the $\ell_2$-norm.
• $\nabla h$: the gradient of a function $h$.
• $P_{\mathcal{X}}$: the projection operator onto $\mathcal{X}$.
• $A_x(S)$: the output of an algorithm $A$ on the empirical minimax problem (2) with dataset $S$.
• NC / WC: nonconvex, weakly convex.
• NC-SC / NC-C: nonconvex-(strongly)-concave.
• SOTA: state-of-the-art.
• $d$: dimension number of $x$.
• $\kappa$: condition number $\kappa=L/\mu$; $L$: Lipschitz smoothness parameter; $\mu$: strong concavity parameter.
• $\tilde{O}(\cdot)$ hides poly-logarithmic factors.
• $h_1(x)=O(h_2(x))$ if $h_1(x)\le C\,h_2(x)$ for some $C>0$ and nonnegative functions $h_1$ and $h_2$.
• We say a function $h$ is convex if for any $x_1$, $x_2$ and $\theta\in[0,1]$, we have $h(\theta x_1+(1-\theta)x_2)\le\theta h(x_1)+(1-\theta)h(x_2)$.
• A function $h$ is $L$-smooth (here the smoothness definition for single-variable functions is subtly different from that of two-variable functions in Definition 2.1, so we list it here for completeness) if $h$ is continuously differentiable and there exists a constant $L>0$ such that $\|\nabla h(x_1)-\nabla h(x_2)\|\le L\|x_1-x_2\|$ holds for any $x_1$, $x_2$.
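For intuition, a standard one-dimensional example of the Moreau envelope and proximal point defined above (our illustration, taking $\mathcal{X}=\mathbb{R}$ unconstrained): for $\Phi(x)=|x|$,
$$\mathrm{prox}_{\lambda\Phi}(x)=\mathrm{sign}(x)\max(|x|-\lambda,0),\qquad \Phi^{\lambda}(x)=\begin{cases}\dfrac{x^{2}}{2\lambda}, & |x|\le\lambda,\\[4pt] |x|-\dfrac{\lambda}{2}, & |x|>\lambda,\end{cases}$$
so $\Phi^{\lambda}$ is the Huber smoothing of $|x|$, and $\nabla\Phi^{\lambda}(x)=\lambda^{-1}\big(x-\mathrm{prox}_{\lambda\Phi}(x)\big)$ is Lipschitz even though $\Phi$ is not differentiable at $0$.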
For completeness, we introduce the definition of a sub-Gaussian random variable and a related lemma, which are important tools in the analysis.
###### Definition A.1 (Sub-Gaussian Random Variable)
A random variable $\eta$ is a zero-mean sub-Gaussian random variable with variance proxy $\sigma_\eta^2$ if $\mathbb{E}[\eta]=0$ and either of the following two conditions holds:
$$\text{(a) } \mathbb{E}[\exp(s\eta)]\le\exp\Big(\frac{\sigma_\eta^2 s^2}{2}\Big)\text{ for any }s\in\mathbb{R};\qquad \text{(b) } \mathbb{P}(|\eta|\ge t)\le 2\exp\Big(-\frac{t^2}{2\sigma_\eta^2}\Big)\text{ for any }t>0.$$
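A standard example for intuition (ours, not from the paper): any bounded zero-mean variable is sub-Gaussian. If $|\eta|\le B$ almost surely and $\mathbb{E}[\eta]=0$, then Hoeffding's lemma gives $\mathbb{E}[\exp(s\eta)]\le\exp(B^2s^2/2)$ for all $s\in\mathbb{R}$, so $\eta$ satisfies condition (a) with variance proxy $\sigma_\eta^2=B^2$.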
We use the following McDiarmid’s inequality to show that a random variable is sub-Gaussian.
###### Lemma A.1 (McDiarmid’s inequality)
Let $\eta_1,\dots,\eta_n$ be independent random variables. Let $h$ be any function with the $(c_1,\dots,c_n)$-bounded differences property: for every $i$ and every pair of tuples $(\eta_1,\dots,\eta_n)$ and $(\eta'_1,\dots,\eta'_n)$ that differ only in the $i$-th coordinate ($\eta_j=\eta'_j$ for all $j\neq i$), we have
$$\big|h(\eta_1,\dots,\eta_n)-h(\eta'_1,\dots,\eta'_n)\big|\le c_i.$$
For any $t>0$, it holds that
$$\mathbb{P}\Big(\big|h(\eta_1,\dots,\eta_n)-\mathbb{E}\,h(\eta_1,\dots,\eta_n)\big|\ge t\Big)\le 2\exp\Big(-\frac{2t^2}{\sum_{i=1}^n c_i^2}\Big).$$
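As a sanity check (an illustrative special case, not from the paper): for the sample mean $h(\eta_1,\dots,\eta_n)=\frac{1}{n}\sum_{i=1}^n\eta_i$ of variables taking values in $[a,b]$, each $c_i=(b-a)/n$, so McDiarmid's inequality reduces to Hoeffding's inequality $\mathbb{P}\big(|\bar\eta-\mathbb{E}\bar\eta|\ge t\big)\le 2\exp\big(-\frac{2nt^2}{(b-a)^2}\big)$.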
###### Lemma A.2 (Properties of Φ and Φλ, Restated)
In the NC-SC setting ($\mu>0$), both $\Phi$ and $\Phi_S$ are $L_\Phi$-smooth with $L_\Phi=(1+\kappa)L$ and the condition number $\kappa=L/\mu$; both $y^*(\cdot)$ and $y_S^*(\cdot)$ are $\kappa$-Lipschitz continuous, and $\nabla\Phi(x)=\nabla_x F(x,y^*(x))$. In the NC-C setting ($\mu=0$), the primal function $\Phi$ is $L$-weakly convex, and its Moreau envelope $\Phi^\lambda$ with $\lambda<1/L$ is differentiable and Lipschitz smooth; also
$$\nabla\Phi^\lambda(x)=\lambda^{-1}(x-\hat x),\qquad \big\|\nabla\Phi^\lambda(x)\big\|\ge\mathrm{dist}\big(0,\partial\Phi(\hat x)\big),\quad (13)$$
where $\hat x\triangleq\mathrm{prox}_{\lambda\Phi}(x)$.
For completeness, we formally define the stationary point here. Note that the generalized gradient is defined on $\mathcal{X}$ while the Moreau envelope is defined on the whole domain.
###### Definition A.2 (Stationary Point)
Let $\lambda=1/L$; for an $L$-smooth function $\Phi$, we call a point $x$ an $\epsilon$-stationary point of $\Phi$ if $\|G_\Phi(x)\|\le\epsilon$, where $G_\Phi$ is the gradient mapping (or generalized gradient) defined as $G_\Phi(x)\triangleq\lambda^{-1}\big(x-P_{\mathcal{X}}(x-\lambda\nabla\Phi(x))\big)$; for an $L$-weakly convex function $\Phi$, we say a point $x$ is an $\epsilon$-(nearly)-stationary point of $\Phi$ if $\|\nabla\Phi^{1/(2L)}(x)\|\le\epsilon$.
## Appendix B Proof of Theorem 3.1
Proof To derive the desired generalization bounds, we take an $\bar\epsilon$-net $\{x_1,\dots,x_K\}$ on $\mathcal{X}$ so that for any $x\in\mathcal{X}$ there exists an $x_k$ such that $\|x-x_k\|\le\bar\epsilon$. Note that such an $\bar\epsilon$-net exists with $K=O\big((D_{\mathcal{X}}/\bar\epsilon)^d\big)$ for compact $\mathcal{X}$ (kleywegt2002sample). Utilizing the definition of the $\bar\epsilon$-net, we have
Emaxx∈X∥∇ΦS(x)−∇Φ(x)∥≤Emaxx∈X[∥∇ΦS(x)−∇ΦS(xk | 2022-08-12 00:32:25 | {"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.896248459815979, "perplexity": 962.4400590935761}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-33/segments/1659882571536.89/warc/CC-MAIN-20220811224716-20220812014716-00301.warc.gz"} |
https://blender.stackexchange.com/questions/193947/align-active-camera-to-view-not-working-in-2-9 | # Align Active Camera to View - Not working in 2.9
Align Active Camera to View
I used to use this command all the time in 2.79. Now in 2.8 it's not working anymore - greyed out in the menu.
Why might this be?
For anyone who doesn't know - this operation is useful because you can use the Viewport to approximate what you want framed in the shot, then place the active Camera there (as a starting point before fine-tuning its position). I mean, it was useful when it still worked.
• @Gorgious Yes, I do. It shows up in my Outliner, and in the Viewport. When I press 0 I enter the Camera View. I can render and everything... everything except align the camera to my view in the Viewport. Sep 9 '20 at 6:57 | 2021-10-25 06:35:56 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.2567578852176666, "perplexity": 1670.8879343996318}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-43/segments/1634323587655.10/warc/CC-MAIN-20211025061300-20211025091300-00070.warc.gz"} |
http://gmatclub.com/forum/of-the-84-parents-who-attended-a-meeting-at-a-school-35-volunteered-111450.html?kudos=1 | Find all School-related info fast with the new School-Specific MBA Forum
It is currently 03 May 2016, 06:13
### GMAT Club Daily Prep
#### Thank you for using the timer - this advanced tool can estimate your performance and suggest more practice questions. We have subscribed you to Daily Prep Questions via email.
Customized
for You
we will pick new questions that match your level based on your Timer History
Track
every week, we’ll send you an estimated GMAT score based on your performance
Practice
Pays
we will pick new questions that match your level based on your Timer History
# Events & Promotions
###### Events & Promotions in June
Open Detailed Calendar
# Of the 84 parents who attended a meeting at a school, 35
Senior Manager
Of the 84 parents who attended a meeting at a school, 35
24 Mar 2011, 22:10
Of the 84 parents who attended a meeting at a school, 35 volunteered to supervise children during the school picnic and 11 volunteered both to supervise children during the picnic and to bring refreshments to the picnic. If the number of parents who volunteered to bring refreshments was 1.5 times the number of parents who neither volunteered to supervise children during the picnic nor volunteered to bring refreshments, how many of the parents volunteered to bring refreshments?
A) 25
B) 36
C) 38
D) 42
E) 45
Official Guide 12, Question 14, Page 22, Difficulty: 750.
SVP
Re: is Double Set martix a good option to solve dis question
27 Mar 2011, 00:51
This can be solved by using Double Matrix too, depends on what you're comfortable with. Please see the attached image.
So 2.5x + 24 = 84
=> 2.5x = 60
=> x = 600/25 = 24
=> 1.5x = 36
[Attachment: Parents_Meeting.png]
Director
Re: is Double Set martix a good option to solve dis question
13 Jun 2011, 07:56
neither volunteered to supervise children during the picnic nor volunteered to bring refreshments = x
number of parents volunteered to bring refreshments = 1.5x
so, 84 = 35 + 1.5x - 11 + x
2.5x = 60
x = 24
Thus, number of parents volunteered to bring refreshments = 24*1.5=36
Ans. B
Math Expert
Re: Of the 84 parents who attended a meeting at a school, 35
26 Feb 2012, 01:48
ElDiablo wrote:
Baten80 wrote:
Of the 84 parents who attended a meeting at a school, 35 volunteered to supervise children during the school picnic and 11 volunteered both to supervise children during the picnic and to bring refreshments to the picnic.If the number of parents who volunteered to bring refreshments was 1.5 times the number of parents who neither volunteered to supervise children during the picnic nor volunteered to bring refreshments, how many of the parents volunteered to bring refreshments?
A) 25
B) 36
C) 38
D) 42
E) 45
neither volunteered to supervise children during the picnic nor volunteered to bring refreshments = x
number of parents volunteered to bring refreshments = 1.5x
so, 84 = 35 + 1.5x - 11 + x
2.5x = 60
x = 24
Thus, number of parents volunteered to bring refreshments = 24*1.5=36
Ans. B
Why isn't the overlap subtracted also from the parents who bring refreshments?
When we are given 35 who volunteered to supervise, we subtract the overlap (11), so total = 24; why isn't the final number of parents who volunteered for refreshments = 1.5x - 11 ?
84 = (35-11) + (1.5x - 11) + x
???
Because the parents who bring refreshments (1.5x) consist of the parents who bring refreshments AND supervise (11) and of the parents who bring refreshments but do NOT supervise (1.5x-11). So if we subtract 11 (the parents who do both) we get the parents who bring refreshments but do NOT supervise, and not the total number of parents who bring refreshments.
Generally: {Total}={Supervise}+{Refreshments}-{Both}+{Neither} (notice that we subtract {Both} since it's already included twice: once in {Supervise} and once in {Refreshments});
Given: 84=35+1.5x-11+x --> x=24 --> 1.5x=36.
The matrix below might help to understand the question better:
[Attachment: Parents.PNG]
Notice that numbers in black are given and in red are calculated. We need the value of yellow box: 1.5x+(24+x)=84 --> x=24 --> 1.5x=36.
Hope it's clear.
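As a quick sanity check of the numbers above (an editor's addition, not part of the original thread), a few lines of Python brute-force the same fill:

```python
# Total = (supervise only) + (refreshments, which includes the 11 "both") + neither
for neither in range(85):
    refreshments = 1.5 * neither
    if (35 - 11) + refreshments + neither == 84:
        print(neither, refreshments)  # prints: 24 36.0
```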
SVP
Re: is Double Set martix a good option to solve dis question
27 Mar 2011, 01:08
Btw, this can be done without either as :
y = number of parents who volunteered to bring refreshments only
x = the number of parents who neither volunteered to supervise children during the picnic nor volunteered to bring refreshments
So 11 + y = 1.5x
and 84 - x = 11 + y + (35 - 11)
=> 84 - x = 1.5x + 24
=> 2.5x = 60
=> x = 24 and 1.5x = 36
Senior Manager
Re: Of the 84 parents who attended a meeting at a school, 35
01 Dec 2012, 23:48
You can use a chart to help you solve this problem as taught in the MGMAT.
= S stands for Supervise and ~S means not Supervise
= R stands for Bring Refreshments and ~R means not Bring Refreshments
So how to use the chart?
= Place in the rightmost corner the total of parents which is 84.
= We know that there are 35 who volunteered to Supervise but since we do not know whether they are R or ~R, we put 35 at the bottom of column S.
= We know that there are 11 who are both S and R. So we placed 11 accordingly.
= We know those who are R is 1.5 times of those who are ~R and ~S. So we use a variable x to denote that relationship.
From the chart we can construct our equation clearly.
24 + x + 1.5x = 84
Solve for x:
x = 24
We know R = 1.5x = 1.5(24) = 36
[Attachment: c.jpg]
Manager
Re: Of the 84 parents who attended a meeting at a school, 35
20 Mar 2013, 05:16
The language used in these problems confuses me!!
Example : 35 volunteered to supervise children during the school picnic ---> Only supervise + supervise & bring refreshments
However, how many of the parents volunteered to bring refreshments --> This ideally should mean: supervise & bring refreshments + Only bring refreshment.
But sadly the answer choice has value which = only bring refreshment
Any suggestions on how to go about?
Math Expert
Re: Of the 84 parents who attended a meeting at a school, 35
20 Mar 2013, 05:20
summer101 wrote:
The language used in these problems confuses me!!
Example : 35 volunteered to supervise children during the school picnic ---> Only supervise + supervise & bring refreshments
However, how many of the parents volunteered to bring refreshments --> This ideally should mean: supervise & bring refreshments + Only bring refreshment.
But sadly the answer choice has value which = only bring refreshment
Any suggestions on how to go about?
Not so, check here: of-the-84-parents-who-attended-a-meeting-at-a-school-35-volunteered-111450.html#p1050015
Veritas Prep GMAT Instructor
Re: Of the 84 parents who attended a meeting at a school, 35
20 Mar 2013, 06:46
summer101 wrote:
The language used in these problems confuses me!!
Example : 35 volunteered to supervise children during the school picnic ---> Only supervise + supervise & bring refreshments
However, how many of the parents volunteered to bring refreshments --> This ideally should mean: supervise & bring refreshments + Only bring refreshment.
But sadly the answer choice has value which = only bring refreshment
Any suggestions on how to go about?
You need to actively look for the word 'only' in these questions.
35 volunteered to supervise children: Overall, 35 people volunteered to supervise. This includes people who volunteered to do both - supervise and bring refreshments
35 volunteered to only supervise children: Does not include people who volunteered to do both.
How many of the parents volunteered to bring refreshments? Overall, how many volunteered to bring refreshments - it will include people who volunteered to do both. There is no ambiguity here.
How many of the parents volunteered to only bring refreshments? Now this is only refreshments.
Re-consider the calculations done above.
Manager
Re: is Double Set martix a good option to solve dis question
24 Mar 2011, 23:04
GMATD11 wrote:
14) of the 84 parents who attended a meeting at a school, 35 volunteered to supervise children during the school picnic and 11 volunteered both to supervise children during the picnic and to bring refreshments to the picnic. If the number of parents who volunteered to bring refreshments was 1.5 times the number of parents who neither volunteered to supervise children during the picnic nor volunteered to bring refreshments, how many of the parents volunteered to bring refreshments?
a) 25
b) 36
c) 38
d) 42
e) 45
Bring refreshment | Volunteered to Supervise | Total
11 | 35
Volunteered: x=? | y
Total: 11+x | 35+y | 84
11+x+35+y=84
x+y=38
x=1.5y
x=24
A
is dis the correct method to solve.
when to use double set matrix and when to use venn diagram for two type of information
Are you sure about the OA? I think the right answer should be 36. Here is how:
Total = Supervise + Refreshments + None - Both
We are given Total = 84, Supervise = 35, Both = 11 and Refreshments = 1.5*None
So, we get 84 = 35 + 1.5*None + None - 11, or None = 24
So, Refreshments = 1.5*24 = 36.
Veritas Prep GMAT Instructor
Re: is Double Set martix a good option to solve dis question
25 Mar 2011, 19:00
GMATD11 wrote:
14) of the 84 parents who attended a meeting at a school, 35 volunteered to supervise children during the school picnic and 11 volunteered both to supervise children during the picnic and to bring refreshments to the picnic. If the number of parents who volunteered to bring refreshments was 1.5 times the number of parents who neither volunteered to supervise children during the picnic nor volunteered to bring refreshments, how many of the parents volunteered to bring refreshments?
a) 25
b) 36
c) 38
d) 42
e) 45
when to use double set matrix and when to use venn diagram for two type of information
Most GMAT questions can be easily and quickly solved using Venn Diagrams. (Different people prefer different strategies but I have found that no matter how tricky the question wording is, once you draw the Venn diagram, it all seems very clear)
Check out the Venn diagram for this question:
[Attachment: Ques2.jpg]
Now, since it is given that x+11 = 1.5 (84-(24+11+x)), x = 25
So number of parents who volunteered to bring refreshments = x+11 = 25+11 = 36
Senior Manager
Re: is Double Set martix a good option to solve dis question
27 Mar 2011, 11:36
For me, the easiest way to solve questions similar to this one is by the formula
T=A+B-AB+NON
Manager
Re: is Double Set martix a good option to solve dis question
29 Mar 2011, 03:50
The Double Set Matrix draws the equation for you nicely. Yes! But, if you can solve it without the help of this tool.. Good for you..
The answer is 1.5 x 24 = 36!!
[Attachment: answer.png]
Intern
Re: Of the 84 parents who attended a meeting at a school, 35
25 Feb 2012, 22:13
Baten80 wrote:
neither volunteered to supervise children during the picnic nor volunteered to bring refreshments = x
number of parents volunteered to bring refreshments = 1.5x
so, 84 = 35 + 1.5x - 11 + x
2.5x = 60
x = 24
Thus, number of parents volunteered to bring refreshments = 24*1.5=36
Ans. B
Why isn't the overlap subtracted also from the parents who bring refreshments?
When we are given 35 who volunteered to supervise, we subtract the overlap (11), so total = 24; why isn't the final number of parents who volunteered for refreshments = 1.5x - 11 ?
84 = (35-11) + (1.5x - 11) + x
???
Manager
Re: Of the 84 parents who attended a meeting at a school, 35
23 Mar 2013, 11:49
The difficulty for this question is 750. Impossible! Then some of the questions I came across in the OG are at a 10,000 difficulty
Intern
Re: Of the 84 parents who attended a meeting at a school, 35
18 Jul 2015, 03:01
Hi,
I tried solving this problem exactly as per the method described.
Solving the equation that you have mentioned:
x+11 = 1.5 (84-(24+11+x))
x +11 = 1.5 (84-24-11-x)
x + 11 = 1.5 (49 - x)
Solving for x gives x=29.4
VeritasPrepKarishma wrote:
GMATD11 wrote:
14) of the 84 parents who attended a meeting at a school, 35 volunteered to supervise children during the school picnic and 11 volunteered both to supervise children during the picnic and to bring refreshments to the picnic.If the number of parents who volunteered to bring refreshments was 1.5 times the number of parents who neither volunteered to supervise children during the picnic nor volunteered to bring refreshments, how many of the parents volunteered to bring refreshments?
a) 25
b) 36
c) 38
d) 42
e) 45
when to use double set matrix and when to use venn diagram for two type of information
Most GMAT questions can be easily and quickly solved using Venn Diagrams. (Different people prefer different strategies but I have found that no matter how tricky the question wording is, once you draw the Venn diagram, it all seems very clear)
Check out the Venn diagram for this question:
[Attachment: Ques2.jpg]
Now, since it is given that
x+11 = 1.5 (84-(24+11+x))
x = 25
So number of parents who volunteered to bring refreshments = x+11 = 25+11 = 36
Manager
Re: Of the 84 parents who attended a meeting at a school, 35
08 Dec 2015, 11:53
GMATD11 wrote:
Of the 84 parents who attended a meeting at a school, 35 volunteered to supervise children during the school picnic and 11 volunteered both to supervise children during the picnic and to bring refreshments to the picnic.If the number of parents who volunteered to bring refreshments was 1.5 times the number of parents who neither volunteered to supervise children during the picnic nor volunteered to bring refreshments, how many of the parents volunteered to bring refreshments?
A) 25
B) 36
C) 38
D) 42
E) 45
Official Guide 12, Question 14, Page 22, Difficulty: 750.
Again, unless ONLY is mentioned DO NOT ASSUME that 35 does not include 11 who volunteered to do both (supervise as well as bring refreshments). CRITICAL for such question types.
Display posts from previous: Sort by | 2016-05-03 13:13:49 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.21169494092464447, "perplexity": 8444.051827573056}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-18/segments/1461860121534.33/warc/CC-MAIN-20160428161521-00200-ip-10-239-7-51.ec2.internal.warc.gz"} |
http://www.cs.umanitoba.ca/~chrisib/vision/ | Computer Vision
Computer vision is one of the fundamental tasks for modern robotics. For four years I was primarily responsible for designing and maintaining the software used for our FIRA and RoboCup teams, including the vision processing.
Given this experience with applied computer vision, I was hired by Lumo Interactive (formerly Po-Mo), a Winnipeg interactive art and multimedia company that specializes in interactive projection installations. I was hired specifically as a computer vision specialist to work on the Lumo projector toy's motion-tracking system.
Robot Vision
The DARwIn-OP robots I used for FIRA and RoboCup are relatively low-powered in terms of hardware. They use a single-core, 1GHz Atom CPU (newer models feature a dual-core CPU), with relatively little RAM compared to contemporary smartphones. This single CPU needed to manage all of the forward- and inverse-kinematics calculations for motions, decision-making, balancing calculations, and the vision. Of these tasks, vision is by far the most computationally intensive, and the most likely to run into bottleneck issues.
This possibility of a bottleneck necessitated the use of robust-yet-cheap vision algorithms in order to track objects of interest during competitions.
Jimmy practicing the ladder-climbing event, with the robot's PoV
I decided to categorize objects of interest into four categories:
1. single-blob: a unique, closed shape in the environment (e.g. soccer ball),
2. multi-blob: a collection of similar but separate blobs (e.g. barriers in the obstacle course),
3. single-line: a single, linear object in the environment (e.g. marathon tape), and
4. multi-line: a collection of linear objects (e.g. ladder rungs, soccer field lines).
In practice, single-line was rarely used; I developed a special-purpose, blob-based vision system for the marathon tape.
When I was working on the robotics competitions the environments were tailored to be high-visibility, with generally-uniform colours for most objects. Therefore, the vision system was designed primarily around colour recognition, with shape as a secondary consideration. Multi-coloured objects (such as the sprint target) were defined as a set of distinct blobs, with post-processing to assemble them into the larger target.
Blob Detection
The first year I worked on the robots (2011) we used very simple colour thresholding and blob detection, implemented with OpenCV. We would define maximum and minimum YUV thresholds for the colour of interest and create a binary image based on those values. We would then use OpenCV's built-in contour detection to determine the bounding boxes of objects in the scene.
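For reference, a minimal OpenCV sketch of that first pipeline (my reconstruction from the description above; the YUV bounds are placeholders, not our actual calibration values):

```python
import cv2
import numpy as np

frame = cv2.imread("frame.png")               # placeholder input image
yuv = cv2.cvtColor(frame, cv2.COLOR_BGR2YUV)  # calibration was done in YUV

lo = np.array([0, 100, 100], dtype=np.uint8)    # placeholder min Y, U, V
hi = np.array([255, 160, 160], dtype=np.uint8)  # placeholder max Y, U, V
mask = cv2.inRange(yuv, lo, hi)                 # strict colour threshold

# OpenCV >= 4 returns (contours, hierarchy); 3.x returns an extra image first.
contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
boxes = [cv2.boundingRect(c) for c in contours]  # (x, y, w, h) per detected blob
```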
This system was trivial to implement, which gave us more time to focus on the rest of the robot's software. 2011 was the first time the DARwIn-OP had been used at a competition, so there was a steep learning curve for all of us involved.
Unfortunately this system was also very prone to failure. Because of the strict thresholding we had to be very careful when calibrating the colours. Changes in lighting would break the vision.
Given these difficulties I made the decision to completely rewrite the vision system using a more complicated, but ultimately more robust algorithm.
Scanline
The scanline algorithm had been used on some of the AA Lab's older robots (e.g. Bioloids with Nokia cellphones as the camera/brain), and had been shown to be effective for humanoid robots in competitions.
The algorithm uses a combination of horizontal scanline segmentation and flood-fills. Optionally, the segmentation can be subsampled to increase throughput.
Like the blob detection, the algorithm uses a pre-configured max/min YUV range for the colour of interest. Unlike the blob detection algorithm, this range can safely be made over-broad without hugely negatively impacting the performance.
Step one involves walking across each pixel row, looking for $$n$$ contiguous pixels whose YUV values are within the defined range and with a maximum per-channel difference between pixels that is less than or equal to $$\epsilon_1$$.
If such a line of pixels is found, the average colour $$\bar{c}$$ of the line is recorded, and we perform a 4-connected (or 8-connected, depending on preference) flood-fill of the region, colouring pixels whose value is within $$\epsilon_2$$ of $$\bar{c}$$.
From the flood-filled region we can extract the bounding box/aspect ratio of the object as well as its compactness ($$\frac{\mbox{number of filled pixels}}{\mbox{area of bounding box}}$$). These pieces of information are further used in post-processing for filtering out false-positives (e.g. a round ball should have a roughly square aspect ratio and a known compactness).
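A condensed Python sketch of the idea (a simplification of the robot code — no subsampling, the two ε tests collapsed to a single max-per-channel difference, and plain BFS for the fill):

```python
import numpy as np
from collections import deque

def scanline_blobs(yuv, lo, hi, n=4, eps2=20):
    """Seed on n contiguous in-range pixels in a row, then 4-connected
    flood-fill pixels within eps2 (per channel) of the seed line's mean."""
    img = yuv.astype(np.int32)                    # avoid uint8 wrap-around
    h, w, _ = img.shape
    in_range = np.all((img >= lo) & (img <= hi), axis=2)
    filled = np.zeros((h, w), dtype=bool)
    blobs = []
    for y in range(h):
        run = 0
        for x in range(w):
            run = run + 1 if (in_range[y, x] and not filled[y, x]) else 0
            if run < n:
                continue
            seed = img[y, x - n + 1:x + 1].mean(axis=0)  # mean colour of the run
            q, xs, ys = deque([(y, x)]), [], []
            while q:
                cy, cx = q.popleft()
                if not (0 <= cy < h and 0 <= cx < w) or filled[cy, cx]:
                    continue
                if np.abs(img[cy, cx] - seed).max() > eps2:
                    continue
                filled[cy, cx] = True
                xs.append(cx); ys.append(cy)
                q.extend([(cy + 1, cx), (cy - 1, cx), (cy, cx + 1), (cy, cx - 1)])
            run = 0
            if xs:
                bw, bh = max(xs) - min(xs) + 1, max(ys) - min(ys) + 1
                blobs.append((min(xs), min(ys), bw, bh, len(xs) / (bw * bh)))
    return blobs  # (x, y, w, h, compactness) per blob
```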
In practice this algorithm worked very well for detecting single- and multi-blob objects in the scene. The algorithm was so successful that it remained our default object detection system from 2012 through to 2015 when I stopped working with the robots. (The algorithm may still be in use -- I am unaware of the specifics of the robots' current software.)
Most importantly, because the algorithm used dynamic thresholds based on an initial, over-calibrated range, the vision could compensate for lighting changes that occur outdoors; while working on the hockey project, the robot was able to identify a red ball indoors in the lab and the same ball in direct sunlight on an outdoor skating rink without the need for recalibration. Moving from direct sunlight to deep shadow likewise necessitated no re-calibration.
Line Detection
For detecting linear targets (defined as long, thin, straight segments) we needed something better than the scanline detector. While linear targets could be identified as blobs using some combination of the aspect ratio and compactness, for a line we really want just the two endpoints of the segment, not an entire bounding box.
The best solution we found for this problem was OpenCV's Probabilistic Hough Line Detector.
This algorithm was relatively simple to implement; as with blob detection we define the max and min YUV values for the colour of interest, threshold the image based on these values, apply a simple edge detector (e.g. Canny), and pass the binary image to the OpenCV function. The output of this is literally just the endpoints of all possible line segments that meet our minimum length and angle criteria.
This output tends to be over-numerous, so as a final step I use a bucket-based algorithm to group similarly-inclined lines in the same general area together, averaging together their endpoints.
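In OpenCV terms the pipeline looks roughly like this (a sketch; the thresholds and Hough parameters are illustrative, not our tuned values):

```python
import cv2
import numpy as np

def detect_line_segments(yuv, lo, hi):
    mask = cv2.inRange(yuv, lo, hi)        # hard YUV colour threshold
    edges = cv2.Canny(mask, 50, 150)       # simple edge detector
    segs = cv2.HoughLinesP(edges, rho=1, theta=np.pi / 180, threshold=40,
                           minLineLength=30, maxLineGap=10)
    # Each entry is (x1, y1, x2, y2); bucket-grouping by angle/position follows.
    return [] if segs is None else [tuple(s[0]) for s in segs]
```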
Sprint Targets
For the sprint event at FIRA, teams are allowed to place a small, coloured marker at the end of their lane to assist in navigation.
Between 2011 and 2014 we went through three different marker designs.
In 2011 our target was a simple, flat sheet of card. This was effective for the forward leg of the sprint, but failed when used for the reverse leg. Without any depth the robot was unable to determine its position left-right within the lane, and frequently walked diagonally out of its lane.
In 2012 I designed a new target made of two diagonal panels in contrasting colours with a two-coloured strip across the front.
Our 2012 sprint target in-use. Still some bugs to work out.
The idea was that the robot would be able to identify the edge between the two colours on the front strip and the edge between the background colours. By shuffling sideways such that these edges are lined up the robot would stay in its lane.
In practice, this did not work as planned. The theory was sound, but the four coloured areas (six if we consider that the top and bottom panels are cut in half by the centre strip) were very small and fuzzy. Add to this poor lighting at the venue in Bristol, and the fact that we had to re-colour one of the panels from green to red (which had low contrast with the pink) and the execution was, in general, not what we'd hoped for.
That said, we learned a lot from the experience in Bristol, and designed a new, more robust target for the 2013 competition.
Our 2013-2014 target was based on a design from Plymouth University. The design is a simple chair-shape, with two large, single-colour panels. The chair shape ensures one panel is located above and behind the other.
Our 2013-2014 sprint target in use. We finished the sprint in 4th place in 2013.
We keep the robot centred in its lane by ensuring that the top and bottom panels' centres are within a narrow tolerance of each other.
In practice this design proved to be much more reliable than the 2012 design. The simplified colour calibration (two colours instead of 4), combined with larger panels and less occlusion led to a winning design. Using this target we took 4th place in the sprint at the 2013 HuroCup competition, helping us secure first-place overall at the event.
Marathon
The marathon event requires the robot to follow a line of tape the entire length of the track. The track length increases every year, and recent years have added breaks in the tape. This section describes the work I did on the marathon from 2011 to 2013 -- before breaks were added to the tape.
The marathon course is not a single linear track. It contains many curves and hard corners (up to 90 degree corners, and a minimum curve radius of 1m). While line-detection algorithms like HoughLines could be used, our implementation uses hard thresholding, which, as previously discussed, is not suitable for dynamic lighting. Since the marathon event happens outdoors we needed a vision system that would work in variable lighting without the need for recalibration.
Given these requirements, I developed a blob-based system for tracking the marathon tape. My algorithm divides the entire image into thin slices and treats each slice as an image, looking for a multi-blob object in that slice whose colour matches our calibration colours.
Once blobs in each slice have been detected, I connect blobs in adjacent slices to build a multi-segment line with minimal deviation in the angle of adjacent segments. (This is a fancy way of saying I try to build the straightest line possible out of the blobs that were detected.)
In order to detect hard corners I divide the image not only into horizontal slices (which will build a vertical line), but also into vertical slices in the left and right halves of the frame. These vertical slices will produce horizontal lines going to the left or right.
This gives us at most 3 multi-segment candidate lines. Averaging these lines we can calculate the average angle and distance to the line. These values are put into a set of PID controllers to adjust the robot's stride length/speed, bearing, and lateral stride amount.
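The control side is ordinary PID; a minimal sketch with placeholder gains (not our tuned competition values):

```python
class PID:
    """Textbook PID; one instance each for stride length/speed,
    bearing, and lateral stride amount."""
    def __init__(self, kp, ki, kd):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.integral = 0.0
        self.prev_err = None

    def update(self, err, dt):
        self.integral += err * dt
        deriv = 0.0 if self.prev_err is None else (err - self.prev_err) / dt
        self.prev_err = err
        return self.kp * err + self.ki * self.integral + self.kd * deriv

bearing = PID(kp=0.8, ki=0.0, kd=0.1)          # placeholder gains
turn_cmd = bearing.update(err=0.2, dt=1 / 30)  # err = average line angle (rad)
```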
In general, the robot will walk faster when the tape is straight and we do not register any corners. The robot will take corners more slowly, as turning at speed can cause the robot to fall over more easily.
Practicing the marathon in the hallways of UofM (2013)
This vision system worked very well in practice; we took 4th place in the 2012 marathon (we would have done better, but the robot's battery went dead about halfway through the race). In 2011 we took second place with an similar algorithm that relied on hard thresholds and blob detection instead of scanline/flood-fill. | 2022-01-18 21:54:03 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.3931336998939514, "perplexity": 1904.755462225443}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-05/segments/1642320301063.81/warc/CC-MAIN-20220118213028-20220119003028-00133.warc.gz"} |
http://mathhelpforum.com/differential-equations/132640-help-second-order-linear-ordinary-differential-equation.html | Thread: help Second-order linear ordinary differential equation
1. help Second-order linear ordinary differential equation
find the second solution that is linearly independent to the solution given for
(1-x cot x)y'' - xy' +y = 0
y1(x) = x
0<x<pi
hint : integration of (x/(1-x cot x)) dx = ln |x cos x - sin x|
I've tried to rearrange the equation in the form:
but how can I get y2(x) as a function of x?
I get stuck here... please help me.
2. A general method to find, once you know a particular solution of a second order linear 'incomplete' DE, a second solution independent from it is illustrated here...
http://www.mathhelpforum.com/math-he...tion-case.html
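In outline, applying that method here (a sketch — worth re-deriving yourself): in standard form the equation is $y'' - \frac{x}{1-x\cot x}\,y' + \frac{1}{1-x\cot x}\,y = 0$, so with $P(x) = -\frac{x}{1-x\cot x}$ and $y_1 = x$, reduction of order gives

$$y_2 = y_1\int\frac{e^{-\int P\,dx}}{y_1^{2}}\,dx = x\int\frac{x\cos x-\sin x}{x^{2}}\,dx = x\cdot\frac{\sin x}{x} = \sin x,$$

using the given hint for $e^{-\int P\,dx}$ and the observation that $\frac{d}{dx}\big(\frac{\sin x}{x}\big)=\frac{x\cos x-\sin x}{x^{2}}$. You can check directly that $y_2=\sin x$ solves $(1-x\cot x)y''-xy'+y=0$ and is independent of $y_1=x$ on $(0,\pi)$.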
Kind regards
$\chi$ $\sigma$ | 2016-10-24 02:01:40 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 2, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7211744785308838, "perplexity": 2861.9894662900433}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-44/segments/1476988719463.40/warc/CC-MAIN-20161020183839-00544-ip-10-171-6-4.ec2.internal.warc.gz"} |
https://www.studypool.com/discuss/1089210/use-the-product-rule-to-simplify-the-radical?free | ##### use the product rule to simplify the radical
square root of 60
Jul 23rd, 2015
--------------------------------------------------------------------------------------------------------------
According to the product rule for radicals, if $\sqrt[n]{a}$ and $\sqrt[n]{b}$ are real numbers and n is a natural number, then
$\sqrt[n]{a}\times\sqrt[n]{b}=\sqrt[n]{ab}$
The prime factorisation of 60 is
60 = 2 x 2 x 3 x 5 = 4 x 15
$\sqrt{60}=\sqrt{4\times15}$
Applying the product rule for radicals:
$\sqrt{60}=\sqrt{4}\times\sqrt{15}=2\sqrt{15}$
ANSWER: $\sqrt{60}=2\sqrt{15}$
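The same recipe works for any number with a perfect-square factor — for example (an extra illustration): $\sqrt{72}=\sqrt{36\times2}=\sqrt{36}\times\sqrt{2}=6\sqrt{2}$.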
--------------------------------------------------------------------------------------------------------------
Jul 24th, 2015
thanks i see what i did wrong
Jul 24th, 2015
You are welcome :)
Jul 24th, 2015
check_circle | 2017-03-23 10:54:27 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 5, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5866443514823914, "perplexity": 3595.531934925155}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-13/segments/1490218186891.75/warc/CC-MAIN-20170322212946-00388-ip-10-233-31-227.ec2.internal.warc.gz"} |
https://www.physicsforums.com/insights/bell-states-and-conservation-of-spin-angular-momentum/ | # Exploring Bell States and Conservation of Spin Angular Momentum
In a recent thread, I outlined how to compute the correlation function for the Bell basis states
$$
\begin{split}|\psi_-\rangle &= \frac{|ud\rangle - |du\rangle}{\sqrt{2}}\\
|\psi_+\rangle &= \frac{|ud\rangle + |du\rangle}{\sqrt{2}}\\
|\phi_-\rangle &= \frac{|uu\rangle - |dd\rangle}{\sqrt{2}}\\
|\phi_+\rangle &= \frac{|uu\rangle + |dd\rangle}{\sqrt{2}} \end{split}\label{BellStates}
$$
when they represent spin states. The first state ##|\psi_-\rangle## is called the “spin singlet state” and it represents a total spin angular momentum of zero (S = 0) for the two particles involved. The other three states are called the “spin triplet states” and they each represent a total spin angular momentum of one (S = 1, in units of ##\hbar = 1##). In all four cases, the entanglement represents the conservation of spin angular momentum for the process creating the state. The “correlation function,” or “correlation” for short, is simply the average of the product of the two outcomes for the two spin measurements ##\sigma_1## and ##\sigma_2## in each trial of the experiment. For the spin singlet state we would have
$$\langle \psi_-|\sigma_1\sigma_2|\psi_-\rangle$$
for example. To remind you of how this notation works, we will be using the Pauli spin matrices to construct our ##\sigma_i## operators. In the eigenbasis of ##\sigma_z## the Pauli spin matrices are
$$\sigma _z = \begin{pmatrix} 1 & 0\\0 & -1 \end{pmatrix} \label{spinz}$$
$$\sigma _x = \begin{pmatrix} 0 & 1\\1 & 0 \end{pmatrix} \label{spinx}$$
$$\sigma _y = \begin{pmatrix} 0 & -i\\i & 0 \end{pmatrix} \label{spiny}$$
so that
$$\sigma _z |u\rangle = \begin{pmatrix} 1 & 0\\0 & -1 \end{pmatrix} \begin{pmatrix} 1 \\0 \end{pmatrix} = \begin{pmatrix} 1 \\0 \end{pmatrix} = |u\rangle \label{spinzOnU}$$
$$\sigma _z |d\rangle = \begin{pmatrix} 1 & 0\\0 & -1 \end{pmatrix} \begin{pmatrix} 0 \\1 \end{pmatrix} = -\begin{pmatrix} 0 \\1 \end{pmatrix} = -|d\rangle \label{spinzOnD}$$
$$\sigma _x |u\rangle = \begin{pmatrix} 0 & 1\\1 & 0 \end{pmatrix} \begin{pmatrix} 1 \\0 \end{pmatrix} = \begin{pmatrix} 0 \\1 \end{pmatrix} = |d\rangle \label{spinxOnU}$$
That is, using the Pauli spin matrices above with ##|u\rangle = \left ( \begin{array}{rr} 1 \\ 0\end{array} \right )## and ##|d\rangle = \left ( \begin{array}{rr} 0 \\ 1\end{array} \right )##, we see that ##\sigma_z|u\rangle = |u\rangle##, ##\sigma_z|d\rangle = -|d\rangle##, ##\sigma_x|u\rangle = |d\rangle##, ##\sigma_x|d\rangle = |u\rangle##, ##\sigma_y|u\rangle = i|d\rangle##, and ##\sigma_y|d\rangle = -i|u\rangle##. The juxtaposed notation simply means ##\sigma_x\sigma_z|ud\rangle = -|dd\rangle## and ##\sigma_x\sigma_y|ud\rangle = -i|du\rangle## etc. Essentially, this notation is simply ignoring the tensor product sign ##\otimes##, so that ##\left(\sigma_x \otimes \sigma_z \right)\left(|u\rangle \otimes |d\rangle\right) = \sigma_x\sigma_z|ud\rangle##. It will be obvious which spin matrix is acting on which Hilbert space vector via the juxtaposition. If I flip the orientation of a vector from right pointing (ket) to left pointing (bra) or vice-versa, I transpose and take the complex conjugate. For example, if ##|A\rangle =i\begin{pmatrix} 1\\0 \end{pmatrix} = i|u\rangle##, then ##\langle A| = -i\begin{pmatrix} 1\;\; 0 \end{pmatrix} = -i\langle u|##. That means ##|A\rangle\langle A|## is a matrix. For example,
$$|u\rangle\langle u| = \begin{pmatrix} 1\\0 \end{pmatrix} \begin{pmatrix} 1\;\; 0 \end{pmatrix} = \begin{pmatrix} 1\;\; 0 \\0\;\; 0 \end{pmatrix} \label{matrixFmVec}$$
Finally, all spin matrices have the same eigenvalues of ##\pm 1## and I will denote the corresponding eigenvectors as ##|u\rangle## and ##|d\rangle## for spin up and spin down, respectively. Thus, any spin matrix can be written as ##(+1)|u\rangle\langle u| + (-1)|d\rangle\langle d|##. In the eigenbasis of that spin matrix, this looks like ##\sigma_z##, since all three spin matrices have the same eigenvalues ##\pm 1##. If you write your spin matrix in the eigenbasis of another spin matrix, then it looks different, e.g., ##\sigma_x## and ##\sigma_y## above are written in the eigenbasis of ##\sigma_z## above.
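As a quick numerical sanity check (an addition, not part of the original article), the following NumPy sketch verifies these Pauli actions and the decomposition ##(+1)|u\rangle\langle u| + (-1)|d\rangle\langle d|##:

```python
import numpy as np

u = np.array([1, 0], dtype=complex)  # |u>
d = np.array([0, 1], dtype=complex)  # |d>
sz = np.array([[1, 0], [0, -1]], dtype=complex)
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)

# Pauli actions on the sigma_z eigenbasis, as quoted in the text
assert np.allclose(sz @ u, u) and np.allclose(sz @ d, -d)
assert np.allclose(sx @ u, d) and np.allclose(sx @ d, u)
assert np.allclose(sy @ u, 1j * d) and np.allclose(sy @ d, -1j * u)

# Any spin matrix is (+1)|u><u| + (-1)|d><d| in its own eigenbasis
assert np.allclose(np.outer(u, u.conj()) - np.outer(d, d.conj()), sz)
print("Pauli checks passed")
```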
If Alice is making her spin measurement ##\sigma_1## in the ##\hat{a}## direction and Bob is making his spin measurement ##\sigma_2## in the ##\hat{b}## direction (Figure 1), we have
$$
\begin{aligned}
&\sigma_1 = \hat{a}\cdot\vec{\sigma}=a_x\sigma_x + a_y\sigma_y + a_z\sigma_z \\
&\sigma_2 = \hat{b}\cdot\vec{\sigma}=b_x\sigma_x + b_y\sigma_y + b_z\sigma_z
\end{aligned}\label{sigmas}
$$
###### Figure 1. Alice and Bob making spin measurements on a pair of spin-entangled particles with their SG magnets and detectors. In this particular case, the plane of conserved spin angular momentum is the x-z plane.
Using this formalism and the fact that ##\{|uu\rangle,|ud\rangle,|du\rangle,|dd\rangle\}## is an orthonormal set (##\langle uu|uu\rangle = 1##, ##\langle uu|ud\rangle = 0##, ##\langle du|du\rangle = 1##, etc.), we see that the correlation functions are given by
$$
\label{gencorrelations}
\begin{aligned}
&\langle\psi_-|\sigma_1\sigma_2|\psi_-\rangle = -a_xb_x - a_yb_y - a_zb_z\\
&\langle\psi_+|\sigma_1\sigma_2|\psi_+\rangle = a_xb_x + a_yb_y - a_zb_z\\
&\langle\phi_-|\sigma_1\sigma_2|\phi_-\rangle = -a_xb_x + a_yb_y + a_zb_z\\
&\langle\phi_+|\sigma_1\sigma_2|\phi_+\rangle = a_xb_x - a_yb_y + a_zb_z
\end{aligned}
$$
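These four correlation functions are easy to verify numerically; the following sketch (an addition, not from the original article) checks them for randomly chosen unit vectors ##\hat{a}## and ##\hat{b}##:

```python
import numpy as np

u = np.array([1, 0], dtype=complex)
d = np.array([0, 1], dtype=complex)
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)

bell = {
    "psi-": (np.kron(u, d) - np.kron(d, u)) / np.sqrt(2),
    "psi+": (np.kron(u, d) + np.kron(d, u)) / np.sqrt(2),
    "phi-": (np.kron(u, u) - np.kron(d, d)) / np.sqrt(2),
    "phi+": (np.kron(u, u) + np.kron(d, d)) / np.sqrt(2),
}

rng = np.random.default_rng(0)
a = rng.normal(size=3); a /= np.linalg.norm(a)   # direction a-hat
b = rng.normal(size=3); b /= np.linalg.norm(b)   # direction b-hat
s1 = a[0]*sx + a[1]*sy + a[2]*sz                 # sigma_1 = a . sigma
s2 = b[0]*sx + b[1]*sy + b[2]*sz                 # sigma_2 = b . sigma

expected = {
    "psi-": -a[0]*b[0] - a[1]*b[1] - a[2]*b[2],
    "psi+":  a[0]*b[0] + a[1]*b[1] - a[2]*b[2],
    "phi-": -a[0]*b[0] + a[1]*b[1] + a[2]*b[2],
    "phi+":  a[0]*b[0] - a[1]*b[1] + a[2]*b[2],
}
for name, psi in bell.items():
    corr = np.real(psi.conj() @ np.kron(s1, s2) @ psi)
    assert np.isclose(corr, expected[name]), name
print("all four correlation functions verified")
```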
We now explore the conservation being depicted by the Bell spin states. Let us start with the spin singlet state ##|\psi_-\rangle##.
If we transform our basis per
$$
\label{Ytransform}
\begin{aligned}
&|u\rangle \rightarrow \cos(\Theta)|u\rangle + \sin(\Theta)|d\rangle \\
&|d\rangle \rightarrow -\sin(\Theta)|u\rangle + \cos(\Theta)|d\rangle
\end{aligned}
$$
where ##\Theta## is the angle in Hilbert space, then ##|\psi_-\rangle \rightarrow |\psi_-\rangle##. In other words, ##|\psi_-\rangle## is invariant with respect to this SU(2) transformation. Constructing the corresponding spin measurement operator from these transformed up and down vectors gives
$$|u\rangle\langle u| - |d\rangle\langle d| = \begin{pmatrix} \cos(2\Theta) & \sin(2\Theta)\\\sin(2\Theta) & -\cos(2\Theta) \end{pmatrix} = \cos(2\Theta)\sigma_z + \sin(2\Theta)\sigma_x \label{sigmaOp1}$$
So, we see that the invariance of the state under this Hilbert space SU(2) transformation means the experimental outcomes are invariant under SO(3) rotation of the Stern-Gerlach (SG) magnets in the x-z plane in real space. Specifically, ##|\psi_-\rangle## says that when the SG magnets are aligned in the z direction the outcomes are always opposite (##\frac{1}{2}## ud and ##\frac{1}{2}## du). Since ##|\psi_-\rangle## has that same functional form under an SU(2) transformation in Hilbert space representing an SO(3) rotation in the x-z plane per Eqs. (\ref{Ytransform}) & (\ref{sigmaOp1}), the outcomes are always opposite (##\frac{1}{2}## ud and ##\frac{1}{2}## du) for aligned SG magnets in the x-z plane. That is the conservation associated with this SU(2) symmetry. When the angle in Hilbert space is ##\Theta## the angle of the rotated SG magnets in the x-z plane is ##2\Theta##, which we will denote as ##\theta## (Figure 1). Notice that when ##\Theta = 45^o##, our operator is ##\sigma_x##, i.e., we have rotated to the eigenbasis of ##\sigma_x## from the eigenbasis of ##\sigma_z##.
There is another SU(2) transformation that leaves ##|\psi_-\rangle## invariant
$$
\label{Xtransform}
\begin{aligned}
&|u\rangle \rightarrow \cos(\Theta)|u\rangle + i\sin(\Theta)|d\rangle \\
&|d\rangle \rightarrow i\sin(\Theta)|u\rangle + \cos(\Theta)|d\rangle
\end{aligned}
$$
Constructing our spin measurement operator from these states gives us
$$|u\rangle\langle u| - |d\rangle\langle d| = \begin{pmatrix} \cos(2\Theta) & -i\sin(2\Theta)\\i\sin(2\Theta) & -\cos(2\Theta) \end{pmatrix} = \cos(\theta)\sigma_z + \sin(\theta)\sigma_y$$
So, we see that the invariance of the state under this Hilbert space SU(2) transformation means the experimental outcomes are invariant under an SO(3) rotation of the Stern-Gerlach (SG) magnets in the y-z plane. Notice that when ##\Theta = 45^o##, our spin operator is ##\sigma_y##, i.e., we have rotated to the eigenbasis of ##\sigma_y## from the eigenbasis of ##\sigma_z##.
Finally, we see that ##|\psi_-\rangle## is invariant under the third SU(2) transformation
$$
\label{Ztransform}
\begin{aligned}
&|u\rangle \rightarrow (\cos(\Theta) + i\sin(\Theta))|u\rangle \\
&|d\rangle \rightarrow (\cos(\Theta) - i\sin(\Theta))|d\rangle
\end{aligned}
$$
since this takes ##|ud\rangle \rightarrow |ud\rangle##. Constructing our spin measurement operator from these transformed vectors gives us
$$|u\rangle\langle u| - |d\rangle\langle d| = \left ( \begin{array}{rr} 1 & 0\\0 & -1 \end{array} \right ) = \sigma_z$$
In other words, Eq. (\ref{Ytransform}) is the Hilbert space SU(2) transformation that represents an SO(3) rotation about the y axis in real space and can be written
$$\left ( \begin{array}{rr} u \\ d\end{array} \right ) \rightarrow \left (\begin{array}{rr} \cos(\Theta) & \sin(\Theta)\\-\sin(\Theta) & \cos(\Theta) \end{array} \right )\left ( \begin{array}{rr} u \\ d\end{array} \right ) = \left( \cos(\Theta)I + i\sin(\Theta)\sigma_y \right)\left ( \begin{array}{rr} u \\ d\end{array} \right )$$
Eq. (\ref{Xtransform}) is the Hilbert space SU(2) transformation that represents an SO(3) rotation about the x axis in real space and can be written
$$\left ( \begin{array}{rr} u \\ d\end{array} \right ) \rightarrow \left (\begin{array}{rr} \cos(\Theta) & i\sin(\Theta)\\i\sin(\Theta) & \cos(\Theta) \end{array} \right )\left ( \begin{array}{rr} u \\ d\end{array} \right ) = \left( \cos(\Theta)I + i\sin(\Theta)\sigma_x \right)\left ( \begin{array}{rr} u \\ d\end{array} \right )$$
And Eq. (\ref{Ztransform}) is the Hilbert space SU(2) transformation that represents an SO(3) rotation about the z axis in real space and can be written
$$\left ( \begin{array}{rr} u \\ d\end{array} \right ) \rightarrow \left (\begin{array}{cc} \cos(\Theta) + i\sin(\Theta) & 0\\0 & \cos(\Theta) -i\sin(\Theta) \end{array} \right )\left ( \begin{array}{rr} u \\ d\end{array} \right ) = \left( \cos(\Theta)I + i\sin(\Theta)\sigma_z \right)\left ( \begin{array}{rr} u \\ d\end{array} \right )$$
The SU(2) transformation matrix is often written ##e^{i\Theta\sigma_j}##, where ##j = \{x,y,z\}##, by expanding the exponential and using ##\sigma_j^2 = I##. Since we are in the ##\sigma_z## eigenbasis, this third SU(2) transformation means our spin measurement operator is just ##\sigma_z##. The invariance of ##|\psi_-\rangle## under all three SU(2) transformations makes sense, since the spin singlet state represents the conservation of a total spin angular momentum of S = 0, which is directionless, and each SU(2) transformation in Hilbert space corresponds to an element of SO(3) in real space.
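A small numerical check (an addition) of the ##e^{i\Theta\sigma_j}## identity just quoted, using SciPy's matrix exponential:

```python
import numpy as np
from scipy.linalg import expm

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)
I2 = np.eye(2)

theta = 0.37  # arbitrary Hilbert-space angle
for s in (sx, sy, sz):
    lhs = expm(1j * theta * s)
    rhs = np.cos(theta) * I2 + 1j * np.sin(theta) * s
    assert np.allclose(lhs, rhs)   # holds because sigma_j**2 = I
print("exp(i*Theta*sigma_j) = cos(Theta) I + i sin(Theta) sigma_j verified")
```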
Now, since our state has the same functional form for any plane, we are free to choose any plane we like and not lose generality. Let us work in the eigenbasis of ##\sigma_1 = \sigma_z## so that ##\sigma_2 = \cos(\theta)\sigma_z + \sin(\theta)\sigma_x## in computing our correlation function for ##|\psi_-\rangle##. We have
$$\frac{1}{2}(\langle ud| - \langle du|)\sigma_z [\cos(\theta)\sigma_z + \sin(\theta)\sigma_x](|ud\rangle - |du\rangle) = -\cos(\theta) \label{PsiMinuscorr}$$
per the rules of the formalism. This agrees with Eq. (\ref{gencorrelations}) where we found the correlation function for the spin singlet state is ##-\hat{a}\cdot\hat{b}##, which is ##-\cos(\theta)## in this notation. Before continuing to the spin triplet states, let me briefly review the type of conservation represented by the spin singlet state.
The kind of conservation depicted by the spin singlet state is explained in my Insight Why the Quantum. Essentially, it is an “average-only” conservation principle applicable to the actual data, not some unobservable “hidden variables” (Figure 2).
###### Figure 2. Conservation of spin angular momentum for the spin singlet state in any plane, since the conserved S = 0 vector is directionless. Reading from left to right, as Bob rotates his SG magnets relative to Alice’s SG magnets for her +1 outcome, the average value of his outcome varies from –1 (totally down, arrow bottom) to 0 to +1 (totally up, arrow tip). This obtains per conservation of angular momentum on average. Bob can say exactly the same about Alice’s outcomes as she rotates her SG magnets relative to his SG magnets for his +1 outcome. That is, their outcomes can only satisfy conservation of angular momentum on average, because they only measure +1/-1, never a fractional result as would be required for conservation to hold on a trial-by-trial basis for different measurements. The physical reason behind ##\theta = 2\Theta## is evident in this figure.
What we see from this analysis is that the conserved spin angular momentum (S = 0), being directionless, has the same functional form in any plane defined by the two spin measurements. Now for the spin triplet states.
I will begin with ##|\phi_+\rangle##. The only SU(2) transformation that takes ##|\phi_+\rangle \rightarrow |\phi_+\rangle## is Eq. (\ref{Ytransform}). Thus, this state says we have rotational (SO(3)) invariance for our SG measurement outcomes in the x-z plane. Specifically, ##|\phi_+\rangle## says that when the SG magnets are aligned in the z direction the outcomes are always the same (##\frac{1}{2}## uu and ##\frac{1}{2}## dd). Since ##|\phi_+\rangle## has that same functional form under a rotation in the x-z plane per Eqs. (\ref{Ytransform}) & (\ref{sigmaOp1}), the outcomes are always the same (##\frac{1}{2}## uu and ##\frac{1}{2}## dd) for aligned SG magnets in the x-z plane. That is the conservation associated with this SU(2) symmetry. In this case however, since ##|\phi_+\rangle## is only invariant under Eq. (\ref{Ytransform}), we can only expect rotational invariance for our SG measurement outcomes in the x-z plane. This is confirmed by Eq. (\ref{gencorrelations}) where we see that the correlation function for arbitrarily oriented ##\sigma_1## and ##\sigma_2## is given by ##a_xb_x - a_yb_y + a_zb_z##. Thus, unless we restrict our measurements to the x-z plane, we don't have the rotationally invariant ##\hat{a}\cdot\hat{b}## analogous to the spin singlet state. Restricting our measurements to the x-z plane as with the spin singlet state gives us
$$\frac{1}{2}(\langle uu| + \langle dd|)\sigma_z [\cos(\theta)\sigma_z + \sin(\theta)\sigma_x](|uu\rangle + |dd\rangle) = \cos(\theta) \label{PhiPluscorr}$$
per the rules of the formalism in agreement with Eq. (\ref{gencorrelations}). I next consider ##|\phi_-\rangle##.
The only SU(2) transformation that leaves ##|\phi_-\rangle## invariant is Eq. (\ref{Xtransform}). Thus, this state says we have rotational (SO(3)) invariance for the SG measurement outcomes in the y-z plane. Since ##|\phi_-\rangle## is only invariant under Eq. (\ref{Xtransform}), we can only expect rotational invariance for our SG measurement outcomes in the y-z plane. This is confirmed by Eq. (\ref{gencorrelations}) where we see that the correlation function for arbitrarily oriented ##\sigma_1## and ##\sigma_2## for ##|\phi_-\rangle## is given by ##-a_xb_x + a_yb_y + a_zb_z##. Thus, unless we restrict our measurements to the y-z plane, we don’t have the rotationally invariant ##\hat{a}\cdot\hat{b}## analogous to the spin singlet state. Restricting our measurements to the y-z plane gives us
$$\frac{1}{2}(\langle uu| - \langle dd|)\sigma_z [\cos(\theta)\sigma_z + \sin(\theta)\sigma_y](|uu\rangle - |dd\rangle) = \cos(\theta) \label{PhiMinuscorr}$$
per the rules of the formalism in agreement with Eq. (\ref{gencorrelations}).
Finally, the only SU(2) transformation that leaves ##|\psi_+\rangle## invariant is Eq. (\ref{Ztransform}). Thus, this state says we have rotational (SO(3)) invariance for our SG measurement outcomes in the x-y plane. But, unlike the situation with ##|\psi_-\rangle##, we will need to transform to either the ##\sigma_x## or ##\sigma_y## eigenbasis to see what we are going to find in the x-y plane. We can either transform first from the ##\sigma_z## eigenbasis to the ##\sigma_x## eigenbasis and then look for our SU(2) invariance transformation, or first transform from the ##\sigma_z## eigenbasis to the ##\sigma_y## eigenbasis. I will do both to show how they each work and give self-consistent results.
To go to the ##\sigma_x## eigenbasis from the ##\sigma_z## eigenbasis we use Eq. (\ref{Ytransform}) with ##\Theta = 45^o##
$$
\begin{aligned}
&|u\rangle \rightarrow \frac{1}{\sqrt{2}}|u\rangle + \frac{1}{\sqrt{2}}|d\rangle \\
&|d\rangle \rightarrow -\frac{1}{\sqrt{2}}|u\rangle + \frac{1}{\sqrt{2}}|d\rangle
\end{aligned}
$$
This takes ##|\psi_+\rangle## in the ##\sigma_z## eigenbasis to ##-|\phi_-\rangle## in the ##\sigma_x## eigenbasis and we know the transformation that leaves this invariant is Eq. (\ref{Xtransform}) which then gives a spin measurement operator of ##\cos(\theta)\sigma_x + \sin(\theta)\sigma_y##, since we have simply switched the ##\sigma_z## eigenbasis with the ##\sigma_x## eigenbasis. As shown in Eq. (\ref{gencorrelations}), the correlation function for arbitrarily oriented ##\sigma_1## and ##\sigma_2## for ##|\psi_+\rangle## is given by ##a_xb_x + a_yb_y - a_zb_z##. Thus, unless we restrict our measurements to the x-y plane, we do not have the rotationally invariant ##\hat{a}\cdot\hat{b}## analogous to the spin singlet state. Restricting our measurements to the x-y plane gives us
$$\frac{1}{2}(\langle uu| - \langle dd|)\sigma_x [\cos(\theta)\sigma_x + \sin(\theta)\sigma_y](|uu\rangle - |dd\rangle) = \cos(\theta) \label{PsiPluscorr}$$
where ##|u\rangle## and ##|d\rangle## are now the eigenstates for ##\sigma_x##. That is, ##|u\rangle = \left ( \begin{array}{rr} 1/\sqrt{2} \\ 1/\sqrt{2}\end{array} \right )## and ##|d\rangle = \left ( \begin{array}{rr} -1/\sqrt{2} \\ 1/\sqrt{2}\end{array} \right )##, so that ##\sigma_x|u\rangle = |u\rangle##, ##\sigma_x|d\rangle = -|d\rangle##, ##\sigma_y|u\rangle = i|d\rangle##, and ##\sigma_y|d\rangle = -i|u\rangle##. Again, this agrees with Eq. (\ref{gencorrelations}).
If we instead start by going to the ##\sigma_y## eigenbasis from the ##\sigma_z## eigenbasis using Eq. (\ref{Xtransform}) with ##\Theta = 45^o##
$$
\begin{aligned}
&|u\rangle \rightarrow \frac{1}{\sqrt{2}}|u\rangle + i\frac{1}{\sqrt{2}}|d\rangle \\
&|d\rangle \rightarrow i\frac{1}{\sqrt{2}}|u\rangle + \frac{1}{\sqrt{2}}|d\rangle
\end{aligned}
$$
we take ##|\psi_+\rangle## in the ##\sigma_z## eigenbasis to ##i|\phi_+\rangle## in the ##\sigma_y## eigenbasis and we know the SU(2) transformation that leaves this invariant is Eq. (\ref{Ytransform}) giving a spin measurement operator of ##\cos(\theta)\sigma_y + \sin(\theta)\sigma_x## since we have simply switched the ##\sigma_z## eigenbasis with the ##\sigma_y## eigenbasis. Thus, we have
$$\frac{-i}{\sqrt{2}}(\langle uu| + \langle dd|)\sigma_y [\cos(\theta)\sigma_y + \sin(\theta)\sigma_x]\frac{i}{\sqrt{2}}(|uu\rangle + |dd\rangle) = \cos(\theta)$$
where ##|u\rangle## and ##|d\rangle## are now the eigenstates for ##\sigma_y##. That is, ##|u\rangle = \left ( \begin{array}{rr} -i/\sqrt{2} \\ 1/\sqrt{2}\end{array} \right )## and ##|d\rangle = \left ( \begin{array}{rr} 1/\sqrt{2} \\-i/\sqrt{2}\end{array} \right )##, so that ##\sigma_y|u\rangle = |u\rangle##, ##\sigma_y|d\rangle = -|d\rangle##, ##\sigma_x|u\rangle = |d\rangle##, and ##\sigma_x|d\rangle = |u\rangle##. Again, this agrees with Eq. (\ref{gencorrelations}).
What does all this mean? Obviously, the invariance of each of the spin triplet states under its respective SU(2) transformation in Hilbert space represents the conserved spin angular momentum S = 1 for each of the planes x-z (##|\phi_+\rangle##), y-z (##|\phi_-\rangle##), and x-y (##|\psi_+\rangle##) in real space. Specifically, when the SG magnets are aligned anywhere in the respective plane of symmetry, the outcomes are always the same (##\frac{1}{2}## uu and ##\frac{1}{2}## dd). It is a planar conservation according to our analysis and our experiment would determine which plane (for work with ##|\phi_+\rangle## see Dehlinger, D., & Mitchell, M.W.: Entangled photons, nonlocality, and Bell inequalities in the undergraduate laboratory. American Journal of Physics 70, 903-910 (2002)). If you want to model a conserved S = 1 for some other plane, you simply create a superposition, i.e., expand in the spin triplet basis. And in that plane, you’re right back to the mysterious violation of the Bell inequality per conserved spin angular momentum via a correlation function of ##\cos(\theta)##, as with any of the spin triplet states. To see that let me simply revise the argument I used in Why the Quantum for the spin singlet state.
We have two sets of data, Alice’s set and Bob’s set. They were collected in N pairs with Bob’s(Alice’s) SG magnet at ##\theta## relative to Alice’s(Bob’s). We want to compute the correlation of these N pairs of results which is
$$\langle \alpha,\beta \rangle =\frac{(+1)_A(-1)_B + (+1)_A(+1)_B + (-1)_A(-1)_B + …}{N}$$
Now organize the numerator into two equal subsets, the first is that of all Alice’s +1 results and the second is that of all Alice’s -1 results
$$\langle \alpha,\beta \rangle =\frac{(+1)_A(\sum \mbox{BA+})+(-1)_A(\sum \mbox{BA-})}{N}$$
where ##\sum \mbox{BA+}## is the sum of all of Bob’s results corresponding to Alice’s +1 result and ##\sum \mbox{BA-}## is the sum of all of Bob’s results corresponding to Alice’s -1 result. Notice this is all independent of the formalism of quantum mechanics. Now, we rewrite that equation as
$$\langle \alpha,\beta \rangle =\frac{(+1)_A(\sum \mbox{BA+})}{N} + \frac{(-1)_A(\sum \mbox{BA-})}{N} = \frac{(+1)_A(\sum \mbox{BA+})}{2\frac{N}{2}} + \frac{(-1)_A(\sum \mbox{BA-})}{2\frac{N}{2}}$$
which is
$$\langle \alpha,\beta \rangle = \frac{1}{2}(+1)_A\overline{BA+} + \frac{1}{2}(-1)_A\overline{BA-} \label{consCorrel}$$
with the overline denoting average. Again, this correlation function is independent of the formalism of quantum mechanics. All we have assumed is that Alice and Bob measure +1 or -1 with equal frequency at any setting in computing this correlation in agreement with the relativity principle aka “no preferred reference frame” (NPRF). [See this Insight.] Now I introduce the conservation principle.
In classical physics, one would say the projection of the angular momentum vector of Alice's particle ##\vec{S}_A = +1\hat{a}## along ##\hat{b}## is ##\vec{S}_A\cdot\hat{b} = \cos{(\theta)}## where again ##\theta## is the angle between the unit vectors ##\hat{a}## and ##\hat{b}## (Figure 1). From Alice's perspective, had Bob measured at the same angle, i.e., ##\beta = \alpha##, he would have found the spin angular momentum vector of his particle was ##\vec{S}_B = +1\hat{a}##, so that ##\vec{S}_A + \vec{S}_B = \vec{S}_{Total}## with magnitude 2 (this is S = 1 in units of ##\frac{\hbar}{2} = 1##). Since he did not measure the spin angular momentum of his particle at the same angle, he should have obtained a fraction of the length of ##\vec{S}_B##, i.e., ##\vec{S}_B\cdot\hat{b} = +1\hat{a}\cdot\hat{b} = \cos{(\theta)}## (Figure 3). Of course, Bob only ever obtains +1 or -1 per NPRF, so Bob's outcomes can only average the required ##\cos{(\theta)}##. Thus, NPRF dictates
$$\overline{BA+} = 2P(+1,+1\mid \theta)(+1) + 2P(+1,-1\mid \theta)(-1) = \cos (\theta) \label{BA+}$$
NPRF also dictates ##P(-1,+1\mid \theta) = P(+1,-1\mid \theta)##, so we have
$$
\begin{aligned}
P(+1,+1\mid\theta) + P(+1,-1\mid \theta) & = \frac {1}{2} \\
P(+1,-1\mid\theta) + P(-1,-1\mid \theta) & = \frac {1}{2}
\end{aligned}
$$
These equations now allow us to uniquely solve for the joint probabilities
$$P(+1,+1 \mid \theta) = P(-1,-1 \mid \theta) = \frac{1}{2} \cos^2 \left(\frac{\theta}{2} \right) \label{QMjointLike}$$
and
$$P(+1,-1 \mid \theta) = P(-1,+1 \mid \theta) = \frac{1}{2} \sin^2 \left(\frac{\theta}{2} \right) \label{QMjointUnlike}$$
Now we can use these to compute ##\overline{BA-}##
$$\overline{BA-} = 2P(-1,+1\mid \theta)(+1) + 2P(-1,-1\mid \theta)(-1) = -\cos (\theta) \label{BA-}$$
Using Eqs. (\ref{BA+}) and (\ref{BA-}) in Eq. (\ref{consCorrel}) we obtain
$$\langle \alpha,\beta \rangle = \frac{1}{2}(+1)_A\cos \left(\theta\right) + \frac{1}{2}(-1)_A\left(-\cos \left(\theta\right)\right) = \cos \left(\theta\right)$$
which is precisely the correlation function for a spin triplet state in its symmetry plane using the qubit structure of Hilbert space that we obtained in Eq. (\ref{gencorrelations}) (also see this Scientific Reports paper). Thus, “average-only” conservation maps beautifully to our classical expectation (Figure 4). | 2021-10-25 07:39:25 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 20, "x-ck12": 0, "texerror": 0, "math_score": 0.992374062538147, "perplexity": 2493.5345858628134}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-43/segments/1634323587655.10/warc/CC-MAIN-20211025061300-20211025091300-00184.warc.gz"} |
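As a closing cross-check (an addition, not part of the original article), SymPy confirms that Eq. (\ref{BA+}) together with the two probability constraints pins down exactly the joint probabilities of Eqs. (\ref{QMjointLike}) and (\ref{QMjointUnlike}), and hence the ##\cos(\theta)## correlation:

```python
import sympy as sp

theta = sp.symbols('theta', real=True)
# P(+,+), P(+,-) = P(-,+), P(-,-)
p_pp, p_pm, p_mm = sp.symbols('p_pp p_pm p_mm', real=True)

eqs = [
    sp.Eq(2*p_pp - 2*p_pm, sp.cos(theta)),   # Bob's average given Alice's +1
    sp.Eq(p_pp + p_pm, sp.Rational(1, 2)),   # Alice measures +1 half the time
    sp.Eq(p_pm + p_mm, sp.Rational(1, 2)),   # Alice measures -1 half the time
]
sol = sp.solve(eqs, [p_pp, p_pm, p_mm], dict=True)[0]
assert sp.simplify(sol[p_pp] - sp.cos(theta/2)**2 / 2) == 0
assert sp.simplify(sol[p_pm] - sp.sin(theta/2)**2 / 2) == 0
assert sp.simplify(sol[p_mm] - sp.cos(theta/2)**2 / 2) == 0
print("correlation:", sp.simplify(2*sol[p_pp] - 2*sol[p_pm]))  # cos(theta)
```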
https://codeforces.com/problemset/problem/1491/D | D. Zookeeper and The Infinite Zoo
time limit per test
3 seconds
memory limit per test
256 megabytes
input
standard input
output
standard output
There is a new attraction in Singapore Zoo: The Infinite Zoo.
The Infinite Zoo can be represented by a graph with an infinite number of vertices labeled $1,2,3,\ldots$. There is a directed edge from vertex $u$ to vertex $u+v$ if and only if $u\&v=v$, where $\&$ denotes the bitwise AND operation. There are no other edges in the graph.
Zookeeper has $q$ queries. In the $i$-th query she will ask you if she can travel from vertex $u_i$ to vertex $v_i$ by going through directed edges.
Input
The first line contains an integer $q$ ($1 \leq q \leq 10^5$) — the number of queries.
The $i$-th of the next $q$ lines will contain two integers $u_i$, $v_i$ ($1 \leq u_i, v_i < 2^{30}$) — a query made by Zookeeper.
Output
For the $i$-th of the $q$ queries, output "YES" in a single line if Zookeeper can travel from vertex $u_i$ to vertex $v_i$. Otherwise, output "NO".
You can print your answer in any case. For example, if the answer is "YES", then the output "Yes" or "yeS" will also be considered a correct answer.
Example
Input
5
1 4
3 6
1 6
6 2
5 5
Output
YES
YES
NO
NO
YES
Note
The subgraph on vertices $1,2,3,4,5,6$ is shown below. | 2022-05-23 10:49:23 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5549917817115784, "perplexity": 441.81493282251705}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-21/segments/1652662558015.52/warc/CC-MAIN-20220523101705-20220523131705-00213.warc.gz"} |
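A solution sketch (an addition, not part of the official problem statement). An edge from $u$ to $u+v$ with $u\&v=v$ effectively moves set bits of $u$ toward more significant positions, so one known criterion is: $u$ can reach $v$ iff $u \le v$ and, for every $k$, the lowest $k$ bits of $u$ contain at least as many set bits as the lowest $k$ bits of $v$. A Python sketch:

```python
import sys

def reachable(u: int, v: int) -> bool:
    # Edges only shift 1-bits upward, so u must not exceed v, and every
    # low-order prefix of u must carry at least as many 1-bits as the
    # same prefix of v.
    if u > v:
        return False
    cu = cv = 0
    for k in range(30):          # u, v < 2^30
        cu += (u >> k) & 1
        cv += (v >> k) & 1
        if cu < cv:
            return False
    return True

def main() -> None:
    data = sys.stdin.buffer.read().split()
    q = int(data[0])
    out = []
    for i in range(q):
        u, v = int(data[2 * i + 1]), int(data[2 * i + 2])
        out.append("YES" if reachable(u, v) else "NO")
    sys.stdout.write("\n".join(out) + "\n")

if __name__ == "__main__":
    main()
```

This reproduces all five sample answers above (YES, YES, NO, NO, YES).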
https://miktex.org/packages/decision-table__source | # decision-table__source
An easy way to create Decision Model and Notation decision tables
Version: 0.0.4
Copyright: Simon Vandevelde, Francois Pantigny
License: lppl1.3c
Packaged on: 10/03/2021 20:11:13
Number of files: 3
Size on disk: 26.73 kB
The decision-table package allows for an easy way to generate decision tables in the Decision Model and Notation (DMN) format. This package ensures consistency in the tables (i.e. fontsize), and is thus a better alternative to inserting tables via images. The decision-table package adds the \dmntable command, with which tables can be created. This command expands into a tabular, so it can be used within a table or figure environment. Furthermore, this allows labels and captions to be added seamlessly. It is also possible to place multiple DMN tables in one table/figure environment. The package relies on nicematrix and l3keys2e. | 2022-12-03 16:49:28 | {"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8807447552680969, "perplexity": 5276.277376624861}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-49/segments/1669446710933.89/warc/CC-MAIN-20221203143925-20221203173925-00681.warc.gz"} |
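The page above names the \dmntable command but not its argument syntax, so the following LaTeX skeleton is only a placement sketch; the <table spec> placeholder stands for the arguments documented in the package manual:

```latex
\documentclass{article}
\usepackage{decision-table}

\begin{document}
\begin{table}
  \centering
  % \dmntable expands to a tabular, so \caption and \label work as usual;
  % consult the decision-table manual for the actual argument syntax.
  \dmntable{<table spec>}
  \caption{A DMN decision table.}
  \label{tab:dmn}
\end{table}
\end{document}
```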
https://zbmath.org/?q=an:0369.22009 | # zbMATH — the first resource for mathematics
Ergodic equivalence relations, cohomology, and von Neumann algebras. I. (English) Zbl 0369.22009
##### MSC:
22D40 Ergodic theory on groups
28D05 Measure-preserving transformations
20Jxx Connections of group theory with homological algebra and category theory
22D25 $C^*$-algebras and $W^*$-algebras in relation to group representations
46L10 General theory of von Neumann algebras
18G99 Homological algebra in category theory, derived categories and functors
##### References:
[1] Warren Ambrose, Representation of ergodic flows, Ann. of Math. (2) 42 (1941), 723–739. · Zbl 0025.26901
[2] Hirotada Anzai, Ergodic skew product transformations on the torus, Osaka Math. J. 3 (1951), 83–99. · Zbl 0043.11203
[3] Louis Auslander and Calvin C. Moore, Unitary representations of solvable Lie groups, Mem. Amer. Math. Soc. No. 62 (1966), 199. · Zbl 0204.14202
[4] Alain Connes, Une classification des facteurs de type III, Ann. Sci. École Norm. Sup. (4) 6 (1973), 133–252 (French). · Zbl 0274.46050
[5] Alain Connes and M. Takesaki, Flots des poids sur les facteurs de type III, C. R. Acad. Sci. Paris Sér. A 278 (1974), 945–948 (French). · Zbl 0274.46051
[6] A. Connes and M. Takesaki, The flow of weights on a factor of type III (preprint).
[7] Dang Ngoc Nghiem, On the classification of dynamical systems, Ann. Inst. H. Poincaré Sect. B (N.S.) 9 (1973), 397–425. · Zbl 0278.58009
[8] H. A. Dye, On groups of measure preserving transformations. I, Amer. J. Math. 81 (1959), 119–159. · Zbl 0087.11501
[9] H. A. Dye, On groups of measure preserving transformations. II, Amer. J. Math. 85 (1963), 551–576. · Zbl 0191.42803
[10] S. Eilenberg and S. Mac Lane, Cohomology theory in abstract groups. I, Ann. of Math. (2) 48 (1947), 51–78. MR 8, 367. · Zbl 0029.34001
[11] J. Feldman and D. A. Lind, Hyperfiniteness and the Halmos-Rohlin theorem for nonsingular Abelian actions, Proc. Amer. Math. Soc. 55 (1976), no. 2, 339–344. · Zbl 0302.46047
[12] Jacob Feldman and Calvin C. Moore, Ergodic equivalence relations, cohomology, and von Neumann algebras, Bull. Amer. Math. Soc. 81 (1975), no. 5, 921–924. · Zbl 0317.22002
[13] J. M. G. Fell, A Hausdorff topology for the closed subsets of a locally compact non-Hausdorff space, Proc. Amer. Math. Soc. 13 (1962), 472–476. · Zbl 0106.15801
[14] Toshihiro Hamachi, Yukimasa Oka, and Motosige Osikawa, Flows associated with ergodic non-singular transformation groups, Publ. Res. Inst. Math. Sci. 11 (1975/76), no. 1, 31–50. · Zbl 0316.28007
[15] Shizuo Kakutani, Induced measure preserving transformations, Proc. Imp. Acad. Tokyo 19 (1943), 635–641. · Zbl 0060.27406
[16] W. Krieger, On non-singular transformations of a measure space. I, Z. Wahrscheinlichkeitstheorie und Verw. Gebiete 11 (1969), 83–97. MR 39 #1628. · Zbl 0185.11901
[17] Wolfgang Krieger, On non-singular transformations of a measure space. I, II, Z. Wahrscheinlichkeitstheorie und Verw. Gebiete 11 (1969), 83–97; ibid. 11 (1969), 98–119. · Zbl 0185.11901
[18] Wolfgang Krieger, On constructing non-*isomorphic hyperfinite factors of type III, J. Functional Analysis 6 (1970), 97–109. · Zbl 0209.44601
[19] Wolfgang Krieger, On a class of hyperfinite factors that arise from null-recurrent Markov chains, J. Functional Analysis 7 (1971), 27–42. · Zbl 0215.25901
[20] Wolfgang Krieger, On the Araki-Woods asymptotic ratio set and non-singular transformations of a measure space, Contributions to Ergodic Theory and Probability (Proc. Conf., Ohio State Univ., Columbus, Ohio, 1970), Springer, Berlin, 1970, pp. 158–177. Lecture Notes in Math., Vol. 160.
[21] Wolfgang Krieger, On ergodic flows and the isomorphism of factors, Math. Ann. 223 (1976), no. 1, 19–70. · Zbl 0332.46045
[22] Kuratowski, Topologie, Warsaw–Lwów, 1933. · JFM 59.0563.02
[23] George W. Mackey, Point realizations of transformation groups, Illinois J. Math. 6 (1962), 327–335. · Zbl 0178.17203
[24] George W. Mackey, Ergodic theory and virtual groups, Math. Ann. 166 (1966), 187–207. · Zbl 0178.38802
[25] Calvin C. Moore, Extensions and low dimensional cohomology theory of locally compact groups. I, II, Trans. Amer. Math. Soc. 113 (1964), 40–63. · Zbl 0131.26902
[26] Calvin C. Moore, Extensions and low dimensional cohomology theory of locally compact groups. II, Trans. Amer. Math. Soc. 113 (1964), 64–86. MR 30 #2106. · Zbl 0131.26902
[27] Calvin C. Moore, Group extensions and cohomology for locally compact groups. III, Trans. Amer. Math. Soc. 221 (1976), no. 1, 1–33. · Zbl 0366.22005
[28] Calvin C. Moore, Group extensions and cohomology for locally compact groups. IV, Trans. Amer. Math. Soc. 221 (1976), no. 1, 35–58. · Zbl 0366.22006
[29] Joseph Max Rosenblatt, Equivalent invariant measures, Israel J. Math. 17 (1974), 261–270. · Zbl 0286.28014
[30] Shôichirô Sakai, C*-algebras and W*-algebras, Springer-Verlag, New York-Heidelberg, 1971. Ergebnisse der Mathematik und ihrer Grenzgebiete, Band 60. · Zbl 0233.46074
[31] Schmidt, Cohomology and skew products of ergodic transformations, Warwick, 1974 (preprint).
[32] Joel J. Westman, Cohomology for the ergodic actions of countable groups, Proc. Amer. Math. Soc. 30 (1971), 318–320. · Zbl 0229.28012
This reference list is based on information provided by the publisher or from digital mathematics libraries. Its items are heuristically matched to zbMATH identifiers and may contain data conversion errors. It attempts to reflect the references listed in the original paper as accurately as possible without claiming the completeness or perfect precision of the matching. | 2021-11-27 11:54:40 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6746233701705933, "perplexity": 975.9147376523651}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-49/segments/1637964358180.42/warc/CC-MAIN-20211127103444-20211127133444-00236.warc.gz"} |
https://burykibalozuvu.gloryland-church.com/perfect-numbers-book-37788ct.php | Last edited by Taujind
Thursday, April 23, 2020 | History
# Perfect numbers.
Richard W. Shoemaker
Written in English
Edition Notes
Contributions: National Council of Teachers of Mathematics
ID Numbers: Open Library OL21520297M; ISBN 10: 0873530810
Perfect number: a positive integer that is equal to the sum of its proper divisors. The smallest perfect number is 6, which is the sum of 1, 2, and 3. The discovery of such numbers is lost in prehistory, but it is known that the Pythagoreans (founded in the sixth century BCE) studied them.
The discovery of such numbers is lost in prehistory, but it is known that the Pythagoreans (founded c. BCE) studied. PERFECT NUMBERS 5 Must all perfect numbers be of Euclid’s type. Leonard Euler, in a posthumous paper, proved that every even perfect number is of this type.
Many ingenious proofs of this fact exist. Theorem 9 (Euler). If N is an even perfect number, then N can be written in the form N = 2n−1(2n −1), where 2n −1 is prime. Proof. What are the Perfect Numbers. Definition: A Perfect Number N is defined as any positive integer where the sum of its divisors minus the number itself equals the number.
The first few of these, already known to the ancient Greeks, are 6, 28,and A Perfect Number “n”, is a positive integer which is equal to the sum of its factors, excluding “n” itself. There are too many books about great numbers. You can read such biographies about great people as Abraham Lincoln by Carl Sandburg.
If you want to read biographies about a series of elite people, you may read Decisive Moments in History [1] by Ste. The traditional criteria for importance in number theory are aesthetic and historic.
What people find important is what's interesting to them. That differs from person to person. The concept of a perfect number is ancient. They've been studied. The following is a list of the known perfect numbers, and the exponents p that can be used to generate them (using the expression 2 p−1 × (2 p − 1)) whenever 2 p − 1 is a Mersenne even perfect numbers are of this form.
It is not known whether there are any odd perfect numbers. As of there are 51 known perfect numbers in total. For even perfect numbers, the ratio p. A number is called a perfect number if by adding all the positive divisors of the number (except itself), the result is the number itself.
6 is the first perfect number. Its divisors (other than the number itself: 6) are 1, 2, and 3 and 1 + 2 + 3 equals 6. Other perfect numbers incl and As you can see, these are regarded as even perfect numbers.
An even perfect number is a perfect number that is even, i.e., an even number n whose sum of divisors (including n itself) equals 2n. All known perfect numbers are even. Ochem and Rao (2012) have demonstrated that any odd perfect number must be larger than 10^1500. …progress made in the understanding of perfect numbers, and to discuss why perfect numbers continue to be one of the world's oldest unsolved puzzles.
2 Euclid's Perfect Number Theorem. Born in Alexandria in the fourth century B.C., the great mathematician Euclid was instrumental in the advances made in the study of perfect numbers. In Book 2…
COVID Resources. Reliable information about the coronavirus (COVID) is available from the World Health Organization (current situation, international travel).Numerous and frequently-updated resource results are available from this ’s WebJunction has pulled together information and resources to assist library staff as they consider how to handle coronavirus.
OFFSET: 1,1; COMMENTS: A number n is abundant if sigma(n) > 2n (cf. A), perfect if sigma(n) = 2n (this entry), deficient if sigma(n) < 2n. Numbers like 6 that equal the sum of their factors are called perfect numbers.
6 is the first perfect number. 4 is not a perfect number because the sum of its factors (besides 4 itself), 1 + 2, is less than 4. Numbers like 4 are known as deficient numbers. What does the word deficient mean?
What does the word deficient mean. For example, 6 is a perfect number, because 6 = 1 + 2 + 3. Write method Perfect that determines whether parameter value is a perfect number. Use this method in an app that determines and displays all the perfect numbers between 2 and Display the factors of each perfect number to confirm that the number is indeed perfect.".
The Arab mathematicians were also interested in the idea of perfect numbers, and in the late 1100s and early 1200s one Arab mathematician in particular wrote based on Nicomachus' work. Ismail ibn Fallus produced 10 perfect numbers (which is significant growth in the quest for perfect numbers), but…
It has been shown that there are no odd perfect numbers in the interval from 1 to 10^… . We do know that all even perfect numbers end in 6 or 8. You wanted a list of perfect numbers. Well, as of … (that is the date of my source), there were 30 known perfect numbers, beginning with…
Pixel Perfect Number Coloring Book offers a big library of /5(). Prime numbers of the form 2 p – 1 have come to be called Mersenne primes named in honor of Marin Mersenne (–), one of many people who have studied these numbers.
The four smallest perfect numbers, 6, 28, 496, and 8128, were known to the ancient Greek mathematicians. The Mersenne primes 2^p - 1 corresponding to these four perfect numbers…
Perfect Numbers of the Bible. "As for God His way is perfect." {2 Samuel} God is a being of perfect magnitude and we learn of his trials and tribulations through his word, the Bible.
The Book of Job. The book of Job is a non-fiction biography of a righteous man.
BIB B73 Sept. 26th The Book of Job The book of Job is a non-fiction biography of a righteous man. Show that every even perfect number except 6 6 6 is 1 1 1 mod 9 9 9. Summing the digits and iterating preserves the congruence class mod 9 9 9.
For even perfect numbers this is clear from Euclid-Euler; for odd perfect numbers, two odd prime factors would lead to a factor of 4 4 4 in σ (n) \sigma(n) σ (n), but 2 n 2n 2 n isn't divisible by 4 4 4.
1 Mersenne Primes and Perfect Numbers. Basic idea: try to construct primes of the form a^n - 1; a, n ≥ 1. E.g., 2^2 - 1 = 3, but 2^4 - 1 = 3·5; 2^3 - 1 = 7; 2^5 - 1 = 31; 2^6 - 1 = 63 = 3^2·7; 2^7 - 1 = 127; 2^11 - 1 = 2047 = (23)(89). Lemma: x^n - 1 = (x - 1)(x^(n-1) + x^(n-2) + … + x + 1). Corollary: (x - 1) | (x^n - 1). So for a^n - 1 to be prime, we need a = 2. Moreover, if n = md, we can apply the lemma with x…
Moreover,ifn =md,wecanapplythelemmawith x File Size: 61KB. $\begingroup$ I object to the idea that things like perfect numbers need practical applications to be studied or important.
Historically, perfect numbers were hugely important for almost religious reasons e.g. by the pythagoreans. As for the duplicate answer and your complaint that those answers only mention Mersenne primes, you should know that (even) perfect numbers and Mersenne primes are.
A natural number is perfect if its value is equal to the sum of all its positive divisors (excluding the number itself). The first perfect numbers are 6, 28, 496, and 8128. The mathematician Euclid proved that $2^{m-1}(2^m-1)$ is an even perfect number whenever $2^m-1$ is a prime. So far only even perfect numbers have been discovered, and the existence of odd perfect numbers remains open.
So far only even perfect numbers have been discovered, but the existence of odd perfect numbers was. THE PERFECT HARMONY OF THE NUMBERS OF THE HEBREW KINGS. By Harold Camping. Chapter 1. The Perfect Harmony of the Numbers of the Hebrew Kings.
In the book Adam When?, calculations were made to provide an exact chronology from the year of creation (11, B.C.) to the laying of the temple foundation in B.C. Applications of Mersenne numbers: signed/unsigned integers, towers of Hanoi. Applications of Fermat numbers: relation to constructible polygons.
But for perfect numbers the best I could find is: The earth was created in 6 days by God because 6 is perfect. Also, the cycle of the moon is 28 days. Perfect Numbers - A Case Study. Perfect numbers are those numbers that equal the sum of all their divisors including 1 and excluding the number itself.
Most numbers do not fit this description. At the heart of every perfect number is a Mersenne prime. All of the other divisors are either powers of 2 or powers of 2 times the Mersenne prime.
The number $2^{m-1}(2^m-1)$ is perfect if $2^m-1$ is a prime number. Euler showed that these exhaust all even perfect numbers. It is still unknown whether the set of even perfect numbers is finite or infinite, that is, it is unknown whether the set of Mersenne primes $2^m-1$ is finite or not.
It is also unknown whether or not there are any odd perfect numbers. Finding perfect numbers (optimization). Asked 9 years ago. If you're still looking for something to calculate perfect numbers: this goes through the first ten thousand pretty quickly, but the 33-million number is a little slower. I want to use my course material to write a book in the future.
Around 100 AD, Nicomachus of Gerasa lists the first four perfect numbers in the book 'Introduction to Arithmetica' and says in this text, "God created all things in six days, because the number six is perfect." Is this a coincidence?
Nicomachus gives a classification of numbers into three classes: abundant numbers, deficient numbers, and perfect numbers (those with σ(n) = n, where σ here sums only the proper divisors).
it is just 9+1, and 10 does really exsist, in my own research. Define perfect number. perfect number synonyms, perfect number pronunciation, perfect number translation, English dictionary definition of perfect number.
perfect number; Perfect numbers; Perfect numbers; Perfect Nutrition Center Europe; Perfect Nutrition Systems; Perfect octave; Perfect octave; Perfect octave; Perfect octave. Cataldi also showed = 2 19 - 1 was prime, yielding another perfect number, The correspondence of a French monk named Marin Mersenne became a seventeenth-century form of Lexis-Nexis.
Mersenne became interested in multiply perfect numbers, that is, numbers where σ 0 (N) = kN where k is some number greater than 1. (i) ∵ - 1 = = 72 - 3 = 72 - 15 = 57 - 5 = 54 - 17 = 40 - 7 = 40 - 19 = 21 - 9 = 96 21 - 21 = 0 96 - 11 = 85 i.e.
= 1+3+5+7+9+11+13+15+17+19+ Thus, is a perfact square.(ii) 55∵ 55 - 1 = 54 30 - 11 = 19 54 - 3 = 51 19 - 13 = 6 51 - 5 = 46 6 - 15 = -9 46 - 7 = 39 39 - 9 = 30Since, 55 cannot be expressed as the sum of successive old.
Happy Perfect Picture Book Friday, dear friends. Our story today is written by one of my nonfiction picture book idols, Laurie Wallmark. Last November, I got to have dinner with her at the NCTE (National Council of Teachers of English) conference. What fun!
NUMBERS IN MOTION: SOPHIE KOWALEVSKI – QUEEN OF MATHEMATICS. Written by Laurie Wallmark. We start by defining perfect numbers. A positive integer $$n$$ is called a perfect number if $$\sigma(n)=2n$$.
In other words, a perfect number is a positive integer which is the sum of its proper divisors. The first perfect number is 6, since $$\sigma(6)=12$$. You can also view this as $$6=1+2+3$$. With all that in mind, let’s take a look at our perfect Cleric.
Going by the numbers, we know that the most popular race is human, so we’ll go with the boring option here, which means either +1 to all stats or +1 to two and picking a bonus feat.
If 2^k - 1 is prime, then 2^(k-1)(2^k - 1) is perfect, and every even perfect number has this form. It turns out that for 2^k - 1 to be prime, k must also be prime, so the search for perfect numbers is the same as the search for Mersenne primes.
Armed with this information it does not take too long, even by hand, to find the next two perfect numbers. It's a Numbers Game. Basketball: The math behind the perfect bounce pass, the buzzer-beating bank shot, and so much more. (National Geographic Kids/ESPN) has 9 reviews and 6 ratings. Reviewer Jaemin wrote: I love basketball; my favorite player is Steph Curry.
Reviewer Jaemin wrote: I love basketball my favorite player is steph curry/5(6). Understand perfect numbers with this quiz and worksheet which correspond to the lesson. Topics you will be responsible for include defining a perfect number and identifying perfect numbers. | 2020-12-05 08:40:24 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.45290738344192505, "perplexity": 1171.8385787556479}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-50/segments/1606141747323.98/warc/CC-MAIN-20201205074417-20201205104417-00631.warc.gz"} |
http://ja.googology.wikia.com/wiki/%E6%8B%A1%E5%BC%B5%E5%9E%8BE%E3%82%B7%E3%82%B9%E3%83%86%E3%83%A0 | ## FANDOM
556 ページ
Following the convention of calling Bird's array notation BAN and Hyperfactorial Array Notation HAN, ExE is also called "SAN" (Saibian's Array Notation).
All notations in Extensible-E have a number of standardized traits. All notations can be expressed generically in the form:
Ea&a&a& ... &a&a
where the a's are all positive integer arguments, and the &'s are delimiters chosen from the delimiter set defined for the particular notation. Each notation also defines fundamental sequences for all decomposition delimiters.
Hyper-E only allows use of the single hyperion ( # ) as a delimiter.
Extended Hyper-E uses delimiters below #^#.
Cascading-E Notation uses delimiters below #^^#.
• Its extension, Limit Extension Cascading-E Notation uses delimiters below #^^#>#.
Extended Cascading-E Notation uses delimiters below #{#}#.
• Its extension, Hyper Extended Cascading-E Notation uses delimiters below {#,#,1,2}.
## Fundamental Laws
All ExE type notations follow 5 fundamental laws. These laws have priority: the earlier the law, the higher its priority. A given law is skipped only if its conditions are not met, in which case one proceeds to the next law. The first law whose conditions are met is the one that is executed. Such a law is guaranteed to exist because the last law has no requirements other than the failure of all the previous laws. The 5 laws are:
1. Base Case. If there is only a single argument: En = 10^n
2. Decomposition Case. If the last delimiter is decomposable: @m&n = @m&[n]m
3. Terminating Case. If the last argument = 1: @&1 = @
4. Expansion Case. If the last delimiter is not the proto-hyperion: @m&*#n = @m&m&*#(n-1)
5. Recursive Case. Otherwise: @m#n = @(@m#(n-1))
The laws are set up so many necessary conditions are implicit. For example, the decomposition case wouldn't apply unless there is more than one argument. This doesn't need to be explicitly stated because the decomposition case can only apply if the base case has already failed, which can only happen if there is more than one argument. Consequently although the last law has no conditions, in fact it can only apply if there is at least two arguments, the last argument is greater than one, and the last delimiter is the proto-hyperion.
For the lowest-level notations, some of the rules may never apply. For example, in xE#, there are no decomposable delimiters, so Rule 2 never applies. In E#, there is only the hyperion as a delimiter, so neither Rule 2 nor Rule 4 ever applies.
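As an illustration (my own sketch, not from the wiki): in that lowest notation E#, where the hyperion # is the only delimiter, Rules 2 and 4 never fire, and the remaining three laws already give a complete (if astronomically slow) evaluator:

```python
def hyper_e(args):
    """Evaluate E a1 # a2 # ... # ak in base 10, for Hyper-E restricted
    to the hyperion as the only delimiter.  Exact integers, but usable
    only for tiny arguments."""
    args = list(args)
    if len(args) == 1:                       # Law 1 (base case): En = 10^n
        return 10 ** args[0]
    if args[-1] == 1:                        # Law 3 (terminating): @&1 = @
        return hyper_e(args[:-1])
    # Law 5 (recursive): @ m # n = @ (@ m # (n-1))
    inner = hyper_e(args[:-1] + [args[-1] - 1])
    return hyper_e(args[:-2] + [inner])

print(hyper_e([2]))     # E2 = 100
print(hyper_e([2, 2]))  # E2#2 = E(E2) = 10^100, a googol
```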
http://www.show-my-homework.com/2016/04/maxwell-boltzmann.html | # Maxwell Boltzmann
You are given N atoms that are distinguishable with two possible energy levels $E_1 = 0$ and $E_2 = E$. What is the Maxwell-Boltzmann distribution (number of atoms $n_1$ and $n_2$) for N = 10000 atoms at
(i) $kT = 0.3E$,
(ii) $kT = E$,
(iii) $kT = 5E$, and
(iv) in the limit $kT→∞$?
The Maxwell-Boltzmann statistics for the number of particles found with energy $E_i$ is
$N_i/N = exp(-E_i/kT)/\sum_j exp(-E_j/kT)$
where the summation is done over all possible energies.
Thus for $E_1 = 0$ and $E_2 = E$ one has
$N_1 = N*1/(1+exp(-E/kT))$ and
$N_2 = N*exp(-E/kT)/(1+exp(-E/kT)) = N*1/(1+exp(+E/kT))\ (= N - N_1)$
For $kT = 0.3E$ one has $E/kT = 10/3$, so
$N_1 = 10000/(1+exp(-10/3)) = 9655.55 ≈ 9656$ and
$N_2 = 10000/(1+exp(+10/3)) = 344.45 ≈ 344$
For $kT = E$ one has
$N_1 = 10000/(1+1/e) = 7310.59 ≈ 7311$ and
$N_2 = 10000/(1+e) = 2689.41 ≈ 2689$
For $kT = 5E$ one has $E/kT = 0.2$, so
$N_1 = 10000/(1+exp(-0.2)) = 5498.34 ≈ 5498$ and
$N_2 = 10000/(1+exp(+0.2)) = 4501.66 ≈ 4502$
For $kT→∞$ the Boltzmann factor $exp(-E/kT)→1$, so the two levels become equally populated:
$N_1 = N_2 = 10000/2 = 5000$
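A quick numeric check of the four cases (an addition):

```python
import numpy as np

N = 10_000
cases = [(0.3, "kT = 0.3E"), (1.0, "kT = E"),
         (5.0, "kT = 5E"), (np.inf, "kT -> inf")]
for kT_over_E, label in cases:
    x = np.exp(-1.0 / kT_over_E)   # Boltzmann factor exp(-E/kT)
    n1 = N / (1 + x)               # ground level,  E1 = 0
    n2 = N - n1                    # excited level, E2 = E
    print(f"{label:10s}  N1 = {n1:6.0f}  N2 = {n2:6.0f}")
# kT = 0.3E   N1 =   9656  N2 =    344
# kT = E      N1 =   7311  N2 =   2689
# kT = 5E     N1 =   5498  N2 =   4502
# kT -> inf   N1 =   5000  N2 =   5000
```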
Reference
Maxwell Boltzmann Statistics | 2017-12-17 02:20:07 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8973349332809448, "perplexity": 948.7508508078959}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-51/segments/1512948592846.98/warc/CC-MAIN-20171217015850-20171217041850-00173.warc.gz"} |
https://www.clutchprep.com/chemistry/practice-problems/116628/a-1-10-g-sample-contains-only-glucose-c6-h12-o6-and-sucrose-c12-h22-o11-when-the-1 | # Problem: A 1.10-g sample contains only glucose (C6H12O6) and sucrose (C12H22O11 ). When the sample is dissolved in water to a total solution volume of 25.0 mL, the osmotic pressure of the solution is 3.78 atm at 298 K. What is the percent by mass of sucrose in the sample?
###### Problem Details
A 1.10-g sample contains only glucose (C6H12O6) and sucrose (C12H22O11 ). When the sample is dissolved in water to a total solution volume of 25.0 mL, the osmotic pressure of the solution is 3.78 atm at 298 K. What is the percent by mass of sucrose in the sample? | 2020-07-15 02:41:44 | {"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8152622580528259, "perplexity": 1497.0426897707487}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-29/segments/1593657154789.95/warc/CC-MAIN-20200715003838-20200715033838-00500.warc.gz"} |
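Since both sugars are nonelectrolytes, one standard route is to get the total moles of solute from $\pi = MRT$ and then solve two mass-balance equations for the glucose/sucrose split. A hedged R sketch of that approach (all variable names are ours, molar masses rounded):

```r
R_gas   <- 0.08206                        # L atm mol^-1 K^-1
n_total <- 3.78 * 0.0250 / (R_gas * 298)  # pi V = n R T, about 3.86e-3 mol of solute
M_glu <- 180.16                           # molar mass of glucose  (g/mol)
M_suc <- 342.30                           # molar mass of sucrose  (g/mol)
# Two unknowns (grams of each sugar), two equations:
#   g_glu / M_glu + g_suc / M_suc = n_total   (moles)
#   g_glu + g_suc = 1.10                      (mass)
A <- rbind(c(1 / M_glu, 1 / M_suc),
           c(1,         1))
g <- solve(A, c(n_total, 1.10))
100 * g[2] / 1.10                         # percent sucrose by mass, roughly 77%
```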
https://www.enotes.com/homework-help/what-two-steps-simplifying-radicals-can-either-376087 | # What are the two steps for simplifying radicals? Can either step be deleted? If you could add a step that might make it easier or easier to understand, what step would you add?
I teach 3 steps/rules for simplifying radicals:
(1) There can be no fractions in the radicand.
Use the "rule" `sqrt(a/b)=sqrt(a)/sqrt(b)` (True for any index -- thus `root(3)(a/b)=(root(3)(a))/(root(3)(b))` etc...)
(2) There can be no radicals in the denominator.
Use conjugation to clear the radical from the denominator.
`7/sqrt(2)=7/sqrt(2)*sqrt(2)/sqrt(2)=(7sqrt(2))/2`
This is somewhat complicated for other indices. We multiply both numerator and denominator by a radical that creates a perfect nth power in the radicand of the denominator where n is the index.
`7/root(3)(3)=7/root(3)(3)*root(3)(9)/root(3)(9)=(7root(3)(9))/3`
(3) There can be no perfect nth powers in the radicand where n is the index.
`sqrt(18x^4y^5)=sqrt(9*2*x^4*y^4*y)=sqrt(9x^4y^4)sqrt(2y)=3x^2y^2sqrt(2y)`
`root(3)(54)=root(3)(27*2)=root(3)(27)root(3)(2)=3root(3)(2)`
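Each of these simplifications can be sanity-checked numerically; a small R sketch with arbitrary positive test values:

```r
x <- 1.7; y <- 2.3                                   # arbitrary positive test values
all.equal(7 / sqrt(2),          7 * sqrt(2) / 2)
all.equal(7 / 3^(1/3),          7 * 9^(1/3) / 3)
all.equal(sqrt(18 * x^4 * y^5), 3 * x^2 * y^2 * sqrt(2 * y))
all.equal(54^(1/3),             3 * 2^(1/3))         # all four should return TRUE
```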
Approved by eNotes Editorial Team | 2022-09-27 02:08:14 | {"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9436533451080322, "perplexity": 961.3699383835512}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030334974.57/warc/CC-MAIN-20220927002241-20220927032241-00462.warc.gz"} |
https://www.lmfdb.org/Variety/Abelian/Fq/5/2/ag_s_abh_bs_ace | # Properties
Label: 5.2.ag_s_abh_bs_ace
Base field: $\F_{2}$
Dimension: $5$
Ordinary: No
$p$-rank: $3$
Principally polarizable: Yes
Contains a Jacobian: No
## Invariants
Base field: $\F_{2}$
Dimension: $5$
L-polynomial: $( 1 - 2 x + 2 x^{2} )^{2}( 1 - 2 x + 2 x^{2} - x^{3} + 4 x^{4} - 8 x^{5} + 8 x^{6} )$
Frobenius angles: $\pm0.161334789180$, $\pm0.250000000000$, $\pm0.250000000000$, $\pm0.327009058845$, $\pm0.739882802642$
Angle rank: $3$ (numerical)
This isogeny class is not simple.
## Newton polygon
$p$-rank: $3$
Slopes: $[0, 0, 0, 1/2, 1/2, 1/2, 1/2, 1, 1, 1]$
## Point counts
This isogeny class is principally polarizable, but does not contain a Jacobian.
| $r$ | $A(\F_{q^r})$ | $C(\F_{q^r})$ |
|---|---|---|
| 1 | 4 | -3 |
| 2 | 2600 | 5 |
| 3 | 100048 | 18 |
| 4 | 6890000 | 49 |
| 5 | 53126324 | 47 |
| 6 | 1040499200 | 62 |
| 7 | 32604824284 | 123 |
| 8 | 849412980000 | 193 |
| 9 | 35849268493168 | 522 |
| 10 | 1174955063165000 | 1065 |
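Both columns can be recomputed from the L-polynomial: writing $P(T) = \prod_i (1 - \alpha_i T)$ with Frobenius eigenvalues $\alpha_i$, one has $A(\F_{q^r}) = \prod_i (1 - \alpha_i^r)$ and $C(\F_{q^r}) = q^r + 1 - \sum_i \alpha_i^r$. A minimal R sketch (the convolution helper is our own):

```r
p1 <- c(1, -2, 2)                # 1 - 2x + 2x^2, coefficients in increasing degree
p2 <- c(1, -2, 2, -1, 4, -8, 8)  # 1 - 2x + 2x^2 - x^3 + 4x^4 - 8x^5 + 8x^6
poly_mult <- function(a, b) {    # polynomial product via convolution
  out <- numeric(length(a) + length(b) - 1)
  for (i in seq_along(a)) {
    idx <- i:(i + length(b) - 1)
    out[idx] <- out[idx] + a[i] * b
  }
  out
}
L <- poly_mult(poly_mult(p1, p1), p2)
alpha <- 1 / polyroot(L)         # Frobenius eigenvalues (reciprocals of the roots)
A <- sapply(1:10, function(r) round(Re(prod(1 - alpha^r))))
C <- sapply(1:10, function(r) round(2^r + 1 - Re(sum(alpha^r))))
rbind(A, C)                      # reproduces the table above
```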
## Decomposition and endomorphism algebra
Endomorphism algebra over $\F_{2}$
The isogeny class factors as 1.2.ac$^{2}$ $\times$ 3.2.ac_c_ab and its endomorphism algebra is a direct product of the endomorphism algebras for each isotypic factor. The endomorphism algebra for each factor is:
- 1.2.ac$^{2}$: $\mathrm{M}_{2}(\Q(\sqrt{-1}))$
- 3.2.ac_c_ab: 6.0.2464727.1
Endomorphism algebra over $\overline{\F}_{2}$
The base change of $A$ to $\F_{2^{4}}$ is 1.16.i$^{2}$ $\times$ 3.16.q_ey_yp. The endomorphism algebra for each factor is:
- 1.16.i$^{2}$: $\mathrm{M}_{2}(B)$, where $B$ is the quaternion algebra over $\Q$ ramified at $2$ and $\infty$
- 3.16.q_ey_yp: 6.0.2464727.1
All geometric endomorphisms are defined over $\F_{2^{4}}$.
Remainder of endomorphism lattice by field
• Endomorphism algebra over $\F_{2^{2}}$: the base change of $A$ to $\F_{2^{2}}$ is 1.4.a$^{2}$ $\times$ 3.4.a_i_ab. The endomorphism algebra for each factor is 1.4.a$^{2}$: $\mathrm{M}_{2}(\Q(\sqrt{-1}))$ and 3.4.a_i_ab: 6.0.2464727.1.
## Base change
This is a primitive isogeny class.
## Twists
Below is a list of all twists of this isogeny class.
Twist Extension Degree Common base change 5.2.ac_c_ab_i_aq $2$ (not in LMFDB) 5.2.ac_c_b_e_ai $2$ (not in LMFDB) 5.2.c_c_ab_e_i $2$ (not in LMFDB) 5.2.c_c_b_i_q $2$ (not in LMFDB) 5.2.g_s_bh_bs_ce $2$ (not in LMFDB) 5.2.a_a_d_c_ac $3$ (not in LMFDB) 5.2.ae_i_an_w_abi $6$ (not in LMFDB) 5.2.a_a_ad_c_c $6$ (not in LMFDB) 5.2.e_i_n_w_bi $6$ (not in LMFDB) 5.2.ae_k_ar_ba_abk $8$ (not in LMFDB) 5.2.ac_ac_h_a_am $8$ (not in LMFDB) 5.2.ac_g_aj_q_au $8$ (not in LMFDB) 5.2.a_c_ab_g_ae $8$ (not in LMFDB) 5.2.a_c_b_g_e $8$ (not in LMFDB) 5.2.c_ac_ah_a_m $8$ (not in LMFDB) 5.2.c_g_j_q_u $8$ (not in LMFDB) 5.2.e_k_r_ba_bk $8$ (not in LMFDB) 5.2.a_a_ad_c_c $12$ (not in LMFDB) 5.2.ac_a_d_e_ao $24$ (not in LMFDB) 5.2.ac_e_af_m_as $24$ (not in LMFDB) 5.2.c_a_ad_e_o $24$ (not in LMFDB) 5.2.c_e_f_m_s $24$ (not in LMFDB) | 2020-06-06 18:09:27 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9756602048873901, "perplexity": 3878.0231619534443}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-24/segments/1590348517506.81/warc/CC-MAIN-20200606155701-20200606185701-00284.warc.gz"} |
https://www.physicsforums.com/threads/question-on-the-slope-of-the-tangent.159462/ | # Question on the slope of the tangent
Find the points on the graph of y= (1/3)x^3-5x- (4/x) at which the slope of the tangent is horizontal.
what i know:
- we have to use m=[f(a+h)-f(a)]/h
- if we change the equation we can get 3x^4 - 15x^2 -12
- the slope of the tangent is zero.
THANX
jtbell
Mentor
hint: how does the derivative of a function relate to the slope of the tangent of its graph?
Gib Z
Homework Helper
Find the derivative, and set it equal to the slope of a horizontal line. What is that?
HallsofIvy
Homework Helper
What everyone is saying is "DO IT"! By the way, are you really required to use the difference-quotient formula? It's not too difficult, but it is tedious, and most problems like this allow the use of derivative formulas.
jtbell
Mentor
Find the derivative, and set it equal to the slope of a horizontal line. What is that?
Do you mean, "what is the slope of a horizontal line?"
Draw a graph that shows a horizontal line. Pick two points $(x_1,y_1)$ and $(x_2,y_2)$ on the line. Do you know how to calculate the slope of a line from two points?
Gib Z
Homework Helper
yes thats what i meant, but i knew the answer..set the derivative to zero is what i meant | 2020-04-07 23:49:59 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5397285223007202, "perplexity": 400.02111913779163}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-16/segments/1585371806302.78/warc/CC-MAIN-20200407214925-20200408005425-00398.warc.gz"} |
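For completeness, the hints above lead to the following worked sketch, here done in R with symbolic differentiation via stats::D. Note that $\frac{d}{dx}(-4/x) = +4/x^2$, so the quartic obtained by clearing denominators has a positive constant term (the $-12$ quoted earlier has a sign slip):

```r
f  <- expression(x^3/3 - 5*x - 4/x)
df <- D(f, "x")   # mathematically x^2 - 5 + 4/x^2
# Setting the derivative to zero and multiplying through by x^2:
#   x^4 - 5x^2 + 4 = 0  =>  (x^2 - 1)(x^2 - 4) = 0  =>  x = -2, -1, 1, 2
x <- c(-2, -1, 1, 2)
eval(df)          # all zeros: the tangent is horizontal at these x
eval(f[[1]])      # the corresponding y-coordinates of the four points
```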
http://www.mathnet.ru/php/contents.phtml?wshow=issue&jrnid=mzm&series=0&year=1994&volume=56&volume_alt=&issue=1&issue_alt=&option_lang=eng | RUS ENG JOURNALS PEOPLE ORGANISATIONS CONFERENCES SEMINARS VIDEO LIBRARY PERSONAL OFFICE
General information Latest issue Archive Impact factor Subscription Guidelines for authors License agreement Submit a manuscript Search papers Search references RSS Latest issue Current issues Archive issues What is RSS
Mat. Zametki: Year: Volume: Issue: Page: Find
- First and second order differentiation operators with weight functions of variable sign (A. P. Gurevich, A. P. Khromov), p. 3
- Extrapolation of functions in the Denjoy class on a star-shaped compact set (L. N. Znamenskaya), p. 16
- Orthogonal invariants of skew-symmetric matrices (N. V. Ilyushechkin), p. 26
- On the solvability of nonlinear equations of Schrödinger type in the class of rapidly oscillating functions (L. A. Kalyakin, S. G. Glebov), p. 32
- Attractors of resonance wave type equations: Discontinuous oscillations (Yu. S. Kolesov), p. 41
- On the asymptotics near the piecewise smooth boundary of singular solutions of semilinear elliptic equations (V. A. Kondrat'ev, V. A. Nikishkin), p. 50
- Divergence almost everywhere of square partial sums of Fourier–Walsh series of integrable functions (S. F. Lukomskii), p. 57
- Convergence of Fourier series for functions in the classes of Besov–Lizorkin–Triebel (A. P. Petukhov), p. 63
- The formula of the regularized trace for the Laplace–Beltrami operator with odd potential on the sphere $S^2$ (V. E. Podolskii), p. 71
- Pointwise multiplicators in weighted Sobolev spaces on a half-line (V. S. Rychkov), p. 78
- $\operatorname{RUC}$-bases in $E(L_\infty\overline\otimes B(H))$ and $F(C_E)$ (F. A. Sukochev), p. 88
- Continuous solutions of a generalized Cauchy–Riemann system with a finite number of singular points (A. Tungatarov), p. 105
- Remark on the Serre $(\mathrm{mod} 5)$-invariant for groups of type $E_8$ (V. I. Chernousov), p. 116
- On a simultaneous approximation of logarithms and algebraic powers of algebraic numbers (A. A. Shmelev), p. 122
- The Smetanich logic $T^{\Phi}$ and two definitions of a new intuitionistic connective (A. D. Yashin), p. 135

Brief Communications
- A criterion for the Volterra property of boundary value problems for Sturm–Liouville equations (B. N. Biyarov, S. A. Dzhumabaev), p. 143
- On a class of primality criteria (M. A. Vsemirnov), p. 146
- Computability by nondeterministic program and the Moschovakis search computability (V. D. Solov'ev), p. 149
- Letter to the editor (A. V. Domrin), p. 153
- Letter to the editor (Nguyên Thị Thiêu Hoa), p. 153
- Letter to the editor (I. V. Protasov), p. 153
| 2019-03-20 20:41:23 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.4780920445919037, "perplexity": 1532.248678993984}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-13/segments/1552912202450.86/warc/CC-MAIN-20190320190324-20190320212050-00006.warc.gz"}
https://ideas.repec.org/a/inm/ormnsc/v35y1989i3p270-284.html | # Discount Rates Inferred from Decisions: An Experimental Study
## Author Info
• Uri Benzion
(Technion---Israel Institute of Technology, Haifa, Israel and Baruch College, City University of New York, New York, New York 10010)
• Amnon Rapoport
(University of Haifa, Haifa, Israel and Department of Psychology, University of North Carolina, Chapel Hill, North Carolina 27514)
• Joseph Yagil
(University of Haifa, Haifa, Israel and New York University, New York, New York 10006)
## Abstract
Two hundred and four students of economics and finance participated in an intertemporal choice experiment which manipulated three dimensions in a $4 \times 4 \times 4$ factorial design: scenario (postponing a receipt, postponing a payment, expediting a receipt, expediting a payment), time delay (0.5, 1, 2, and 4 years), and size of cashflow ($40, $200, $1000, and $5000). Individual discount rates were inferred from the responses, and then used to test competitively four hypotheses regarding the behavior of discount rates. The classical hypothesis asserting that the discount rate is uniform across scenarios, time delays, and sums of cashflow was flatly rejected. A market segmentation approach was found lacking. The results support an implicit risk hypothesis according to which delayed consequences are associated with an implicit risk value, and an added compensation hypothesis which asserts that individuals require compensation for a change in their financial position.
File URL: http://dx.doi.org/10.1287/mnsc.35.3.270
## Bibliographic Info
Article provided by INFORMS in its journal Management Science.
Volume (Year): 35 (1989)
Issue (Month): 3 (March)
Pages: 270-284
Handle: RePEc:inm:ormnsc:v:35:y:1989:i:3:p:270-284
This information is provided to you by IDEAS at the Research Division of the Federal Reserve Bank of St. Louis using RePEc data. | 2016-07-28 14:38:57 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.23618732392787933, "perplexity": 2794.431870361245}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-30/segments/1469257828283.6/warc/CC-MAIN-20160723071028-00105-ip-10-185-27-174.ec2.internal.warc.gz"} |
https://socratic.org/questions/how-do-you-solve-the-following-system-x-y-3-4x-5y-23-0 | # How do you solve the following system: x-y=3 , 4x-5y-23=0 ?
Mar 15, 2016
$-4(x - y) = -4 \cdot 3$
$+\; 4x - 5y = 23$
$-y = 11$, so $y = -11$
$-5(x - y) = -5 \cdot 3$
$+\; 4x - 5y = 23$
$-x = 8$, so $x = -8$
Therefore, $x = -8$ and $y = -11$
#### Explanation:
To solve this problem, you must first solve for one variable ($y$) and then the other ($x$). To solve for $y$, we eliminate the $x$ variable by multiplying the first equation by $-4$ on both sides:

$-4(x - y) = -4 \cdot 3 \;\Rightarrow\; -4x + 4y = -12$

Then we add the two equations:

$-4x + 4y = -12$
$+\; 4x - 5y = 23$

This gives us $(-4x + 4x) + (4y - 5y) = -12 + 23$, i.e. $-y = 11$, so $y = -11$.

To solve for $x$ we then eliminate the $y$ variable by multiplying the first equation by $-5$ on both sides:

$-5(x - y) = -5 \cdot 3 \;\Rightarrow\; -5x + 5y = -15$

Then we add the two equations:

$-5x + 5y = -15$
$+\; 4x - 5y = 23$

This gives us $(-5x + 4x) + (5y - 5y) = -15 + 23$, i.e. $-x = 8$, so $x = -8$.
You can check the answer by substituting -8 for $x$ and -11 for $y$, and you will find that both equations are satisfied. | 2020-08-09 01:45:28 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 30, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8524721264839172, "perplexity": 242.1187867601804}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-34/segments/1596439738380.22/warc/CC-MAIN-20200809013812-20200809043812-00439.warc.gz"} |
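The same check can be done in one line of R by treating the system as a matrix equation (a minimal sketch, base R only):

```r
A <- rbind(c(1, -1),   #  x -  y = 3
           c(4, -5))   # 4x - 5y = 23
solve(A, c(3, 23))     # returns x = -8, y = -11
```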
http://fcnt.arezzo5stelle.it/plotting-random-effects-in-r.html | # Plotting Random Effects In R
3 odds ratios, mean difference and incidence rate ratio) for different types of data (e. Plotting separate slopes with geom_smooth() The geom_smooth() function in ggplot2 can plot fitted lines from models with a simple structure. In this post, I want to focus on the simplest of questions: How do I generate a random number? The answer depends on what kind of random number you want to generate. Random Integer Generator. glmer(fit, type = "fe", sort = TRUE) To summarize, you can plot random and fixed effects in the way as shown above. Random Effect Models The preceding discussion (and indeed, the entire course to this point) has been limited to fixed effects" models. In particular, I compare output from the lm() command with that from a call to lme(). Fixed and random factors can be nested or crossed with each other, depending on. Fonton N, Atindogbe G, Honkonnou N, and Dohou R. Single factors (~g) or crossed factors (~g1*g2) are. type = "std2" Forest-plot of standardized beta values, however, standardization is done by dividing by two sd (see 'Details'). It is also a R data object like a vector or data frame. That is, qqmath is great at plotting the intercepts from a hierarchical model with their errors around the point estimate. Function adonis evaluates terms sequentially. It internally calls via. Random forest is one of those algorithms which comes to the mind of every data scientist to apply on a given problem. effect, and summary. All packages except MIXOR can provide estimates of the random effects. But why?! Well, this will become clear if we understand what our interaction effect really means. 1 How do we describe populations? KEYWORDS: Blooms: Remember 2. None of the above. glmer(fit, type = "re. To obtain Type III SS, vary the order of variables in the model and rerun the analyses. , the fixed effects) and the population variation (i. The trials of intravenous magnesium after myocardial infarction provide an extreme example of the differences between fixed and random effects analyses that can arise in the presence of funnel plot asymmetry. Main Effects Residual Plots. 10 means that 10 percent of the variance in Y is predictable from X; an R 2 of 0. Let’s get started. Tutorial index. A normal probability plot of the effects is shown below. Model I and Model II anova. The syntax for including a random effect in a formula is shown below. Hundreds of charts are displayed in several sections, always with their reproducible code available. It covers a many of the most common techniques employed in such models, and relies heavily on the lme4 package. field (Intercept) 16. Nested plots with two plot sizes (12. The trace plot has a stationary pattern, which is what we would like to see. Plotting separate slopes with geom_smooth() The geom_smooth() function in ggplot2 can plot fitted lines from models with a simple structure. Alternative names: split-plot design; mixed two-factor within-subjects design; repeated measures analysis using a split-plot design; univariate mixed models approach with subject as a random effect. Note that each point on the plot corresponds to the odds ratio of each level of the fixed effect period relative to period=1. This is a basic introduction to some of the basic plotting commands. A video showing basic usage of the "lme" command (nlme library) in R. Pause During Mplus Analysis. Spatial Statistics using R-INLA and Gaussian Markov random fields DavidBolinandJohanLindstrom 1 Introduction In this lab we will look at an example of how to use the SPDE models in the. 
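Much of this page circles around fitting a mixed model and then plotting its random and fixed effects. A minimal, self-contained R sketch using lme4's built-in sleepstudy data (the model and plotting choices are ours, for illustration):

```r
library(lme4)
library(lattice)

# Random intercept and slope for Days, grouped by Subject
m <- lmer(Reaction ~ Days + (Days | Subject), data = sleepstudy)

fixef(m)                        # fixed effects: population-level intercept and slope
re <- ranef(m, condVar = TRUE)  # conditional modes of the random effects

dotplot(re)                     # "caterpillar" plot of the random effects
qqmath(re)                      # qq-plot of the random effects
```

dotplot() gives the caterpillar plot of the group-level deviations with interval bars, while fixef() and ranef() separate the population-level estimates from those deviations.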
The dots in a scatter plot not only report the values of individual data points, but also patterns when the data are taken as a whole. Graphing change in SPSS The simplest way is to produce a scatter plot of the variable you are interested in over time, this is also called a profile or spaghetti plot. An example of blocking factor might be the gender of a patient (by blocking on gender), this is a source of variability controlled for, leading to greater accuracy. -You cannot make inferences to a larger experiment. Keywords: discrete choice models, random parameters, simulated maximum likelihood, R, individual-speci c. For the random intercept model, this thing that we're taking the covariance of, is just u j + e ij and we've actually written this here as r ij because, if you remember, in the variance components model, when we were calculating residuals we actually defined r ij to be just u j + e ij. So let's inspect our profile plots. qq-plot of random effects. And then this last point, the residual is positive. Figure 8 shows a diagnostic graph that contains a trace plot, a histogram and density plots for our MCMC sample, and a correlegram. This model is a three-level random intercepts model, which splits the variance between lecturers, students, and the residual variance. The summary effect and its confidence interval are displayed at the bottom. Use an image as a free-writing exercise. n is of length > 1, random effects indicated by the values in sample. ## ## Random effects: ## Groups Name Variance Std. Not necessarily that is. That is, qqmath is great at plotting the intercepts from a hierarchical model with their errors around the point estimate. only parameter in the random part of the model. I illustrate this with an analysis of Bresnan et al. For details see here Epil. lmer and sjp. Select the data in which we want to plot the 3D chart. bmeta is a R package that provides a collection of functions for conducting meta-analyses and meta-regressions under a Bayesian context, using JAGS. qq") Probability curves of odds ratios. args = list ( family = "binomial" ), se = FALSE ) par ( mar = c ( 4 , 4 , 1 , 1 )) # Reduce some of the margins so that the plot fits better plot ( dat $mpg , dat$ vs ) curve ( predict ( logr_vm , data. JMP - AN INTRODUCTORY USER'S GUIDE by Susan J. population distribution b. The table result showed that the McFadden Pseudo R-squared value is 0. To produce a forest plot, we use the meta-analysis output we just created (e. With roots dating back to at least 1662 when John Graunt, a London merchant, published an extensive set of inferences based on mortality records, survival analysis is one of the oldest subfields of Statistics [1]. Weibull plot The fit of a Weibull distribution to data can be visually assessed using a Weibull plot. The first argument is the formula object describing both the fixed-effects and random effects part of the model, with the response on the left of a ~ operator and the terms, separated by + operators, on the right. The R chart is used to evaluate the consistency of. Pareto plots, main effects and Interactions plots can be automatically displayed from the Data Display tool for study and investigation. plot_model() is a generic plot-function, which accepts many model-objects, like lm, glm, lme, lmerMod etc. RANDOM_WALK_2D_SIMULATION, a MATLAB program which simulates a random walk in a 2D region. Another diagnostic plot is the qq-plot for random effects. Model I and Model II anova. 
The workshop covers the new General Cross-Lagged Panel Model (GCLM) in Mplus. re requests the GLS random-effects (mixed) estimator. This is valid simple random sampling, because every part of the study area is equally likely to be sampled and the location of one line does not affect the location. A main‐effects plot clearly shows that depositional effects are the strongest, especially contrasting high versus either medium or low depositional areas along mMDS axis 1 (Figure 3a). I’m not super familiar with all that ggpubr can do, but I’m not sure it includes a good “interaction plot” function. • Caution if random effects return meaningfully different results from fixed effects. For example, in many experiments. + ( effect expression | groups ) The following are a few examples of specifying random effects. In statistics, a random effects model, also called a variance components model, is a statistical model where the model parameters are random variables. In R, we’ll use the simple plot function to compare the model-predicted values to the observed ones. I found the combination of R/ggplot/maps package extremely flexible and powerful, and produce nice looking map based visualizations. In addition, it provides the weight for each study; the effect measure, method and the model used to perform the meta-analysis; the confidence intervals used; the effect estimate from each study, the overall effect estimate, and the statistical significance of the analysis. (PDF) Table S1. Cases or individuals can and do move into and out of the population. args = list ( family = "binomial" ), se = FALSE ) par ( mar = c ( 4 , 4 , 1 , 1 )) # Reduce some of the margins so that the plot fits better plot ( dat $mpg , dat$ vs ) curve ( predict ( logr_vm , data. extract() function from texreg package) as well as plot_model() function from the sjPlot package. This is a basic introduction to some of the basic plotting commands. ) The action “na. Use type = "re. It is assumed that you know how to enter data or read data files which is covered in the first chapter, and it is assumed that you are familiar with the different data types. when r is much smaller than 1 in magnitude. The random walk pattern shown in animation 2 indicates problems with the chain. Random Image. R Pubs by RStudio. Fixed effects model If the effect is the same in all. , random intercept / subject=block*year. A normal 3d surface plot in excel appears below, but we cannot read much from this chart as of now. For mixed effects models, only fixed effects are. This indicates that everyone has a different change rate. Scatter plots: This type of graph is used to assess model assumptions, such as constant variance and linearity, and to identify potential outliers. We apply five fertilizers, each of different quality, on five plots of land each of wheat. type = "std" Forest-plot of standardized coefficients. ) The action “na. The ggplot2 package is extremely flexible and repeating plots for groups is quite easy. The effects are instantaneous, they can be permanent or last for up to 24 hours depending on what is appropriate and/or funny. On the other hand, the log likelihood in the R output is obtained using truly Weibull density. Instant access to millions of Study Resources, Course Notes, Test Prep, 24/7 Homework Help, Tutors, and more. effects: Plot random effects of model in Bayesthresh: Bayesian thresholds mixed-effects models for categorical data rdrr. This form allows you to generate random integers. 
We have some repeated observations (Time) of a continuous measurement, namely the Recall rate of some words, and several explanatory variables, including random effects (Auditorium where the test took place; Subject name); and fixed effects, such as Education, Emotion (the emotional connotation of the word to remember), or $\small \text{mgs. observations independent of time. Discussion includes extensions into generalized mixed models and realms beyond. -You cannot make inferences to a larger experiment. So I present to you: a list of random potion effects. It comprises data, a model description, fitted coefficients, covariance parameters, design matrices, residuals, residual plots, and other diagnostic information for a linear mixed-effects model. 3D plotting 3d reasoning random effects. Plotting separate slopes with geom_smooth() The geom_smooth() function in ggplot2 can plot fitted lines from models with a simple structure. plot_model() is a generic plot-function, which accepts many model-objects, like lm, glm, lme, lmerMod etc. ggpubr is a fantastic resource for teaching applied biostats because it makes ggplot a bit easier for students. population d. ) The action “na. Processing is a flexible software sketchbook and a language for learning how to code within the context of the visual arts. Extreme weather increases the risk of large-scale crop failure. But why?! Well, this will become clear if we understand what our interaction effect really means. Here are two suggestions for how to use these images: 1. It helps to know that R has different functions to create an initial graph and to add to an existing graph. lmer and that of priming. 0, Shiny has built-in support for interacting with static plots generated by R’s base graphics functions, and those generated by ggplot2. the random effects slope of each cluster. Plot symbols and colours can be specified as vectors, to allow individual specification for each point. a two-sided linear formula object describing both the fixed-effects and random-effects part of the model, with the response on the left of a ~ operator and the terms, separated by + operators, on the right. extract() function from texreg package) as well as plot_model() function from the sjPlot package. , a "trellis" object). list, print. There are some R packages that are made specifically for this purpose; see packages effectsand visreg, for example. pt = min(length(unique(pred. RANDOM_WALK_2D_SIMULATION, a MATLAB program which simulates a random walk in a 2D region. 37 m and a DBH less than 14 cm) trees, respectively. Scatter Plot; With a scatter plot a mark, usually a dot or small circle, represents a single data point. And then this last point, the residual is positive. • Partial plots and interpretation of effects. This is because the association is nonlinear. The package includes functions for computing various effect size or outcome measures (e. Psychological Methods, 2, 64-78. Main Effects Residual Plots. The lmfunction in R can handle factorial design with fixed effects without taking the special experimental design or the random effects into account. A model for such a split-plot design is the following:. As a language for statistical analysis, R has a comprehensive library of functions for generating random numbers from various statistical distributions. The only reason that we are working with the data in this way is to provide an example of linear regression that does not use too many data points. 
The plotlines generated are not guaranteed to make sense but they do inspire writers by triggering a creative chain of thought. Because there are not random effects in this second model, the gls function in the nlme package is used to fit this model. The Spatial Patterns of Functional Groups and Successional Direction in a Coastal Dune Community. Bayesian random effects meta-analysis of trials with binary outcomes: methods for absolute risk difference and relative risk scales. The analysis based on a random-effects model is shown in Figure 2. For Example: If there were only one random effect per subject (e. 9919) [1] 0. Mixed Models and Random Effect Models. + ( effect expression | groups ) The following are a few examples of specifying random effects. The default is type = "fe", which means that fixed effects (model coefficients. If not, consider a random effects model. The Spatial Patterns of Functional Groups and Successional Direction in a Coastal Dune Community. To calculate the mixed effects limits of agreement, we analysed the paired differences of each device compared with the gold-standard using a mixed effects regression model, including participant as a random effect and activity as a fixed effect, using the nlme package in R software version 3. The dots should be plotted along the line. Students do NOT need to be knowledgeable and/or experienced with R software to successfully complete this course. • There is no built-in quantile plot in R, but it is relatively simple to produce one. Using R to Compute Effect Size Confidence Intervals. random variable). Conspiracy theory definition, a theory that rejects the standard explanation for an event and instead credits a covert group or organization with carrying out a secret plot: One popular conspiracy theory accuses environmentalists of sabotage in last year's mine collapse. Random effects probit and logit specifications are common when analyzing economic experiments. The lme function in thenlme(Pinheiro et al. Add something like + (1|subject) to the model for the random subject effect. plot = TRUE, then partial makes an internal call to plotPartial (with fewer plotting options) and returns the PDP in the form of a lattice plot (i. Note that each point on the plot corresponds to the odds ratio of each level of the fixed effect period relative to period=1. observations independent of time. The syntax for including a random effect in a formula is shown below. This model is a three-level random intercepts model, which splits the variance between lecturers, students, and the residual variance. The odds ratios is simply the exponentials of the regression coefficients. where z 1 and z 2 are Fisher transformations of r, and the two n i 's in the denominator represent the sample size for each study. They must be a representative or random sample. These will be either linear or generalized linear. plot_model() is a generic plot-function, which accepts many model-objects, like lm, glm, lme, lmerMod etc. 3 – Plotting Anomalies. I found the combination of R/ggplot/maps package extremely flexible and powerful, and produce nice looking map based visualizations. Each split plot. population growth c. If you use the ggplot2 code instead, it builds the legend for you automatically. In a random effects model, the values of the categorical independent variables represent a random sample from some population of values. (pdf file) Slides: Mixed Pattern-Mixture and Selection Models for Missing Data (pdf file). 
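A plot()/curve() fragment in this text has been garbled by extraction (spaces inserted around every dollar sign). A cleaned-up, runnable version is sketched below; we assume dat is mtcars and logr_vm a logistic regression of vs on mpg, which matches the variable names in the fragment:

```r
dat <- mtcars
logr_vm <- glm(vs ~ mpg, data = dat, family = binomial)

par(mar = c(4, 4, 1, 1))  # reduce some of the margins so that the plot fits better
plot(dat$mpg, dat$vs)
curve(predict(logr_vm, data.frame(mpg = x), type = "response"), add = TRUE)
```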
Scatter plots’ primary uses are to observe and show relationships between two numeric variables. Infix functions. For details see here Surg. Each whole plot was divided into four split plots, and b= 4 plant densities were randomly assigned to the split plots within each whole plot. Fabio Veronesi, data scientist at WRC plc. Alternative names: split-plot design; mixed two-factor within-subjects design; repeated measures analysis using a split-plot design; univariate mixed models approach with subject as a random effect. is there a significant variation due to the random effects) Test statistic: Chi-square (likelihood ratio test) H 0: µ 1 = µ 2 = … = µ t H 1: µ i ≠ µ j for some i, j in the set 1 … t H 0: σ g 2 = 0 H 1: σ g 2 > 0. Finally, a slight word of warning: our model assumed that the random. Yet, we do have choose an estimator for $$\tau^{2}$$. The plotlines generated are not guaranteed to make sense but they do inspire writers by triggering a creative chain of thought. The upper edge (hinge) of the box indicates the 75th percentile of the data set, and the lower hinge indicates the 25th percentile. population d. Supported model types include models fit with lm(), glm(), nls(), and mgcv::gam(). Change in size (grow shorter or taller). In R, I know how to do it. Feel free to suggest a chart or report a bug; any feedback is highly welcome. This, of course, is a very bad thing because it removes a lot of the variance and is misleading. For example, suppose the business school had 200. Now in the Insert Tab under the charts section click on the surface chart. These will be either linear or generalized linear. Question 8. These plotting functions have been implemented to easier. Because you’re likely to see the base R version, I’ll show you that version as well (just in case you need it). Model I and Model II anova. Following is a scatter plot of perfect residual distribution. The counts were registered over a 30 second period for a short-lived, man-made radioactive compound. These partial terms are often regarded as similar to random effects, but they are still fitted in the same way as other terms and strictly speaking they are fixed terms. Scatter plots: This type of graph is used to assess model assumptions, such as constant variance and linearity, and to identify potential outliers. This section is intended to supplement the lecture notes by implementing PPA techniques in the R programming environment. Plots involving these estimates can help to evaluate whether the random effects are plausibly normally distributed, whether there are extreme values, and whether predictors may have omitted nonlinear effects. lmer and sjp. That is, qqmath is great at plotting the intercepts from a hierarchical model with their errors around the point estimate. The results of the individual studies are shown grouped together according to their subgroup. Let us see how we can use the plm library in R to account for fixed and random effects. Even though the association is perfect, because you can predict Y exactly from X, the correlation coefficient r is exactly zero. Fixed effects model If the effect is the same in all. Each of the above offer different underlying engines and capabilities and therefore choice of package, will dependend on the nature of the data and the desired model. Processing. Because there are not random effects in this second model, the gls function in the nlme package is used to fit this model. 
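The mixed-effects limits-of-agreement analysis described above (participant as a random effect, activity as a fixed effect, fitted with the nlme package) can be sketched as follows; the simulated data frame is ours, only to make the example self-contained:

```r
library(nlme)
set.seed(42)
d <- data.frame(
  id       = factor(rep(1:20, each = 4)),
  activity = factor(rep(c("cycle", "run", "sit", "walk"), times = 20))
)
bias <- c(cycle = 1.5, run = 2.0, sit = 0.0, walk = 1.0)  # made-up activity effects
d$diff <- rnorm(20, sd = 2)[as.integer(d$id)] +           # participant random effect
  bias[as.character(d$activity)] + rnorm(nrow(d))         # residual noise
m <- lme(diff ~ activity, random = ~ 1 | id, data = d)
intervals(m)   # fixed effects and variance components with 95% intervals
```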
McMurry Written specifically as material for CHANCE courses July 24, 1992 This guide is intended to help you begin to use JMP, a basic statistics package,. effects: Plot random effects of model in Bayesthresh: Bayesian thresholds mixed-effects models for categorical data rdrr. Look closely at. It is efficient at detecting relatively large shifts (typically plus or minus 1. Change in size (grow shorter or taller). For forecasting, o R2 matters (a lot!) o Omitted variable bias isn’t a problem! o We will not worry about interpreting coefficients in forecasting models o External validity is paramount: the model estimated using historical data must hold into the (near) future. The protagonist sets out to defeat something that threatens him/her or a group they belong to. The Spatial Patterns of Functional Groups and Successional Direction in a Coastal Dune Community. Marginal Effects (related vignette) type = "pred" Predicted values (marginal effects) for specific model terms. You can also include polynomial terms of the covariates. You will also learn about training and validation of random forest model along with details of parameters used in random forest R package. plot_model() allows to create various plot tyes, which can be defined via the type-argument. Partial dependence plot gives a graphical depiction of the marginaleffect of a variable on the class probability (classification) orresponse (regression). R has a built-in editor that makes it easy to submit commands selected in a script file to the command line. values <- seq(-4,4,. In this post I will explain how to interpret the random effects from linear mixed-effect models fitted with lmer (package lme4). Inference summary(m1) Linear mixed model fit by REML ['lmerMod'] Formula: Biomass ~ Temp + N + (1 + Temp | Site) Data: data REML criterion at convergence: 327. For now, we'll ignore the main effects-even if they're statistically significant. To calculate the mixed effects limits of agreement, we analysed the paired differences of each device compared with the gold-standard using a mixed effects regression model, including participant as a random effect and activity as a fixed effect, using the nlme package in R software version 3. If the points in a residual plot are randomly dispersed around the horizontal axis, a linear regression model is appropriate for the data; otherwise, a nonlinear model is more appropriate. Welcome to the Python Graph Gallery. A First Course in Design and Analysis of Experiments Gary W. ) When both factors are fixed effects, as in this unit, you should look at both profile plots (see Problem 7. Of the remaining two parameters, one can be chosen to draw a family of graphs, while the fourth parameter is kept constant. The model can include main effect terms, crossed terms, and nested terms as defined by the factors and the covariates. n is of length > 1, random effects indicated by the values in sample. In a model with right-hand-side ~ A + B the effects of A are evaluated first, and the effects of B after removing the effects of A. We may try to relate the size of the effect to characteristics of the studies and their subjects, such as average age, proportion of females, intended dose of drug, or baseline risk. All packages except MIXOR can provide estimates of the random effects. frame ( mpg = x ), type = "response" ), add = TRUE ). There is a video tutorial link at the end of the post. (illustrated with R on Bresnan et al. 
" These words begin a report on a statistical study of the effects of logging in Borneo. Below is an example of a forest plot with three subgroups. This model is a three-level random intercepts model, which splits the variance between lecturers, students, and the residual variance. Discussion includes extensions into generalized mixed models and realms beyond. Rags to Riches. Scatter Plot; With a scatter plot a mark, usually a dot or small circle, represents a single data point. In Rangeland Ecology & Management. Next click on Add to specify the plot (see Figure 9-6) and then click Continue. Following is a scatter plot of perfect residual distribution. Therefore, there is significant individual difference in the growth rate (slope). Since 2001, Processing has promoted software literacy within the visual arts and visual literacy within technology. Another example is the amount of rainfall in a region at different months of the year. args = list ( family = "binomial" ), se = FALSE ) par ( mar = c ( 4 , 4 , 1 , 1 )) # Reduce some of the margins so that the plot fits better plot ( dat$ mpg , dat $vs ) curve ( predict ( logr_vm , data. Of the remaining two parameters, one can be chosen to draw a family of graphs, while the fourth parameter is kept constant. We will select the Bonferroni interval adjustment to control the. We try to group the samples based on two feature variables - age and bmi. , the -1 term in the Corr column under Random effects). Each example provides the R formula, a description of the model parameters, and the mean and variance of the true model which is estimated by the regression and observed values. Identification of correlational relationships are common with scatter plots. 02 Residual 2. It is efficient at detecting relatively large shifts (typically plus or minus 1. -You cannot make inferences to a larger experiment. Seasonal effects are apparent along mMDS axis 2 (from winter to spring to summer), while the contrast of rain versus dry was relatively much smaller (Figure 3a). Estimating fixed Effects & Predicting Random Effects For a mixed model, we observe y, X, and Z!, u, R, and G are generally unknown Two complementary estimation issues (i) Estimation of ! and u Estimation of fixed effects Prediction of random effects BLUE = Best Linear Unbiased Estimator BLUP = Best Linear Unbiased Predictor Recall V = ZGZ T + R. The upper left plot in the above figure shows the effect of the median income in a district on the median house price; we can clearly see a linear relationship among them. The term “split plot” derives from agriculture, where fields may be split into plots and subplots. A model for such a split-plot design is the following:. Partial dependence plot. type = "std2" Forest-plot of standardized beta values, however, standardization is done by dividing by two sd (see 'Details'). Random-effects terms are distinguished by vertical bars ("|") separating expressions for design matrices from grouping factors. The table result showed that the McFadden Pseudo R-squared value is 0. The random walk pattern shown in animation 2 indicates problems with the chain. (To reduce the scale of the y-axis, the largest two effects, X4: Direction and X5: Batch, are not shown on the plot. A protagonist is in some way misfortune, usually financially. Quantile Plots • Quantile plots directly display the quantiles of a set of values. None of the above. Fonton N, Atindogbe G, Honkonnou N, and Dohou R. 
http://andrew-algorithm.blogspot.com/2017/02/leetcode-oj-find-mode-in-binary-search.html

## Friday, February 10, 2017
### LeetCode OJ - Find Mode in Binary Search Tree
Problem:
Please find the problem here.
Analysis:
A standard idea is to build a frequency table. Once the table is built, we go through it to find out which values are the winners. In the worst case, where the tree has all distinct elements, this costs extra linear space.
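For contrast, here is a minimal sketch of that frequency-table baseline (my own illustration, not part of the original post; it reuses the TreeNode struct defined in the Code section below):

#include <algorithm>
#include <unordered_map>
#include <vector>

// Baseline: one pass to build a value -> count table, a second pass to
// collect every value whose count equals the maximum. O(n) extra space.
void countValues(TreeNode* node, std::unordered_map<int, int>& freq)
{
    if (node == nullptr) return;
    countValues(node->left, freq);
    freq[node->val]++;
    countValues(node->right, freq);
}

std::vector<int> findModeNaive(TreeNode* root)
{
    std::unordered_map<int, int> freq;
    countValues(root, freq);
    int best = 0;
    for (const auto& kv : freq) best = std::max(best, kv.second);
    std::vector<int> result;
    for (const auto& kv : freq)
        if (kv.second == best) result.push_back(kv.first);
    return result;
}

Note this works for any binary tree; the constant-space trick below is what actually uses the BST ordering.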
Solution:
The key idea is that an in-order traversal of a binary search tree visits the values in sorted order. We build the table as usual, with a single difference - the table only ever contains two entries: the current best entry and the last-value entry. Suppose an element X does not have a winning count; once the in-order traversal moves on to a value Y, X can never gain another occurrence, so we can evict X from the table. Now we have a constant-extra-space algorithm! (Of course, you still need to save all the answers in order to report them, which, in the worst case, takes linear space.)
Code:
#include "stdafx.h"
// https://leetcode.com/problems/find-mode-in-binary-search-tree/
#include "LEET_FIND_MODE_IN_BINARY_SEARCH_TREE.h"
#include <map>
#include <iostream>
#include <sstream>
#include <vector>
#include <string>
using namespace std;
namespace _LEET_FIND_MODE_IN_BINARY_SEARCH_TREE
{
struct TreeNode
{
int val;
TreeNode *left;
TreeNode *right;
TreeNode(int x) : val(x), left(NULL), right(NULL) {}
};
class Solution {
public:
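    // In-order traversal helper: max_count is the best frequency seen so far;
    // last_node / last_count describe the run of equal values currently being scanned.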
void findMode(TreeNode* root, vector<int>& result, int& max_count, int& last_node, int& last_count)
{
if (root == nullptr)
{
return;
}
if (root->left != nullptr)
{
findMode(root->left, result, max_count, last_node, last_count);
}
if (max_count == 0)
{
// Reached exactly once: the first node processed in order (the leftmost)
result.push_back(root->val);
max_count = 1;
last_node = root->val;
last_count = 1;
}
else
{
if (root->val == last_node)
{
last_count++;
}
else
{
last_node = root->val;
last_count = 1;
}
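            // The current run either ties the winners (append) or beats them (reset)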
if (last_count == max_count)
{
result.push_back(last_node);
}
else if (last_count > max_count)
{
result.clear();
result.push_back(last_node);
max_count = last_count;
}
}
if (root->right != nullptr)
{
findMode(root->right, result, max_count, last_node, last_count);
}
}
vector<int> findMode(TreeNode* root)
{
vector<int> result;
int max_count = 0;
int last_node = 0;
int last_count = 0;
findMode(root, result, max_count, last_node, last_count);
return result;
}
};
};
using namespace _LEET_FIND_MODE_IN_BINARY_SEARCH_TREE;
int LEET_FIND_MODE_IN_BINARY_SEARCH_TREE()
{
Solution solution;
TreeNode a(1);
TreeNode b(2);
TreeNode c(2);
a.left = nullptr;
a.right = &b;
b.left = &c;
b.right = nullptr;
c.left = nullptr;
c.right = nullptr;
vector<int> result = solution.findMode(&a);
for (size_t i = 0; i < result.size(); i++)
{
cout << result[i] << endl;
}
return 0;
}
https://tex.stackexchange.com/questions/471647/defining-a-scaling-postfix-operator | # Defining a scaling postfix operator
I would like to find a way to scale a one-place defined (postfix) operator to match its surroundings. I would like it if this behavior imitated what can be accomplished using \left and \right.
An example will help make this more clear. This code:
\documentclass{amsart}
\newcommand{\pda}{\mathord{\downarrow}}
\begin{document}
$t\pda^X_Y$
$\left[t\pda^X_Y\right]^a_b\pda^Y_Z$
\end{document}
Produces the following output (screenshot of the rendered formulas not shown):
I'm happy with the size of the first arrow, but I would like a way to scale the second arrow so that it matches the scale of the bracketed bit to its left.
Because it's Stack Exchange and because I haven't finished my coffee yet so am still mildly grumpy, I feel I should include these disclaimers: no, I'm not interested in using some completely different notation for this that you think looks better. I promise, I know what I'm doing and I need something that looks (essentially) like this. No, I won't define 'match' more carefully for you. If you're not sure what I mean by 'match', perhaps it's best to just leave the question for someone else. Yes, I'm happy to use other packages or to define my arrows differently. Finally, 'pda' is my name for that thingy because it's a Postscripted DownArrow.
What I would most love is if I could write something like
\match\pda^Y_Z
and have tex look, see the '\match' and think 'ah yes, I should scale this thing', then actually scale the thing.
When TeX has processed \right, it stores the result in a math list and goes along: the information about the size of the constructed list is not available.
You can do it by redefining \left, but I'm not sure you really want to do it.
\documentclass{article}
\usepackage{xparse}
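% Argument spec below: u{\right} grabs every token up to the next \right,
% m grabs the closing delimiter, and e{^_} picks up optional ^ and _ arguments.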
\NewDocumentCommand{\shayleft}{u{\right}me{^_}}{%
\shayleftaux{#1}{#2}{#3}{#4}%
}
\ExplSyntaxOn
% remember the meaning of \left
\cs_new_eq:NN \shay_left: \left
\NewDocumentCommand{\shayleftaux}{mmmm}
{
\shay_right_or_middle:nnnn { #1 } { #2 } { #3 } { #4 }
}
\cs_new_protected:Nn \shay_right_or_middle:nnnn
{
\peek_meaning:NTF \pda
{
\shay_doubleleft:nnnn { #1 } { #2 } { #3 } { #4 }
}
{
\shay_singleleft:nnnn { #1 } { #2 } { #3 } { #4 }
}
}
\cs_new_protected:Nn \shay_doubleleft:nnnn
{
\shay_left: . \kern-\nulldelimiterspace
\shay_left: #1
\right#2
\tl_if_novalue:nF { #3 } { \sp{#3} }
\tl_if_novalue:nF { #4 } { \sb{#4} }
\bool_set_true:N \l_shay_right_bool
}
\cs_new_protected:Nn \shay_singleleft:nnnn
{
\left#1
\right#2
\tl_if_novalue:nF { #3 } { \sp{#3} }
\tl_if_novalue:nF { #4 } { \sb{#4} }
\bool_set_false:N \l_shay_right_bool
}
\NewDocumentCommand{\pda}{}
{
\bool_if:NTF \l_shay_right_bool
{
\right \downarrow
}
{
\mathord{\downarrow}
}
}
\bool_new:N \l_shay_right_bool
\cs_set_eq:NN \left \shayleft
\ExplSyntaxOff
\begin{document}
$t\pda^X_Y$
$\left[t\pda^X_Y\right]^a_b\pda^Y_Z$
\end{document}
https://physics.stackexchange.com/questions/113557/neutron-stars-and-black-holes | # Neutron stars and black holes
The official limits for a neutron star are $1.4 - 3.2\;M_\odot$. But I read that the limit depends on the particular structure of a star, so one needs that structure to estimate which mass limit applies. I also read that neutron stars with less than $1.4\;M_\odot$ have been observed. Given this information, I wonder if we can be sure that our Sun definitely does not have enough mass to become a neutron star. Are there absolute limits (without the need for further information) for a star to become a neutron star or a black hole?
• Can you provide a link to observations of neutron stars with a mass of less than 1.4 times the solar mass. May 20, 2014 at 11:25
• John, see table 1 of The Nuclear Equation of State and Neutron Star Masses. There are several examples. May 20, 2014 at 12:02
Observed neutron stars range from $1.0 \pm 0.1 M_{\odot}$ to $2.7 \pm 0.2 M_{\odot}$ according to table 1 of The Nuclear Equation of State and Neutron Star Masses, which lists dozens of examples. Keep in mind that the mass of the neutron star is typically substantially smaller than the mass of its progenitor star; late in the stellar life cycle a lot of mass is blown away. For instance, a star that goes through an AGB phase may lose >50% of its mass. So our $1M_\odot$ Sun is likely to end up as a stellar remnant with $M < 1M_\odot$, probably a white dwarf.
According to Structure of Quark Stars, the mass is the only parameter to consider for neutron stars (but not hypothetical quark stars), although I would think rotation rate would be a factor.
This reference also states that neutron stars can be as small as $0.1 M_{\odot}$, but this does not imply that the sun will actually become a neutron star.
According to Possible ambiguities in the equation of state for neutron stars, it is the theory (equation of state) of neutron stars that is causing the current uncertainty about the limits of neutron stars.
Also, it is unknown whether or not neutron stars may become quark stars before becoming black holes. There is a term "quark nova" for such a hypothetical event.
• +1, and added a mention of the distinction between the stellar remnant mass and the stellar progenitor mass, which seems to be a point of confusion in the question. May 20, 2014 at 19:32
• Yes, it is often not appreciated that the smallest neutron stars are less massive than what many people think is the "Chandrasekhar limit". A new, precise measurement exists for a neutron star at $1.174 \pm 0.004 M_{\odot}$ arxiv.org/abs/1509.08805 This is still a little above the Chandrasekhar mass for degenerate iron under GR conditions. Oct 14, 2015 at 11:31
Yes, there are absolute limits (with some theoretical uncertainty) for the mass of a progenitor star that can become a neutron star or black hole and the Sun is well below that limit.
The other answers here talk about the range of masses of neutron stars, but do not directly answer the question you pose: the answer arises from considerations of what happens in the core of a star during the course of its evolution.
In a star of similar mass to the Sun, core hydrogen burning produces a helium ash. After about 10 billion years, the core is extinguished and hydrogen burning in a shell results in the production of a red giant. The red giant branch is terminated with the onset of core helium burning, leaving a core ash of carbon and oxygen via the triple alpha process. After the core is extinguished again, there is a complicated cycle of hydrogen and helium burning in shells around the core. During this phase, the star swells enormously to become an asymptotic red giant branch star (AGB). AGB stars are unstable to thermal pulsations and lose a large fraction of their envelopes via a massive wind. The Sun is expected to lose about $0.4-0.5M_{\odot}$ at this time.
Now we get to the crux of the answer. What is left behind is a core of carbon and oxygen, with maybe a thin layer of hydrogen/helium on top. With no nuclear reactions going on, this core contracts as far as it is able and cools. In a star governed by "normal" gas pressure, this process would continue until the centre was hot enough to ignite carbon and oxygen burning (a higher temperature is needed to overcome the greater Coulomb repulsion between more proton-rich nuclei). However, the cores of progenitor stars with masses $<8M_{\odot}$ are so dense that electron degeneracy pressure takes over. The electrons in the gas are compressed so much that the Pauli Exclusion Principle results in all the low energy states being filled completely, leaving many electrons with very high energies and momenta. It is this momentum that provides the pressure that supports the star. Crucially, this pressure is independent of temperature. This means that the core can continue to cool without contracting any further. As a result it does not get any hotter in the centre and fusion never restarts. The final fate of stars like the Sun, and anything with a main sequence mass of $<8M_{\odot}$ is to be a cooling white dwarf. The figure of $8M_{\odot}$ is uncertain by about $\pm 1M_{\odot}$, because the details of mass loss during the AGB phase are not completely solved theoretically and it is difficult to empirically estimate the progenitor masses of white dwarfs.
Stars more massive than this have cores which do contract sufficiently to begin further stages of fusion, resulting in the production of an iron/nickel core. Fusion cannot extract any more energy from these nuclei, which are at the peak of the binding energy per nucleon curve, and thus the star ultimately collapses, since its core mass is greater than can be supported by electron degeneracy pressure. It is this collapsing core which forms a neutron star or black hole.
An interesting caveat to my answer is that there may be an evolutionary route for a star like the Sun to become a neutron star if it were in a binary system. Accretion from a companion might increase the mass of the white dwarf star, pushing it above the Chandrasekhar mass - the maximum mass that can be supported by electron degeneracy pressure. Though in principle this might form a neutron star, it is considered that a more likely scenario is that the entire star will detonate as a Type Ia Supernova, leaving nothing behind.
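For reference (a standard textbook figure, not part of the original answer): the Chandrasekhar mass is roughly $M_{\rm Ch} \simeq 5.83\,\mu_e^{-2}\,M_{\odot} \approx 1.46\,M_{\odot}$ for a mean molecular weight per electron $\mu_e = 2$, which is the ~$1.4\,M_{\odot}$ figure that appears throughout this thread.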
There are two questions here, namely about the limits on neutron star masses, and about the possibility of our sun becoming one. I'll try to argue that they are different questions, viz. the first about the stability and the second about the formation of such objects.
1) DavePhD's reference in the comments (here, for completeness) answers it completely. There is a lot of room for neutron star masses, because it depends intrinsically on the equation of state of nuclear (and possibly sub-nuclear) matter. Since we don't know the correct equation of state, it is hard to give strict boundaries. Without an equation of state one could have a mass as large as desired, just by increasing radius. So qualitatively the best one can do depends on the interplay between mass and radius, or density if you will.
The strictest limit comes from the Schwarzschild radius: if you make a star too dense, it generates an event horizon and collapses into a black hole. Beyond this, one notes that the speed of sound rises with density, so too dense a star would have a speed of sound greater than the speed of light, violating causality. This constrains the possible equations of state. The upper bound of about 3.5 solar masses comes from this consideration. You'll find all this more deeply discussed in the aforementioned paper. The summary is in Figure 3, page 51. I am completely ignorant of an analogous argument for lower bounds on the masses that uses only basic physical principles (in spite of my first, incorrect answer that related it to angular momentum; Rob Jeffries kindly corrected me in the comments), so I have deleted the incorrect previous part.
2) Somewhat independently of the previous discussion, we can be pretty sure that the Sun will never become a neutron star, no matter which equation of state is correct. This is because the gravitational collapse of a star is a highly non-linear process that, besides the different nuclear fusion cycles, generates shock waves. Therefore it does not proceed adiabatically; on the contrary, these processes shed most of a star's mass. So to produce a neutron star we need to start with a very heavy star, typically of the order of tens of solar masses. This is the reason we attribute neutron star formation to supernova events.
• The boundary for the production of a neutron star - which is the key point of the question - is around $8M_{\odot}$. The minimum mass of a neutron star has little to do with its rotation. physics.stackexchange.com/questions/143166/… Jun 2, 2015 at 8:43
• @RobJeffries, thanks for the 8 solar masses bound, but if you noticed, I only mentioned rotation as a hypothesis. If you use the lowest angular momentum measured you can get a lower bound (which the paper describes), but it is clear that if you assume zero angular momentum this bound would not be applicable. I was just summarizing the paper. Without an equation of state it is only possible to get lower bounds with angular momentum. Hopefully the question you linked will complement this discussion with considerations from the equation of state Jun 2, 2015 at 22:40
• don't know what you mean by "papers". I glanced at the Lattimer review. It (in section 2.1) does not discuss the minimum mass in terms of rotation. Low mass neutron stars, if they exist, would be large, not "tiny". The figure on p.51 has a curve representing a stability line for something rotating as fast as a millisecond pulsar. This is the maximum rotation ever observed, not the minimum. Rotation does not determine the minimum possible mass for a neutron star. Jun 2, 2015 at 23:11
• @RobJeffries, you are entirely correct, of course, I'm very sorry about this. I wrote incorrectly "lowest" instead of "highest". According to my notes from a Friedman lecture the argument goes like this: if you try to make a low mass, small radius neutron star and put angular momentum on it, then you get an instability. But low mass, large radius stars are subject to lots of non-equilibrium processes. Therefore one does not expect neutron stars with arbitrarily low masses. It is not a bound but a heuristic guide. I'll rewrite later to reflect that, thank you Jun 2, 2015 at 23:36
• For a given specific angular momentum, the ratio of centrifugal force to gravity scales as 1/r. Thus the effect of rotation on the structure of low mass neutron stars, which would have radii of ~200 km, will be smaller than standard neutron stars. Jun 3, 2015 at 6:08
Black hole existence was predicted by solving Einstein’s general relativity equations. Mathematically, the equations show that it is possible to have a singularity at the center of a black hole. The meaning of this singularity is that the mass of a black hole is confined to an infinitely small point at its center, thus the density at this point is infinite. The main problem with this conclusion is that, at the singularity, the laws of physics don’t work. This is what makes the black hole a mystery and gives rise to mind-boggling theories such as that black holes are portals to other Universes, or that it is possible to travel in time.
I claim that although this solution is mathematically possible, it does not have a physical meaning. (Note: the same argument was also expressed by… Einstein). I postulate that there must be a limit to the maximal density of bodies in the Universe.

The current prevailing theory is as follows: a black hole is created when a star consumes its fuel and then gravitationally collapses. The end of this process depends on the mass of the star. If the mass of the star is 1.39 solar masses (designated the Chandrasekhar limit), gravity is strong enough to combine protons and electrons to make neutrons, thus creating a neutron star. The neutrons and residual protons are packed in the neutron star at their maximum density. If the mass of the star is between 1.5 and 3 solar masses, gravity becomes strong enough to break the nucleons into their constituents (quarks and gluons), and then the star becomes a black hole that has a singularity point at its center with infinite density.

Partially, I concur with the current theory. Specifically:

1) The origin of a neutron star and a black hole is the gravitational collapse of a star.

2) The final mass of the neutron star or the black hole relates to the initial mass of the star.

Neutron stars have been observed in the Universe. The neutron star contains nucleons (neutrons and protons) that are packed to the maximum density possible in the Universe. The density of a neutron star is 3.7x10^17 to 5.9x10^17 kg/m^3, which is comparable to the approximate density of an atomic nucleus of 3x10^17 kg/m^3. The surface temperature of the neutron star is extremely high, ~600000K. https://en.wikipedia.org/wiki/Neutron_star
As for the black hole: I claim that the mechanism that creates a neutron star is also applicable to a black hole. I mean that a black hole is created not by compressing its mass further, thus breaking nucleons into their fundamental constituents, as postulated by the current theory, but rather by adding nucleons to the nucleus at the maximum density. There are two reasons for my claim. The first is theoretical: Pauli's exclusion principle. The exclusion principle forbids two identical fermion particles to occupy the same place at the same time. If the size of the black hole becomes infinitely small, then the nucleons must overlap each other, contrary to the Pauli exclusion principle.
The second reason is experimental. I note two known experiments. The first experiment measures the force between nucleons as a function of the distance between them. In tests done in particle colliders, it was found that the force between two nucleons is as described in https://en.wikipedia.org/wiki/Nuclear_force. In this graph, the force (in newtons) is plotted against range - the distance between two nucleons (fm). The graph shows that for ranges smaller than 0.8 fm, the force becomes a large repulsive force. The conclusion is that two nucleons cannot be squeezed into the same space.
The second experiment was recently done by nuclear physicists at Jefferson Lab. They measured the distribution of pressure inside the proton. The findings show that the proton's building blocks, the quarks, are subjected to a pressure of 100 decillion pascals ($10^{35}$ Pa) near the center of a proton, which is about 10 times greater than the pressure in the heart of a neutron star. This means that the outward-directed pressure from the center of the proton is greater than the inward-directed pressure near the proton's periphery, and therefore a neutron star cannot collapse. https://www.jlab.org/node/7928
The question now is how come black holes are not directly observed, while neutron stars are observed. My answer is: the visibility depends on the relation between the physical radius of the nucleus and the Schwarzschild radius. If a celestial body has a nucleus radius that is bigger than its Schwarzschild radius, it will be observed. On the other hand, if a celestial body has a nucleus radius that is smaller than its Schwarzschild radius, it will be hidden. This is exemplified in the following calculations:
The mass limit between a neutron star and a black hole.
There is a limit to the mass of a neutron star. At this limit, if more mass is added to the neutron star it will become a black hole. The limit mass can be found by equating the Schwarzschild radius to the radius of the nucleus of the neutron star.
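(The post's worked numbers appear to have been lost in extraction; the following reconstruction is mine, assuming a uniform density at the quoted upper nuclear value $\rho \approx 5.9\times10^{17}\,\mathrm{kg/m^3}$.) Setting the Schwarzschild radius $r_s = 2GM/c^2$ equal to the constant-density radius $r = (3M/4\pi\rho)^{1/3}$ and solving for $M$ gives

$$M = \sqrt{\frac{3c^{6}}{32\pi G^{3}\rho}} \approx 1.1\times10^{31}\,\mathrm{kg} \approx 5.6\,M_{\odot},$$

in the same ballpark as the ~5.25 solar-mass figure in the summary below; the exact number depends on the density assumed.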
This result is in good agreement with observations. The smallest black hole observed in the Universe is XTE_J1650-500. Its mass is estimated to be ~5-10 Sun masses.
https://en.wikipedia.org/wiki/XTE_J1650-500
To sum up:
1) A black hole is basically a neutron star. Like a neutron star (and also the nucleus of an atom) it is compressed to the maximum density possible in the Universe.

2) A black hole must have a mass that is bigger than ~5.25 Sun masses. At this mass the physical radius of the black hole is smaller than its Schwarzschild radius.

3) It is possible that the temperature of the black hole is higher than the temperature of a neutron star. However, this temperature cannot be measured by an observer outside the Schwarzschild radius.

4) It can be shown that the gravity of the neutron star and the gravity of the Milky Way's black hole are 2x10^12 m/sec^2 and 2.6x10^14 m/sec^2 respectively.

5) The physical conditions of 3) and 4) show that a black hole cannot be a portal to other Universes.
• You have not understood what the Pauli Exclusion principle is. Neither are your statements about the force between nucleons preventing collapse correct. In GR, pressure is a source of gravity. Jun 24, 2018 at 20:41
• Rob, Relating to the Pauli Exclusion principle: I refer you to forbes.com/sites/startswithabang/2018/06/13/… . Ethan Siegel explains the principle very clearly. As for the forces between nucleons, I refer to the Reid potential. en.wikipedia.org/wiki/Nuclear_force . Analyzing Reid's formula shows that at r=0 the potential, as well as the force between nucleons, becomes infinite. Jun 25, 2018 at 13:21
• The Forbes article gets its explanation of the PEP right. Your version - "The exclusion principle forbids two identical fermion particles to occupy the same place at the same time" - is not right. The second point refers to the fact that in GR, even if the pressure approaches infinity, this will still not stop the star collapsing, because it produces an infinite curvature of space. Any attempt to understand the equilibrium or instability of neutron stars has to use general relativity - in particular the Tolman-Oppenheimer-Volkoff equation of hydrostatic equilibrium. Jun 25, 2018 at 13:43
https://math.stackexchange.com/questions/330948/why-the-nontrivial-nullspace-of-a-functional-has-codimension-1 | Why the nontrivial nullspace of a functional has codimension 1?
The nullspace of a linear functional that is not $$\equiv 0$$ is a linear subspace of codimension $$1$$.
I don't understand this statement on page 57 of Functional Analysis (Peter Lax). Does it mean the dimension of the nullspace of a linear functional is either zero or the dimension of the functional's domain minus one? I don't see why that is necessarily true.
Added: Thank you all for your valuable comments and answers. I didn't realize that it's wrong to interpret "The nullspace of a linear functional that is not $$\equiv 0$$" as the nullspace (of a linear functional) that is not $$\equiv 0$$ until I saw the answers.
• The image of a functional has dimension $1$. Mar 15, 2013 at 5:33
• Do you know what codimension means for a subspace of an infinite-dimensional vector space? Mar 15, 2013 at 5:36
• @QiaochuYuan: No, I don't :-( Mar 15, 2013 at 5:51
• @Metta: it means the dimension of the quotient space. Now this is just an application of the first isomorphism theorem. Mar 15, 2013 at 5:55
For simplicity, suppose $$X$$ is a vector space over $$\mathbb{R}$$, and $$f$$ a linear functional on $$X$$.
If $$f=0$$, then $$\ker f = X$$, so the codimension is zero.
If $$f \neq 0$$, then there is some $$x_0$$ such that $$f(x_0) \neq 0$$. Now consider the quotient space $$Q={X}/{\ker f}$$ (ie, two points $$x_1,x_2$$ are equivalent iff $$f(x_1) = f(x_2)$$, which basically 'flattens' $$X$$ 'down' to the values of $$f$$, ie $$\mathbb{R}$$).
Pick some $$q \in Q$$; then we must have $$q = \{y\}+\ker f$$ for some $$y$$. By linearity, $$f(y+\frac{-f(y)}{f(x_0)}x_0) = f(y) - \frac{f(y)}{f(x_0)}f(x_0) = 0$$, so $$y - \frac{f(y)}{f(x_0)}x_0 \in \ker f$$ and hence $$q = \frac{f(y)}{f(x_0)}(\{x_0\}+ \ker f)$$. Hence $$\{ \{x_0\}+ \ker f \}$$ is a basis for $$Q$$, so $$\ker f$$ has codimension one.
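A concrete instance (my illustration, not part of the original answer): take $$X=\mathbb{R}^3$$ and $$f(x,y,z)=x$$. Then $$\ker f=\{(0,y,z): y,z\in\mathbb{R}\}$$, and every coset equals $$t(\{(1,0,0)\}+\ker f)$$ with $$t$$ the first coordinate of any representative, so the quotient is one-dimensional and $$\ker f$$ has codimension one.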
• $f(y+\frac{-f(y)}{f(x_0)}x_0) = 0$ could you clarify why this is true? Mar 29, 2021 at 20:51
• I understand that $f(0)=0$ I guess I am just confused why we are able to say that $y=\frac{f(y)}{f(x_0)}x_0$. Mar 29, 2021 at 20:59
• @NewbieMather You can't. But since $y+\frac{-f(y)}{f(x_0)}x_0 \in \ker f$, you can write $y = \frac{f(y)}{f(x_0)}x_0 + k$, where $k \in \ker f$ and so you can write $q= \frac{f(y)}{f(x_0)} \{ x_0 \} + \ker f$. Mar 29, 2021 at 21:05
• I guess I am not sure why the above implies that $y-\frac{f(y)}{f(x_0)}x_0\in \ker f$. Mar 29, 2021 at 21:08
• @copper.hat Please take a look at this question. Mar 29, 2021 at 21:09
If your linear functional is $f\colon V \to \mathbb R$ (or whatever field you're working over) it means the nullspace has dimension $\dim V$ if $f = 0$ and $\dim V - 1$ otherwise.
The reason why is the rank-nullity theorem: $\operatorname{rank}f + \operatorname{nullity}f = \dim V$. The rank of $f$ is $0$ if $f = 0$. If $f \neq 0$ then the rank is $1$.
Edit: As Yuan points out the above argument only works in the finite dimensional case. For the infinite dimensional case take a basis of the nullspace and extend to a basis of $V$. You can add at most $1$ additional vector because if $f(v) = a$ and $f(w) = b$ then $f(bv - aw) = 0$.
• The rank-nullity theorem doesn't apply if $\dim V$ is infinite, but the statement is still true in that case. Mar 15, 2013 at 5:36
• @QiaochuYuan: Thanks, good catch :) I was assuming finite dim because of the wording of the question.
– Jim
Mar 15, 2013 at 5:40
https://gamedev.stackexchange.com/questions/53449/time-of-day-lighting-day-cycle | # Time of Day Lighting / Day Cycle
I'm trying to implement a simple "lighting" system that alters a light value between 0.0 and 1.0; 1.0 is midday and 0.0 is total blackness.

Is there any good information on this particular subject, or is it something you have to develop yourself and adjust to your game in particular? What I'm probably looking for is an algorithm that is flexible and keeps working when values like the midday point change.

My algorithm uses 12:00 / 12 PM as its center point, and at that time the lighting is 1.0. What I'm trying to do is "change" the midday point to any value, say 10, and have it still work the same, and perhaps also be able to alter the number of hours the midday lasts. Perhaps the lighting should rise and fall exponentially, so that between night and 4 it rises slowly, between 4 and 8 it rises much faster, and at 10 it reaches its high point until 14, when it starts to fall again in the reverse order.

My simple algorithm only calculates a value between 0.0 and 1.0, so any good info or suggestions on how to improve this method, or any other information regarding this subject, would be greatly appreciated.
My algorithm goes as follows:
If (CurrentHour < (HoursPerDay / 2))
Lighting = CurrentHour / (HoursPerDay / 2)
Lighting += (CurrentMinute / MinutesPerHour) * (1 / (HoursPerDay / 2))
Else
Lighting = 1 - ((CurrentHour / (HoursPerDay / 2)) - 1)
Lighting -= (CurrentMinute / MinutesPerHour) * (1 / (HoursPerDay/2))
MIN_VALUE = 0.3f;
Max(MIN_VALUE, Min(Lighting, 1f))
or in C# code:
public float CalculateLighting(uint hour, uint minute, uint minutesPerHour, uint hoursPerDay)
{
    // Work in float: with uints, hour / (hoursPerDay / 2) truncates to 0 or 1.
    float halfDay = hoursPerDay / 2f;
    float lighting;
    if (hour < halfDay)
    {
        lighting = (hour / halfDay) * lightingMax;
        lighting += ((minute / (float)minutesPerHour) * (1f / halfDay)) * lightingMax;
    }
    else
    {
        lighting = lightingMax - (((hour / halfDay) * lightingMax) - lightingMax);
        lighting -= ((minute / (float)minutesPerHour) * (1f / halfDay)) * lightingMax;
    }
    lighting = MathHelper.Clamp(lighting, lightingMin, lightingMax);
    return lighting; // the original snippet was missing this return
}
This way, when the time is 6:00 AM the value is 6/12 = 0.5 (and likewise for 12+6 = 18:00), and I clamp the result so I get a value between 0.3 and 1.0, or change that bound to whatever; in case you want a darker winter season, for example, you can adjust that value. This is, however, a very simple implementation of this concept.

Some features I'd like to bring into it:
1. When is the "midday" and how long is it? Say it's 4 hours between 10-14. During this time it should always be 1.0 in lighting.
2. Change the midday point to any time, or remove it completely. During dark nights in a game, perhaps the sun shouldn't rise fully.
3. "Somehow" make it increase slowly and decrease quickly, using a separate value for each, so the sun rises slowly but falls rather quickly afterwards; that is, rise exponentially and fall exponentially faster.
Just some ideas I'm looking for. It doesn't have to be realistic, just a sense that the weather can change. This value should be a value between 0.0 and 1.0 or adjustable (for extra bright days or something) due to the fact that it can be used to light the entire scene. A reference point to how bright it is in the game world.
How can I best extend and improve my algorithm, and perhaps implement new features?
Any suggestions are appreciated or some information on good implementation of a Day-Cycle system.
My guess would be that daylight is somehow related to a sine wave rather than a line or exponential curve. For example,
public float CalculateLighting(float hour, float minute,
uint minutesPerHour, uint hoursPerDay)
{
    float time = (hour + (minute / minutesPerHour)) / hoursPerDay;
    float light = lightingMax * (float)Math.Sin(2 * Math.PI * time);
    return MathHelper.Clamp(light, lightingMin, lightingMax);
}
Changing whether the light is centered on noon — you're looking for “solar noon”. To change this you can add a constant to time.
Changing how much the sun rises — a bit more complicated but there are some wikipedia articles about it. Or if you want to be super precise, look at the demo and source of this page.
• Also what I was thinking: that there's a sin/cos way to do this, and that light change at sunrise/sunset is not linear as it's based on light going through progressively more and more atmosphere. – Tim Holt Apr 7 '13 at 19:40
• I tried using this method in a for loop and the results are kind of weird: CalculateLighting(i, 0, 60, 24) with (lightingMax = 1, lightingMin = 0.3) gives 0-1: 0.3, 2: 0.5, 3: 0.7, 4: 0.8, 5: 0.9, 6: 1, and then from 6 AM it goes down again until 11 o'clock, where it stays at 0.3 for the remainder of the "day". I guess it's a start and it needs some tuning. It clearly needs to be something similar: a wave, a curve that increases, stays the same and decreases. I don't think that the sun should be dark at 11. :) – Deukalion Apr 8 '13 at 2:15
• Yes, it's “centered” at the wrong point (at 6am); you have to add a constant to time to recenter it where you want. Probably -½π; I'm not sure. – amitp Apr 9 '13 at 2:15
• Changing Math.PI to Math.PI * 0.5d works, but what if I want to center it at some other time? Also, the question remains how to hold the same value over some range of hours, preferably without any if statements, just an algorithm that works it in. Or perhaps add a constant alongside lightingMin and lightingMax so it scales the max value; say I want a value between 0.0 and 2.0, then feed that into the constant, like in my method. – Deukalion Apr 9 '13 at 8:52
• Perhaps calculate so that (HoursPerDay / 2) - MiddayHours (2) is a reference point to when the value should be max and (HoursPerDay/2) + MiddayHours(2) it should start to decrease, so always set to 1*lightningMax between those hours or otherwise decrease the value. – Deukalion Apr 9 '13 at 9:14
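Pulling the comment thread together, here is one hedged sketch (my own, not from the answer above) of a sine curve with a configurable solar noon and a flat midday plateau: shift the phase so the peak lands on solarNoon, scale the wave up slightly, and clamp, which pins the value at the maximum for a window around noon. The names solarNoon and plateauHours are mine.

// Sketch only. Requires plateauHours < hoursPerDay / 2 so the cosine stays positive.
public static float CalculateLighting(float hour, float minute,
    float minutesPerHour, float hoursPerDay,
    float solarNoon,     // hour at which light peaks, e.g. 12f or 10f
    float plateauHours,  // how long the value stays pinned at the max, e.g. 4f
    float lightingMin, float lightingMax)
{
    float time = (hour + minute / minutesPerHour) / hoursPerDay; // 0..1
    float noon = solarNoon / hoursPerDay;
    // Cosine peaks when time == noon; range -1..1.
    float wave = (float)Math.Cos(2.0 * Math.PI * (time - noon));
    // Overshoot the wave so that clamping creates a plateau around noon.
    float overshoot = 1f / (float)Math.Cos(Math.PI * plateauHours / hoursPerDay);
    float t = MathHelper.Clamp(wave * overshoot, -1f, 1f);
    // Remap -1..1 onto lightingMin..lightingMax.
    return lightingMin + (t + 1f) * 0.5f * (lightingMax - lightingMin);
}

With hoursPerDay = 24, solarNoon = 12 and plateauHours = 4, the value holds at lightingMax between 10:00 and 14:00 and falls off smoothly on either side, with no if statements.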
In a previous app I had to work out when the sun goes down, based on the GPS location. Take a look; maybe it will serve your purpose.

https://github.com/jeancaffou/Analemma (it's in Java, though).

What it does is calculate the time of sunrise/sunset for a given GPS coordinate (only latitude is needed; you can hardcode it if you want to simulate a specific place on Earth).

This way you can get when the sunrise and sunset are going to take place, and from that you can calculate your light value:
curTime = currentTime;
sunset = sunsetForDate(curTime.year,curTime.month,curTime.day);
sunrise = sunriseForDate(curTime.year,curTime.month,curTime.day);
if(currentTime >= sunrise && currentTime <= sunset)
{
// daylight
} else
{
// night
}
The way you calculate the value is totally up to your needs. You could say the light value is 1 at the midpoint between sunrise and sunset, i.e. sunrise + (sunset - sunrise)/2, and 0 when the current time is greater than sunset + SOME_CONSTANT or less than sunrise - SOME_CONSTANT.
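For what it's worth, a minimal sketch of that mapping (my own illustration, not taken from the linked repo; TWILIGHT is a name I made up):

const float TWILIGHT = 1.5f; // ramp duration in hours around sunrise/sunset
static float LightAt(float now, float sunrise, float sunset)
{
    float up   = MathHelper.Clamp((now - (sunrise - TWILIGHT)) / TWILIGHT, 0f, 1f);
    float down = MathHelper.Clamp(((sunset + TWILIGHT) - now) / TWILIGHT, 0f, 1f);
    return Math.Min(up, down); // 0 at night, 1 between sunrise and sunset
}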
• This seems good if you wish to make a simulation, but since I'm only looking for a simple system that is flexible enough perhaps not the best implementation. I just need to calculate a value that can used to set the light to the entire scene, and the priority is to get a single day working and then later perhaps add extra features. – Deukalion Apr 8 '13 at 3:09
I would suggest using the logistic function to simulate the sunrise and sunset.
float t <- time of day from 0.00 to 23.99
float rise <- time of sunrise
float set <- time of sunset
float f <- factor to vary length of sunset and sunrise
float max <- max. intensity of light during the day
float min <- min. intensity of light during the day
if(t > 0 && t < 12)
lighting = P((t - rise) * f)
else
lighting = 1.0 - P((t - set) * f)
lighting = min + lighting * (max - min)
You can achieve a yellowish/reddish light at sunrise and sunset by calculating each of the RGB channels of your light separately. The red channel's sunrise should then start earlier, and its sunset end later, than those of the other channels.
• Looking at the curve it seems promising, is it "rising" from 0-12 and falling from 12-24? But my only question is: how do I know what P is? If I want to implement it and look at the results. – Deukalion Apr 10 '13 at 2:17
• Also, looking at the "Double logistic function" is perhaps something to look at also because it rises, stays and rises again. Perhaps if it were possible to rise, stay and fall with that curve. – Deukalion Apr 10 '13 at 2:19
• according to the wiki article: P(x) = 1 / ( 1 + e^-x). The fact that it seems to reach 0 at -6 and 1 at 6 is just coincidence, it is only approaching but never reaching 0 and 1 – Dirk Apr 10 '13 at 8:02
• For the double logistic function I don't think it is possible… – Dirk Apr 10 '13 at 8:03
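Putting the answer and the comments together, a runnable version might look like this (my sketch; the rise, set and f values are illustrative):

// The logistic function from the comment: P(x) = 1 / (1 + e^-x).
static float P(float x) { return 1f / (1f + (float)Math.Exp(-x)); }

// t = time of day in hours (0..24).
static float LogisticLighting(float t, float rise, float set,
                              float f, float min, float max)
{
    float lighting = (t < (rise + set) / 2f)
        ? P((t - rise) * f)       // morning half: rises through the sunrise
        : 1f - P((t - set) * f);  // evening half: falls through the sunset
    return min + lighting * (max - min);
}

For example, LogisticLighting(t, 6f, 20f, 2f, 0.3f, 1f) climbs around 06:00, holds near the maximum through midday, and falls around 20:00.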
https://byjus.com/rs-aggarwal-solutions/rs-aggarwal-class-10-solutions-chapter-3-linear-equations-in-two-variables-exercise-3-4/ | # RS Aggarwal Solutions Class 10 Ex 3D
Question 1: 3x – 5y – 19 = 0, -7x + 3y + 1= 0
Solution:
The given equations are.
3x – 5y – 19 = 0 …..(1)
-7x + 3y + 1= 0 …..(2)
Multiplying (1) by 3 and (2) by 5, we get,
9x – 15y = 57 …..(3)
-35x + 15y = -5 …..(4)
Adding (3) and (4), we get
-26x = 52 $\Rightarrow$ x = -2
Substituting x = -2 in (1), we get
(3)(-2) – 5y = 19 $\Rightarrow$ -6 – 5y = 19
-5y = 19 + 6 $\Rightarrow$ -5y = 25
y = -5
∴ x = -2 and y = -5
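As a quick check (my addition, not part of the original solution): substituting x = -2 and y = -5 back into the equations gives 3(-2) – 5(-5) – 19 = -6 + 25 – 19 = 0 and -7(-2) + 3(-5) + 1 = 14 – 15 + 1 = 0, so both are satisfied.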
Question 2: 4x – 3y = 8, 6x – y = (29/3)
Solution:
The given equations are.
4x – 3y = 8 …..(1)
6x – y = (29/3) …..(2)
Multiplying (1) by 1 and (2) by 3, we get,
4x – 3y = 8 …..(3)
18x – 3y = 29 …..(4)
Subtracting (3) from (4), we get
14x = 21 $\Rightarrow x = \frac{21}{14} = \frac{3}{2}$
Substituting $x = \frac{3}{2}$ in (1), we get
$4 \times \frac{3}{2} – 3y = 8 \Rightarrow 6 – 3y = 8 \Rightarrow y = \frac{-2}{3}$
$x = \frac{3}{2} \, and \, y = \frac{-2}{3}$
Question 3: $2x – \frac{3y}{4} = 3$, $5x = 2y + 7$
Solution:
The given equations are.
$2x – \frac{3y}{4} = 3$ …..(1)
$5x = 2y + 7$ …..(2)
Multiplying (1) by 2 and (2) by 3/4, we get,
$4x – \frac{3y}{2} = 6$ …..(3)
$\frac{15x}{4} – \frac{3y}{2} = \frac{21}{4}$ …..(4)
Subtracting (3) from (4), we get
$\frac{-1}{4} x = \frac{-3}{4}$
-x = -3 $\rightarrow$ x = 3
Substituting x = 3 in (1), we get
$2 \times 3 – \frac{3y}{4} = 3 \Rightarrow -\frac{3y}{4} = 3 – 6 = -3 \Rightarrow y = \frac{-3 \times 4}{-3} \, = \, 4$
∴ Solution is x = 3 and y =4
Question 4: 11x + 15y + 23 = 0, 7x – 2y – 20 = 0
Solution:
The given equations are.
11x + 15y + 23 = 0 …..(1)
7x – 2y – 20 = 0 …..(2)
Multiplying (1) by 2 and (2) by 15, we get,
22x + 30y = -46 …..(3)
105x – 30y = 300 …..(4)
Adding (3) and (4), we get
127x = 254 $\Rightarrow x = \frac{254}{127} = 2$
Substituting x = 2 in (1), we get
(11)(2) + 15y = -23
15y = -23 – 22 $\Rightarrow$ 15y = -45
y = -3
∴ Solution is x = 2 and y = -3
Question 5: 2x – 5y + 8 = 0, x – 4y + 7 = 0
Solution:
The given equations are.
2x – 5y + 8 = 0 …..(1)
x – 4y + 7 = 0 …..(2)
Multiplying (1) by 4 and (2) by 5, we get,
8x – 20y = -32 …..(3)
5x – 20y = -35 …..(4)
Subtracting (3) from (4), we get
-3x = -3 $\Rightarrow$ x = 1
Substituting x = 1 in (1), we get
(2)(1) – 5y = -8
-5y = -8 – 2 $\Rightarrow$ -5y = -10
y = 2
∴ Solution is x = 1 and y = 2
Question 6: 7 – 4x = 3y, 2x + 3y = -1
Solution:
The given equations are.
7 – 4x = 3y $\Rightarrow$ 4x + 3y = 7 …..(1)
2x + 3y = -1 …..(2)
Subtracting (2) from (1), we get
2x = 8 $\Rightarrow$ x = 4
Substituting x = 4 in (1), we get
(4)(4) + 3y = 7
3y =7 – 16 $\Rightarrow$ 3y = -9
y = -3
∴ Solution is x = 4 and y = -3
Question 7: 2x + 5y = (8/3), 3x – 2y = (5/6)
Solution:
The given equations are.
2x + 5y = (8/3) …..(1)
3x – 2y = (5/6) …..(2)
Multiplying (1) by 2 and (2) by 5, we get,
4x + 10y = (16/3) …..(3)
15x – 10y = (25/6) …..(4)
Adding (3) and (4), we get
19x = (57/6) $\Rightarrow x = \frac{57}{6 \times 19} = \frac{1}{2}$
Substituting x = ½ in (3), we get
(4)(1/2)+ 10y = (16/3)
10y = (16/3) – 2 $\Rightarrow 10y = (10/3)$
$y = \frac{10}{3 \times 10} = \frac{1}{3}$
∴ Solution is x = (½) and y = (⅓)
Question 8: $\frac{x}{3} + \frac{y}{4} = 11$, $\frac{5x}{6} – \frac{y}{3} = -7$
Solution:
The given equations are:
$\frac{x}{3} + \frac{y}{4}$ = 11
$\frac{5x}{6} – \frac{y}{3}$ = -7
$\frac{x}{3} + \frac{y}{4}$ = 11 (by taking LCM)
$\frac{4x + 3y}{12}$ = 11
4x + 3y = 132 ……..(1)
$\frac{5x}{6} – \frac{y}{3}$ = -7 (by taking LCM)
$\frac{5x – 2y}{6} = -7$
5x – 2y = -42 ……..(2)
Multiplying (1) by 2 and (2) by 3, we get,
8x + 6y = 264 …..(3)
15x – 6y = -126 …..(4)
Adding (3) and (4), we get
23x = 138 $\Rightarrow$ x = 6
Substituting x = 6 in (1), we get
(4)(6) + 3y = 132
3y = 132 – 24 $\Rightarrow$ 3y = 108
y = 36
∴ Solution is x = 6 and y = 36
Question 9: 7(y + 3) – 2(x + 2) = 14, 4(y – 2) + 3(x – 3) = 2
Solution:
The given equations are.
7(y + 3) – 2(x + 2) = 14
4(y – 2) + 3(x – 3) = 2
7(y + 3) – 2( x + 2) = 14
$\Rightarrow$ 7y + 21 – 2x – 4 = 14
$\Rightarrow$ 7y – 2x = 14 + 4 -21
$\Rightarrow$ – 2x + 7y = -3 ……..(1)
4(y – 2) + 3(x – 3) = 2
$\Rightarrow$ 4y – 8 + 3x – 9 = 2
$\Rightarrow$ 4y+ 3x = 2 + 8 + 9
$\Rightarrow$ 3x + 4y = 19 ……..(2)
Multiplying (1) by 4 and (2) by 7, we get,
-8x + 28y = -12 …..(3)
21x + 28y = 133 …..(4)
Subtracting (3) from (4), we get
29x = 145 $\Rightarrow$ x = 5
Substituting x = 5 in (1), we get
(-2)(5) + 7y = -3
7y = -3 + 10 $\Rightarrow$ 7y = 7
y = 1
∴ Solution is x = 5 and y = 1
Question 10: 6x + 5y = 7x + 3y + 1 = 2(x + 6y – 1)
Solution:
The given equations are.
6x + 5y = 7x + 3y + 1 = 2(x + 6y – 1)
Therefore, we have,
6x + 5y = 2(x + 6y -1)
$\Rightarrow$ 6x + 5y = 2x + 12y -2
$\Rightarrow$ 6x – 2x + 5y – 12y = -2
$\Rightarrow$ 4x – 7y = -2 ……..(1)
7x + 3y + 1= 2(x + 6y -1)
$\Rightarrow$ 7x + 3y + 1 = 2x + 12y -2
$\Rightarrow$ 7x – 2x + 3y – 12y = -2 -1
$\Rightarrow$ 5x -9y = -3 ……..(2)
Multiplying (1) by 9 and (2) by 7, we get,
36x – 63y = -18 …..(3)
35x – 63y = -21 …..(4)
Subtracting (4) from (3), we get
x = 3
Substituting x = 3 in (1), we get
(4)(3) – 7y = -2
-7y = -2 – 12 $\Rightarrow$ -7y = -14
y = 2
∴ Solution is x = 3 and y = 2
Question 11: $\frac{x + y – 8}{2} = \frac{x + 2y – 14}{3} = \frac{3x + y – 12}{11}$
Solution:
The given equations are.
$\frac{x + y – 8}{2} = \frac{x + 2y – 14}{3} = \frac{3x + y – 12}{11}$
Therefore, we have,
$\frac{x + y – 8}{2} = \frac{3x + y – 12}{11}$
By cross multiplication, we get
11x + 11y – 88 = 6x + 2y – 24
11x – 6x + 11y – 2y = -24 + 88
5x + 9y = 64 ……..(1)
$\frac{x + 2y – 14}{3} = \frac{3x + y – 12}{11}$
11x + 22y – 154 = 9x + 3y – 36
11x – 9x + 22y – 3y = -36 + 154
2x + 19y = 118 ……..(2)
Multiplying (1) by 19 and (2) by 9, we get,
95x + 171y = 1216 …..(3)
18x + 171y = 1062 …..(4)
Subtracting (4) from (3), we get
77x = 154 $\Rightarrow$ x = 2
Substituting x = 2 in (1), we get
(5)(2) + 9y = 64
$\Rightarrow$ 9y = 54
y = 6
∴ Solution is x = 2 and y = 6
Question 12: 0.8x + 0.3y = 3.8, 0.4x – 0.5y = 0.6
Solution:
The given equations are.
0.8x + 0.3y = 3.8 ……..(1)
0.4x – 0.5y = 0.6 ……..(2)
Multiplying each one of the equation by 10, we get
8x + 3y = 38 ……..(3)
4x – 5y = 6 ……..(4)
Multiplying (3) by 5 and (4) by 3, we get,
40x + 15y = 190 …..(5)
12x – 15y = 18 …..(6)
Adding (5) and (6), we get
52x = 208 $\Rightarrow x = \frac{208}{52} = 4$
Substituting x = 4 in (3), we get
(8)(4) + 3y = 38
3y = 38 – 32 $\Rightarrow$ 3y = 6
y = 2
∴ Solution is x = 4 and y = 2
Question 13: 0.05x + 0.2y = 0.07, 0.3x – 0.1y = 0.03
Solution:
The given equations are.
0.05x + 0.2y = 0.07 ……..(1)
0.3x – 0.1y = 0.03 ……..(2)
Multiplying each one of the equation by 100, we get
5x + 20y = 7 ……..(3)
30x – 10y = 3 ……..(4)
Multiplying (3) by 10 and (4) by 20, we get,
50x + 200y = 70 …..(5)
600x – 200y = 60 …..(6)
Adding (5) and (6), we get
650x = 130 $\Rightarrow x = \frac{130}{650} = \frac{1}{5} = 0.2$
Substituting x = 0.2 in (3), we get
(5)(0.2) + 20y = 7
1 + 20y = 7 $\Rightarrow$ 20y = 6
y = 0.3
∴ Solution is x = 0.2 and y = 0.3
Question 14: mx – ny = m² + n², x + y = 2m

Solution:

mx – ny = m² + n² ……..(1)

x + y = 2m ……..(2)

Multiplying (1) by 1 and (2) by n,

mx – ny = m² + n² ……..(3)

nx + ny = 2mn ……..(4)

Adding (3) and (4), we get

mx + nx = m² + n² + 2mn

x(m + n) = (m + n)²

$x = \frac{(m + n)^{2}}{m + n} = m + n$

Putting x = m + n in (1), we get

m(m + n) – ny = m² + n²

m² + mn – ny = m² + n²

-ny = m² + n² – m² – mn

-ny = n² – mn

$-y = \frac{n(n – m)}{n}$

-y = (n – m)

y = (m – n)

∴ The solution is x = (m + n) and y = (m – n)
Question 15: $\frac{bx}{a} – \frac{ay}{b} + a + b = 0$, $bx – ay + 2ab = 0$
Solution:
$\frac{bx}{a} – \frac{ay}{b} + a + b = 0$
By taking LCM, we get
$\frac{b^{2}x – a^{2}y + a^{2}b + b^{2}a}{ab} = 0$
b²x – a²y = -a²b – ab² ……..(1)

bx – ay = -2ab ……..(2)

Multiplying (1) by 1 and (2) by ‘a’,

b²x – a²y = -a²b – ab² ……..(3)

abx – a²y = -2a²b ……..(4)

Subtracting (3) from (4),

(ab – b²)x = -2a²b + a²b + ab²

b(a – b)x = -a²b + ab² = -ab(a – b)

$x = \frac{-ab(a – b)}{b(a – b)}$

x = -a

Putting x = -a in (1), we get

b²(-a) – a²y = -a²b – ab²

-ab² – a²y = -a²b – ab²

-a²y = -a²b – ab² + ab²

-a²y = -a²b $\Rightarrow y = \frac{-a^{2}b}{-a^{2}} = b$
∴ Solution is x = -a, y = b.
Question 16: $\frac{x}{a} + \frac{y}{b} = 2$, $ax – by = a^{2} – b^{2}$
Solution:
$\frac{x}{a} + \frac{y}{b} = 2$
By taking LCM, we get
$\frac{bx + ay}{ab} = 2$
bx + ay = 2ab ……..(1)

ax – by = (a² – b²) ……..(2)

Multiplying (1) by ‘b’ and (2) by ‘a’,

b²x + aby = 2ab² ……..(3)

a²x – aby = a(a² – b²) ……..(4)

Adding (3) and (4), we get

b²x + a²x = 2ab² + a(a² – b²)

x(b² + a²) = 2ab² + a³ – ab²

x(b² + a²) = a(b² + a²)

$x = \frac{a(b^{2} + a^{2})}{(b^{2} + a^{2})} = a$

x = a
x = a
Putting x = a, in (1), we get
(b)(a) + ay = 2ab
ay = 2ab – ab $\Rightarrow$ ay = ab or y = b
∴ Solution is x = a, y = b.
Question 17: $\frac{bx}{a} + \frac{ay}{b} = a^{2} + b^{2}$, x + y = 2ab

Solution:

$\frac{bx}{a} + \frac{ay}{b} = a^{2} + b^{2}$

By taking LCM, we get

$\frac{b^{2}x + a^{2}y}{ab} = a^{2} + b^{2}$

b²x + a²y = ab(a² + b²) ……..(1)
x + y = 2ab ……..(2)
Multiplying (1) by 1 and (2) by ‘a²’,

b²x + a²y = a³b + ab³ ……..(3)

a²x + a²y = 2a³b ……..(4)

Subtracting (4) from (3),

b²x – a²x = a³b + ab³ – 2a³b

x(b² – a²) = ab³ – a³b

x(b² – a²) = ab(b² – a²)

$x = \frac{ab(b^{2} – a^{2})}{b^{2} – a^{2}} = ab$

x = ab

Putting x = ab in (3), we get

b²(ab) + a²y = a³b + ab³

ab³ + a²y = a³b + ab³

a²y = a³b + ab³ – ab³

a²y = a³b $\Rightarrow y = \frac{a^{3}b}{a^{2}} = ab$
∴ Solution is x = ab, y = ab.
https://www.math4refugees.de/html/en/1.7.5/xcontent2.html | #### Chapter 7 Differential Calculus
Section 7.5 Applications
# 7.5.4 Optimisation Problems
In many applications in engineering and business, solutions to problems can be found which are not unique. They often depend on variable conditions. To find an ideal solution, additional properties (constraints) are defined that are to be satisfied by the solution. This very often results in so-called optimisation problems, in which one solution has to be selected from a family of solutions such that it best satisfies a previously specified property.
As an example, we consider the problem of constructing a cylindrical can. This can must satisfy the additional condition of having a capacity (volume) $V$ of one litre (a.k.a. one cubic decimetre, $1 \mathrm{dm}{}^{3}$). Thus, if $V$ is specified in $\mathrm{dm}{}^{3}$ and $r$ is the radius and $h$ the height of the can, each measured in decimetre ($\mathrm{dm}$), then the volume is $V=\pi {r}^{2}·h=1$. The can with the least surface area $O=2·\pi {r}^{2}+2\pi rh$ is required in order to save material. Here, the surface area $O$, measured in square decimetres ($\mathrm{dm}{}^{2}$), is a function of the radius $r$ and the height $h$ of the can.
In mathematical terms, our question results in the problem of finding a minimum of the surface function $O$, where the minimum has to be found for values of $r$ and $h$ that also satisfy the additional condition for the volume: $V=\pi {r}^{2}·h=1$. In the context of finding extrema, such an additional condition is also called a constraint.
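For concreteness, the standard calculation runs as follows (this step is my addition; the text itself postpones the solution). Solving the constraint for $h$ gives $h=\frac{1}{\pi r^2}$, so the surface area becomes a function of $r$ alone: $O(r)=2\pi r^{2}+\frac{2}{r}$. Setting $O'(r)=4\pi r-\frac{2}{r^{2}}=0$ yields $r^{3}=\frac{1}{2\pi}$, i.e. $r=\left(\frac{1}{2\pi}\right)^{1/3}\approx 0.542\ \mathrm{dm}$, and then $h=\frac{1}{\pi r^{2}}=2r$: the optimal can is exactly as tall as it is wide.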
##### Optimisation Problem 7.5.52
In an optimisation problem, we search for an extremum ${x}_{\text{ext}}$ of a function $f$ satisfying a given equation $g\left({x}_{\text{ext}}\right)=b$.
If we search for a minimum point, this problem is called a minimisation problem. If we search for a maximum point, this problem is called a maximisation problem.
The function $f$ is called the target function, and the equation $g\left(x\right)=b$ is called the constraint of the optimisation problem.
https://intellij-support.jetbrains.com/hc/en-us/community/posts/206266799-Defining-additional-JavaDoc-tags?page=1 | I'm using IDEA 4.5. In previous versions I was able to define additional JavaDoc tags, e.g. for EJBGen. However, in 4.5 I don't know where to add these tags. Perhaps it's Friday and I'm braindead, but I can't find this under any of the settings. My config\options\editor.codeinsight.xml file contains an entry: | 2020-02-22 08:21:02 | {"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9522664546966553, "perplexity": 1954.2387819374096}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-10/segments/1581875145654.0/warc/CC-MAIN-20200222054424-20200222084424-00057.warc.gz"} |
http://cognitive-liberty.online/vocabulary/independence/ | # Independence
Independence is a condition of a nation, country, or state in which its residents and population, or some portion thereof, exercise self-government, and usually sovereignty, over the territory. The opposite of independence is a dependent territory. Whether attainment of independence is different from revolution has long been contested, and has often been debated over the question of violence as a legitimate means to achieving sovereignty.
https://minkowski-functionals.org/2d-functionals/ | # 2D Functionals
For sufficiently smooth bodies $K$, the Minkowski Functionals can be intuitively defined via (weighted) integrals over the volume or boundary of the body $K$.
The scalar functionals can be interpreted as area, perimeter, or the Euler characteristic, which is a topological constant. The vectors are closely related to the centers of mass of either solid or hollow bodies. Accordingly, the second-rank tensors correspond to the tensors of inertia, or they can be interpreted as the moment tensors of the distribution of the normals on the boundary.
### Minkowski Functionals
Area
$W_0 = \quad \int\limits_K \mathrm d A \propto V_2$
Perimeter
$W_1 = \frac 12 \int\limits_{\partial K} \mathrm d l \propto V_1$
Euler characteristic
$W_2 = \frac 12 \int\limits_{\partial K} \kappa\, \mathrm d l \propto V_0$
with
$\kappa$ = curvature
## Cartesian representation (Minkowski Tensors)
Using the position vector $\textbf r$ and the normal vector $\textbf n$ on the boundary, the Minkowski Vectors can be defined in the Cartesian representation.
The second-rank Minkowski tensors are defined using the symmetric tensor product $(\textbf a\otimes \textbf a)_{ij} = a_i a_j$.
### Minkowski Vectors
$W_0^{1,0} = \quad \int\limits_K \textbf r \, \mathrm d A\propto \Phi_2^{1,0}$
$W_1^{1,0} = \frac 12 \int\limits_{\partial K} \textbf r \, \mathrm d l\propto \Phi_1^{1,0}$
$W_2^{1,0} = \frac 12 \int\limits_{\partial K} \kappa \, \textbf r \, \mathrm d l\propto \Phi_0^{1,0}$
### Minkowski Tensors
$W_0^{2,0} = \quad \int\limits_K \textbf r \otimes \textbf r \, \mathrm d A\propto \Phi_2^{2,0}$
$W_1^{2,0} = \frac 12 \int\limits_{\partial K} \textbf r \otimes \textbf r \, \mathrm d l\propto \Phi_1^{2,0}$
$W_2^{2,0} = \frac 12 \int\limits_{\partial K} \kappa \, \textbf r \otimes \textbf r \, \mathrm d l\propto \Phi_0^{2,0}$
$W_1^{0,2} = \frac 12 \int\limits_{\partial K} \textbf n \otimes \textbf n \, \mathrm d l\propto \Phi_1^{0,2}$
$W_2^{0,2} = \frac 12 \int\limits_{\partial K} \kappa \, \textbf n \otimes \textbf n \, \mathrm d l\propto \Phi_0^{0,2}$
## Irreducible representation (Circular Minkowski Tensors)
under construction
Consider a polygon with edges L:
$\vec L_i = L_i \cdot \vec n_i$
Density of normals:
$\rho(\varphi) = \sum_i L_i \, \delta(\varphi-\varphi_i)$
Fourier analysis:
$\tilde\rho(l) = \int \textnormal d \varphi \, \textnormal{exp}(i l \varphi) \, \rho(\varphi) = \sum L_i \textnormal{exp}(i l \varphi_i)$
$\tilde\rho(0) = L$ (the circumference of the polygon, proportional to $W_1$)
$\tilde\rho(1) = 0$ (for a closed polygon)
The shape indexes $q_l$ are defined as
$q_l = \frac{|\tilde\rho(l)|}{\tilde\rho(0)}$.
$q_0 = 1$ by definition; the normalisation $\tilde\rho(0)$ itself is the circumference of the object.

$q_1 = 0$ for closed polygons, since $\tilde\rho(1) = 0$.
$q_2$ is the polar component of the normal density, it is related to $\beta_1^{0,2} - 1$
$q_3$ can detect anisotropy in a three-fold symmetric system
$q_4$ is the quadrupole component (suitable for detecting rectangles)
$q_6$ : …
### Morphometric distance
A morphometric distance of a polygon $K$ to a reference structure $R$ can be quantified by considering the pseudo distance function
$d(K) = \sqrt{\sum\limits_l [q_l(K) - q_l(R)]^2}$.
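A minimal computational sketch of these shape indexes (my own illustration, not from the page): given a closed polygon as a counter-clockwise vertex list, accumulate $\tilde\rho(l) = \sum_i L_i\, e^{i l \varphi_i}$ over the edge normals and normalise by the circumference.

// Sketch: circular Minkowski shape indexes q_l of a closed 2D polygon.
// v is an n-by-2 array of vertices in counter-clockwise order; each edge's
// outward normal carries a weight equal to the edge length L_i.
static double[] ShapeIndexes(double[,] v, int maxL)
{
    int n = v.GetLength(0);
    var q = new double[maxL + 1];
    for (int l = 0; l <= maxL; l++)
    {
        double re = 0, im = 0, total = 0;
        for (int i = 0; i < n; i++)
        {
            double ex = v[(i + 1) % n, 0] - v[i, 0]; // edge vector
            double ey = v[(i + 1) % n, 1] - v[i, 1];
            double len = Math.Sqrt(ex * ex + ey * ey);
            double phi = Math.Atan2(-ex, ey);        // angle of outward normal (ey, -ex)
            re += len * Math.Cos(l * phi);           // real part of sum L_i exp(i l phi_i)
            im += len * Math.Sin(l * phi);
            total += len;                            // rho~(0) = circumference
        }
        q[l] = Math.Sqrt(re * re + im * im) / total; // |rho~(l)| / rho~(0)
    }
    return q;
}

For a unit square this gives q_0 = 1, q_1 = 0, q_2 = 0 and q_4 = 1, matching the interpretation of q_4 as the quadrupole component.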
http://crypto.stackexchange.com/tags/lattice-crypto/new | # Tag Info
I'm also afraid you couldn't understand this as D.W., but let us start. I sometimes cannot understand your questions. Please restate them, if possible.

The definition of the Ajtai hash functions: Let $n$, $m$, and $q$ be positive integers. Let $R = \mathbb{Z}_q$ be the quotient ring of integers modulo $q$. Let us define a function, which maps a vector in ...
https://thatsmaths.com/tag/ramanujan/ | ## Posts Tagged 'Ramanujan'
### Ramanujan’s Astonishing Knowledge of 1729
Question: What is the connection between Ramanujan’s number 1729 and Fermat’s Last Theorem? For the answer, read on.
The story of how Srinivasa Ramanujan responded to G. H. Hardy’s comment on the number of a taxi is familiar to all mathematicians. With the recent appearance of the film The Man who Knew Infinity, this curious incident is now more widely known.
Result of a Google image search for “K3 Surface”.
Visiting Ramanujan in hospital, Hardy remarked that the number of the taxi he had taken was 1729, which he thought to be rather dull. Ramanujan replied “No, it is a very interesting number; it is the smallest number expressible as the sum of two cubes in two different ways.”
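Spelling the claim out (my addition): $1729 = 1^3 + 12^3 = 9^3 + 10^3$, and no smaller positive integer can be written as a sum of two positive cubes in two different ways.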
### Waring’s Problem & Lagrange’s Four-Square Theorem
$\displaystyle \mathrm{num}\ = \square+\square+\square+\square$
Introduction
We are all familiar with the problem of splitting numbers into products of primes. This process is called factorisation. The problem of expressing numbers as sums of smaller numbers has also been studied in great depth. We call such a decomposition a partition. The Indian mathematician Ramanujan proved numerous ingenious and beautiful results in partition theory.
More generally, additive number theory is concerned with the properties and behaviour of integers under addition. In particular, it considers the expression of numbers as sums of components of a particular form, such as powers. Waring’s Problem comes under this heading.
### Ramanujan’s Lost Notebook
In the Irish Times column this week (TM010), we tell how a collection of papers of Srinivasa Ramanujan turned up in the Wren Library in Cambridge and set the mathematical world ablaze.
https://studyadda.com/sample-papers/jee-main-sample-paper-5_q65/90/247127 | • # question_answer If letup rectum of the ellipse x2 tan2 $\alpha$ + y2 sec2 $\alpha$ = 1 is 1/2, then $\alpha$ $(0<\alpha <\pi /3)$ is equal to A) $\pi /6$ B) $\pi /12$ C) $\pi /4$ D) $3\pi /4$
Solution:

Idea: for an ellipse $\frac{{{x}^{2}}}{{{a}^{2}}}+\frac{{{y}^{2}}}{{{b}^{2}}}=1,$ where ${{b}^{2}}={{a}^{2}}(1-{{e}^{2}}),$ the length of the latus rectum is $\frac{2{{b}^{2}}}{a}$.

Here, the ellipse is $x^2\tan^2\alpha + y^2\sec^2\alpha = 1$, i.e. $\frac{x^2}{\cot^2\alpha}+\frac{y^2}{\cos^2\alpha}=1$, so $a = \cot\alpha$ and $b = \cos\alpha$.

Latus rectum $=\frac{2{{\cos }^{2}}\alpha }{\cot \alpha }=2\sin\alpha\cos\alpha=\sin 2\alpha=\frac{1}{2}$

So $\sin 2\alpha =\frac{1}{2}$, giving $2\alpha =\frac{\pi }{6}$ or $\frac{5\pi }{6}$, i.e. $\alpha = \frac{\pi}{12}$ or $\frac{5\pi}{12}$. Since $0<\alpha <\pi /3$, the answer is $\alpha = \frac{\pi}{12}$, option B.

TEST Edge: questions based on the position of a point relative to an ellipse and on the directrices of an ellipse are asked. To solve these types of questions, students are advised to learn the equations of the different parts of an ellipse.
http://pressurevesseltech.asmedigitalcollection.asme.org/article.aspx?articleid=1473533 | 0
Research Papers: Materials and Fabrication
Optimization of a Weld Overlay on a Plate Structure
Author and Article Information
John Goldak
Carleton University, Ottawa, ON K1S 5B6, [email protected]
Carleton University, Ottawa, ON K1S 5B6, Canada
Jianguo Zhou, Stanislav Tchernov, Dan Downey
Goldak Technologies Incorporated, Ottawa, ON K1V 7C2, Canada
J. Pressure Vessel Technol 132(1), 011402 (Dec 09, 2009) (9 pages) doi:10.1115/1.4000511 History: Received January 29, 2009; Revised September 22, 2009; Published December 09, 2009; Online December 09, 2009
Abstract
An overlay weld repair procedure on a $1066.8\times 1066.8$ mm² square plate 25.4 mm thick was simulated to compute the 3D transient temperature, microstructure, strain, stress, and displacement of the overlay weld repair procedure. The application for the overlay was the repair of cavitation erosion damage on a large Francis turbine used in a hydroelectric project. The overlay weld consisted of a $4\times 6$ pattern of $100\times 100$ mm² squares. Each square was covered by 15 weld passes. Each weld pass was 100 mm long. The total length of weld in the 24 squares was 36 m. The welds in each square were oriented either front-to-back or left-to-right. The welding process was shielded metal arc. The analysis shows that alternating the welding direction in each square produces the least distortion. A delay time of 950 s between the end of one weld pass and the start of the next weld pass was imposed to meet the requirement of a maximum interpass temperature of $50°C$.
Figures
Figure 1
The geometry of the overlay weld repair simulation described in Ref. 1 is shown
Figure 2
This cross section of the block for one overlay 100 mm square shows the FEM mesh for the filler metal
Figure 3
The FEM mesh of several blocks of overlay 100 mm squares is shown embedded in the structure being repaired
Figure 4
The FEM mesh of the 6×4 checkerboard pattern of 100 mm overlay squares is shown together with the stiffening gussets of the structure being repaired
Figure 5
The temperature in K at a virtual thermocouple located at the centroid of the bottom of the plate is plotted versus time. Each peak is associated with one weld pass in one block.
Figure 6
The decay of the maximum temperature in the structure between two weld passes is shown as a function of time for a maximum interpass temperature of 50°C. The delay time for any higher maximum interpass temperature can be read from the graph.
Figure 7
The phase fraction of martensite is shown for the front-to-back overlay
Figure 8
The phase fraction of ferrite is shown for the front-to-back overlay
Figure 9
The hardness is shown for the front-to-back overlay. Blue denotes zero hardness because the algorithm only computes hardness where the material point transformed to austenite.
Figure 10
The εzz strain as a function of time for strain gauge at the center of the bottom surface
Figure 11
The εzz strain as a function of time for strain gauge at the center of the bottom surface for longer time
Figure 12
Left-to-right distortion magnified ten times at 1,000,000 s
Figure 13
Front-to-back distortion magnified ten times at 1,000,000 s
Figure 14
Checker board distortion magnified ten times at 1,000,000 s
Figure 15
Difference of left-to-right distortion to checker board distortion magnified ten times at 1,000,000 s
Figure 16
Difference of front-to-back distortion to checker board distortion magnified ten times at 1,000,000 s
Figure 17
Difference of left-to-right distortion to front-to-back distortion magnified ten times at 1,000,000 s
Figure 18
The temperature at the point at which a thermocouple is located is plotted versus time. This figure is from Ref. 12.
Figure 19
The εzz strain as a function of time for strain gauge 1 on the top surface of the 100×100 mm² block is shown. This figure is from Ref. 12.
http://piping-designer.com/index.php/properties/fluid-mechanics/2433-cubic-inch-displacement | # Total Displacement
Written by Jerry Ratzlaff on . Posted in Fluid Dynamics
In engine design, the total displacement is the volume displaced by the engine's cylinders. It is measured in cubic inches, cubic centimetres, or liters.
### Total Displacement Formula
$$\large{ DISP = NOC \;*\; \frac{\large{\pi}}{4} \;*\; BORE^2 \;*\; STROKE }$$
Where:
$$\large{ DISP }$$ = total displacement
$$\large{ BORE }$$ = cylinder bore size
$$\large{ NOC }$$ = number of cylinders
$$\large{ STROKE }$$ = piston stroke length
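A quick numeric sanity check (my example; the bore and stroke figures are the classic Chevrolet small-block's):

// DISP = NOC * (PI / 4) * BORE^2 * STROKE, here in cubic inches.
static double Displacement(int noc, double bore, double stroke)
{
    return noc * (Math.PI / 4.0) * bore * bore * stroke;
}

// Displacement(8, 4.00, 3.48) ≈ 349.85 in³, the familiar "350".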