https://www.biostars.org/p/444052/#444112 | # Is it appropriate to apply quantile normalization on RNA-Seq before computing the Pearson correlation?
ATCG ▴ 370, 2.1 years ago
Is it appropriate to apply quantile normalization to RNA-Seq data before computing the Pearson correlation? Specifically, RNA-Seq data has a heavy-tailed distribution, and Pearson's correlation (the cor function in R) assumes a normal distribution. I tried this in R using the normalize.quantiles function discussed in the lecture, and my results make a lot of sense, but I want to make sure that this is an appropriate transformation before accepting these results. Thank you for your help!
Tags: RNA-Seq, Quantile Normalization, Correlation
Of course you will get normal-looking distributions out of normalize.quantiles, but I wouldn't count on that alone. The main problem is the low counts: random effects get amplified. I would try Spearman's correlation instead, but make sure you normalize for library size first.
Yes. My pipeline is raw counts --> DESeq2 --> normalize.quantiles. Or do you suggest normalizing the raw counts using a different method? Is there an R function to do the library-size normalization?
DESeq2 normalization is great. Maybe try Spearman's on the normalized count table, and make sure the distribution of r values looks good. You can use normalize.quantiles, but just make sure you're not getting artifacts from 0 counts.
Yes. I am removing rows with rowMeans < 10.
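Putting the thread's advice together, a minimal R sketch of the pipeline being discussed (assuming `dds` is an existing DESeqDataSet with size factors already estimated; `normalize.quantiles` is from the Bioconductor package preprocessCore):

```r
library(DESeq2)
library(preprocessCore)

norm_counts <- counts(dds, normalized = TRUE)              # library-size-normalized counts
norm_counts <- norm_counts[rowMeans(norm_counts) >= 10, ]  # drop low-count rows first
qn <- normalize.quantiles(as.matrix(norm_counts))          # optional quantile normalization

cor(qn, method = "spearman")  # rank-based sample-sample correlations, robust to heavy tails
```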
https://embed.planetcalc.com/1205/?thanks=1 | # Matrix Transpose
The matrix transpose is the matrix obtained by replacing each element $a_{ij}$ with $a_{ji}$ or, equivalently, by exchanging A's rows and columns.
The matrix transpose is often written as $A^T$.
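As a quick illustration in R, where the built-in `t()` computes the transpose:

```r
A <- matrix(1:6, nrow = 2)  # a 2x3 matrix
t(A)                        # its 3x2 transpose
t(A)[3, 1] == A[1, 3]       # TRUE: element (i, j) of t(A) is element (j, i) of A
```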
https://socratic.org/questions/how-do-you-solve-x-4-x-2-x-7-0 | # How do you solve (x+4)(x-2)(x-7)>0?
Jul 21, 2016
$x \in \left(- 4 , 2\right) \cup \left(7, \infty\right)$, i.e. $- 4 < x < 2$ or $x > 7$.
#### Explanation:
The answer was obtained by considering the scheme of signs of the factors for which the product > 0. The favorable sign patterns are
$\left(+ , + , +\right)$ and those with exactly two minus signs: $\left(+ , - , -\right), \left(- , + , -\right), \left(- , - , +\right)$.
Since the factors $x + 4$, $x - 2$, $x - 7$ change sign at $- 4$, $2$, $7$ in increasing order, only $\left(+ , - , -\right)$ (giving $- 4 < x < 2$) and $\left(+ , + , +\right)$ (giving $x > 7$) actually occur.
Jul 21, 2016
Solution set for $\left(x + 4\right) \left(x - 2\right) \left(x - 7\right) > 0$ is $- 4 < x < 2$ and $7 < x$.
#### Explanation:
We have $\left(x + 4\right) \left(x - 2\right) \left(x - 7\right) > 0$
The three zeros of the function $\left(x + 4\right) \left(x - 2\right) \left(x - 7\right)$ divide the real number line into four parts:
(1) $x < - 4$ - Here all three factors are negative, hence $f \left(x\right)$ is negative. Hence, this does not form part of the solution.
(2) $- 4 < x < 2$ - Here while the first factor $\left(x + 4\right)$ is positive, the other two factors are negative, hence $f \left(x\right)$ is positive. Hence, this forms part of the solution.
(3) $2 < x < 7$ - Here the first two factors $\left(x + 4\right)$ and $\left(x - 2\right)$ are positive while $\left(x - 7\right)$ is negative, hence $f \left(x\right)$ is negative. Hence, this does not form part of the solution.
(4) $7 < x$ - Here all three factors are positive, hence $f \left(x\right)$ is positive. Hence, this forms part of the solution.
Hence solution set for $\left(x + 4\right) \left(x - 2\right) \left(x - 7\right) > 0$ is $- 4 < x < 2$ and $7 < x$.
graph{(x+4)(x-2)(x-7) [-10, 10, -160, 160]}
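As a quick numerical check (a sketch in R; the test points are arbitrary picks, one from each of the four intervals):

```r
f <- function(x) (x + 4) * (x - 2) * (x - 7)
f(c(-5, 0, 5, 8))
# -84  56 -54  72   =>  f(x) > 0 exactly on (-4, 2) and (7, Inf)
```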
https://frommyslipbox.blogspot.com/2021/04/how-to-eliminate-confounding-in.html | ### How to eliminate confounding in multivariate regression
Great grey owl (Creative Commons).
### Preamble
For my previous post on causal diagrams, I made up a fake dataset relating the incidence of COVID-19 to the wearing of protective goggles for hypothetical individuals. The dataset included several related covariates, such as whether the person in question was worried about COVID-19.
The goal of the exercise was to (hypothetically!) determine whether protective glasses was an effective intervention for COVID-19, and to see how accidental associations due to other variables could mess up the analysis.
I faked the data so that COVID-19 incidence was independent of whether the person wore protective goggles. But then I demonstrated, using multivariate regressions, that it is easy to incorrectly conclude that protective glasses are significantly effective for reducing the risk of COVID-19. I also showed how a causal diagram relating the variables can be used to determine which variables to include and exclude from the analysis.
In this article, I'll explain how to recognize the patterns in causal diagrams that lead to statistical confounding, and show how to do a causal analysis yourself.
Here's the entire 'statistical confounding' series:
- Part 1: Statistical confounding: why it matters: on the many ways that confounding affects statistical analyses
- Part 2: Simpson's Paradox: extreme statistical confounding: understanding how statistical confounding can cause you to draw exactly the wrong conclusion
- Part 3: Linear regression is trickier than you think: a discussion of multivariate linear regression models
- Part 4: A gentle introduction to causal diagrams: a causal analysis of fake data relating COVID-19 incidence to wearing protective goggles
- Part 5: How to eliminate confounding in multivariate regression (this post): how to do a causal analysis to eliminate confounding in your regression analyses
- Part 6: A simple example of omitted variable bias: an example of statistical confounding that can't be fixed, using only 4 variables.
### Introduction
In A gentle introduction to causal diagrams, I introduced a fake dataset in which rows represented individuals, containing the following information:
- $C$: does the person test positive for COVID-19?
- $G$: does the person wear protective glasses in public?
- $W$: is the person worried about COVID-19?
- $S$: does the person avoid social contact?
- $V$: is the person vaccinated?
I then did some multivariate logistic regressions to answer the following question: does wearing protective goggles help reduce the likelihood of catching COVID-19?
In generating the dataset, I made the following assumptions:
- protective glasses have no direct effect on COVID-19 incidence;
- avoiding social contact has a significant negative effect on COVID-19 incidence;
- getting vaccinated has a very significant negative effect on COVID-19 incidence;
- being worried about COVID makes a person much more likely to get vaccinated, avoid social contact, and wear protective glasses;
- being vaccinated makes a person less likely to avoid social contact.
The causal diagram associated with these variables and assumptions is shown below. An arrow from one variable to another indicates that the value of the 'to' variable depends on the 'from' variable.
The exercise in the article was to determine which variables to include in a multivariate regression, in order to analyze whether protective glasses reduce the risk of catching COVID-19. The colored nodes are the ones that were ultimately included; only the $W$ (worried about COVID-19) variable was used as a covariate, in addition to the dependent and independent variables $G$ and $C$.
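To make this concrete, here is a minimal R sketch that generates a fake dataset with the same causal structure (the coefficient values are arbitrary illustrative choices of mine, not the ones used to build the original dataset):

```r
set.seed(42)
n <- 1e5
W <- rbinom(n, 1, 0.5)                           # worried about COVID-19
V <- rbinom(n, 1, plogis(-1 + 2 * W))            # worry makes vaccination more likely
S <- rbinom(n, 1, plogis(-1 + 2 * W - 1.5 * V))  # worry encourages distancing, vaccination discourages it
G <- rbinom(n, 1, plogis(-1 + 2 * W))            # glasses depend on worry only
C <- rbinom(n, 1, plogis(-1 - 1.5 * S - 3 * V))  # COVID-19 depends on S and V, but not on G
d <- data.frame(C, G, W, S, V)
```

A naive logistic regression of $C$ on $G$ alone in this data shows a spurious protective 'effect' of glasses; the backdoor paths discussed below are the reason.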
### Backdoor paths
In the diagram above, the causal relationship we want to assess (between $G$ and $C$) is represented by the gray dashed arrow. But there are a lot of other connections with intermediate variables, in the form of paths in the graph between $G$ and $C$, that can accidentally generate statistical associations between $G$ and $C$.
The first such path is shown below: it passes from $G$ to $W$ to $S$ to $C$. This is called a 'backdoor path' because arrow 1 points into $G$, rather than emanating from $G$. This path can be described in words as follows: if the person is worried about COVID-19, this makes her more likely to both wear protective glasses and socially distance. Since social distancing is an effective intervention against COVID-19, this sets up a negative correlation between wearing glasses and catching COVID-19; but the dataset was constructed so that protective glasses had no impact on COVID-19, so the effect is only due to correlation, not causation.
A second path is shown below: it passes from $G$ through $W$ to $V$ and then to $C$. In words: If a person is worried about COVID-19, he is more likely to both wear protective glasses and to get vaccinated. Since vaccination is an effective intervention against COVID-19, this again sets up a negative correlation between wearing glasses and catching COVID-19.
A third path is shown below: it passes from $G$ through $W$ to $V$, then to $S$ and finally $C$. In words: if a person is worried about COVID-19, she is more likely to get vaccinated, after which she may be less likely to socially distance. This is a problem in our analysis if we do not know the person's vaccination status, since the presence of a lot of people who do not socially distance, and yet do not catch COVID-19, will obscure the effectiveness of social distancing as an intervention. In the presence of enough vaccinated people, it might even appear that people who do not socially distance are *less* likely to get COVID-19 if we don't know people's vaccination status!
There is another type of backdoor path to consider, shown below. Backdoor path 4 passes from $G$ to $W$, through $S$ and $V$, to $C$. Backdoor path 4 will not cause confounding unless we make the mistake of conditioning on variable $S$. The variable $S$ is called a collider variable, because it has two arrows in the path pointing into it. We have to be careful not to condition on a collider variable, i.e., not to include it in the multivariable regression.
### Patterns of confounding
Each of the backdoor paths in any causal diagram can be broken down into a series of connections among three variables in the path. There are 3 relationships that can occur among these 3 variables: the 'fork' pattern, the 'pipe' pattern, and the 'collider' pattern.
Fork pattern
The image below shows the 'fork' pattern, which occurs in our example among the variables $G, W$, and $S$. The fork occurs when a single variable affects two 'child' variables; in this case, being worried makes a person both more likely to socially distance, and more likely to wear protective glasses.
If three variables are related by the fork pattern, then the two child variables will be marginally statistically dependent, but will be independent if we condition on the parent variable. Mathematically, the fork pattern says that:
$$p(G, W, S) = p(G|W)p(S|W)p(W).$$
Since $p(G,S)=\int p(G|W)p(S|W)p(W) \, dW$, it follows that $p(G,S)\ne p(G)\cdot p(S)$ in general. However, $p(G,S|W)=p(G|W)\cdot p(S|W)$: in this graph of 3 variables, $G$ and $S$ are conditionally independent given $W$.
In words, this says that if I know whether a person is worried about COVID-19, then knowing whether a person socially distances tells me nothing additional about whether they are likely to wear glasses.
Pipe pattern
The image below shows the 'pipe' pattern, which occurs in our example among the variables $W, S$, and $C$. The pipe occurs when a variable is causally 'in-between' two other variables. In this case, being worried causes a person to socially distance, which in turn reduces their chance of getting COVID-19.
If three variables are related by the pipe pattern, then the two outer variables will be marginally statistically dependent, but will be independent if we condition on the inner variable. Mathematically, $p(W,C)\ne p(W)\cdot p(C)$ in general, but $p(W,C|S)=p(W|S)p(C|S)$. The fork and pipe patterns are alike in this regard.
In words, this says that if I know whether a person is avoiding social contact, then knowing whether the person is worried about COVID-19 tells me nothing additional about whether they might have caught it.
Collider pattern
The collider pattern occurs when a single variable is dependent on two unrelated parent variables. There aren't any simple collider pattern examples in our example causal diagram -- for example, social distancing $S$ is dependent both on $V$ and $W$, but these two variables are also directly related to each other. So I've added an extra random variable in the diagram below: $N$, which is 1 if the person is nearsighted, and 0 otherwise. Clearly, being nearsighted is another reason why someone might wear glasses.
The collider pattern is different from the fork and pipe patterns. In the collider pattern, the two parents of the common child are marginally independent of each other. Mathematically, we have $p(N,W) = p(N)p(W)$ (it follows from the definition of the joint distribution, $p(N,W,G)=p(G|N,W)p(N)p(W)$), but $p(N,W|G)\ne p(N|G)\cdot p(W|G)$ in general. In other words, conditioning the regression on the 'collider variable' $G$ causes the parent variables $N,W$ to become associated. But the association is purely statistical; the two parent variables are still causally unrelated.
To see why this happens, imagine that you know nothing about whether a person wears glasses or not. Then knowing in addition that the person is nearsighted gives you no additional information about whether they are worried about COVID-19.
But suppose that you now know that the person is wearing glasses (i.e., you are conditioning on $G=1$). If you know in addition that the person is not nearsighted, then the odds are higher that they are wearing glasses because they are worried about COVID-19; and if you know that they are not worried about COVID-19, the odds increase that they are wearing glasses because they are nearsighted. So the parent variables become related. Collider bias is sometimes called 'explaining away'; knowing that a person is nearsighted 'explains away' their reason for wearing glasses.
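Here is a quick numerical illustration of this 'explaining away' effect (a sketch extending the simulation above, with the hypothetical nearsightedness variable $N$ added as a second, independent cause of wearing glasses):

```r
N  <- rbinom(n, 1, 0.3)                          # nearsighted, generated independently of W
G2 <- rbinom(n, 1, plogis(-2 + 2 * W + 2 * N))   # glasses now have two independent parents
cor(N, W)                    # approximately 0: the parents are marginally independent
cor(N[G2 == 1], W[G2 == 1])  # negative: conditioning on the collider G2 induces an association
```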
Putting it together
This tells you everything you need to know in order to construct an unconfounded multivariate regression analysis, in order to determine whether one variable has a causal impact on another. The game is to 'block all the backdoor paths', to prevent them from causing accidental correlations between the dependent and independent variables.
For example, consider 'backdoor path 1' at the beginning of the article. This path contains a fork pattern (the variable $W$, pointing to $G$ and $S$) and a pipe pattern (the variable $S$, which is pointed to by $W$, and which points to $C$). If we don't condition on $W$ or $S$, then these variables will set up associations between $G$ and $S$, and between $W$ and $C$; the unbroken line of associations sets up a relationship between $G$ and $C$ that is only a correlation, not causal.
In order to prevent this from happening, we need to condition on either $W$ or $S$. We must choose one of them; conditioning on either one of them will break that chain of association. This is called 'blocking the backdoor path'. But blocking one backdoor path isn't enough; we must block all of them.
Consider backdoor path 2 from $G$ to $C$; it contains a fork variable, $W$, and a pipe variable, $V$. Conditioning on either $V$ or $W$ will block backdoor path 2. Note that conditioning on $W$ will block both backdoor paths 1 and 2, but conditioning on $V$ or $S$ will leave one of the paths unblocked.
Now consider backdoor path 3. Backdoor path 3 contains a fork variable, $W$; a pipe variable, $V$; and another pipe variable, $S$. Conditioning on any of these will block this backdoor path, so again, $W$ will work for this path.
Finally, looking at backdoor path 4, we see that $S$ is a collider variable in this path. Looking at this path in isolation, $W$ and $V$ will be marginally independent of each other. But if we condition on the variable $S$, that will set up an association between $W$ and $V$, which will connect all the variables in backdoor path 4, and cause confounding.
The following shows how the association between $W$ and $V$ can occur, as a result of knowing the value of $S$. Suppose we know for sure that a person is not avoiding social contact (i.e., we have conditioned on $S$). Suppose we also know that this person is worried about COVID-19; then this makes it highly likely that the person is vaccinated, since they would otherwise be avoiding people. Conversely, if we know that a person is not avoiding social contact, and we also know that the person is not vaccinated, then it is highly likely that they just aren't worried about COVID-19.
The fact that $S$ is a collider in this path means that we have to avoid conditioning on $S$ (including it in the regression). Conditioning on it will open backdoor path 4, which would otherwise be blocked.
To summarize, there are 5 total backdoor paths in this diagram -- the four we have discussed, and one other that also contains the variable $S$ as a collider (see if you can find it). Conditioning on $W$ will block the first 3 backdoor paths, and will not accidentally unblock the two paths that contain $S$ as a collider variable. Therefore, a multivariate regression that contains only $W$ as a covariate, $G$ as the independent variable, and $C$ as the dependent variable, will correctly show that wearing glasses has no effect on COVID-19 incidence.
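With the simulated data from the sketch above, this is easy to verify. The coefficient on $G$ in the correctly specified model should come out near zero, while dropping $W$ produces a spurious negative coefficient:

```r
summary(glm(C ~ G + W, family = binomial, data = d))$coefficients  # G: near 0, not significant
summary(glm(C ~ G,     family = binomial, data = d))$coefficients  # G: spuriously negative
```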
https://en.wikipedia.org/wiki/Formula_for_primes | # Formula for primes
In number theory, a formula for primes is a formula generating the prime numbers, exactly and without exception. No such formula which is efficiently computable is known. A number of constraints are known, showing what such a "formula" can and cannot be.
## Formula based on Wilson's theorem
A simple formula is
${\displaystyle f(n)=\left\lfloor {\frac {n!\bmod (n+1)}{n}}\right\rfloor (n-1)+2,}$ for positive integer ${\displaystyle n}$.
By Wilson's theorem, ${\displaystyle n+1}$ is prime if and only if ${\displaystyle n!\bmod (n+1)=n}$. Thus, when ${\displaystyle n+1}$ is prime, the first factor in the product becomes one, and the formula produces the prime number ${\displaystyle n+1}$. But when ${\displaystyle n+1}$ is not prime, the first factor becomes zero and the formula produces the prime number 2.[1] This formula is not an efficient way to generate prime numbers because evaluating ${\displaystyle n!\bmod (n+1)}$ requires about ${\displaystyle n-1}$ multiplications and reductions modulo ${\displaystyle n+1}$.
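The formula is easy to check for small $n$ (a sketch in R; the factorial is reduced modulo $n+1$ at every step so that intermediate values stay small):

```r
wilson <- function(n) {
  m <- n + 1
  f <- 1
  for (i in seq_len(n)) f <- (f * i) %% m  # n! mod (n+1), computed incrementally
  floor(f / n) * (n - 1) + 2
}
sapply(1:10, wilson)  # 2 3 2 5 2 7 2 2 2 11
```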
## Formula based on a system of Diophantine equations
Because the set of primes is a computably enumerable set, by Matiyasevich's theorem, it can be obtained from a system of Diophantine equations. Jones et al. (1976) found an explicit set of 14 Diophantine equations in 26 variables, such that a given number k + 2 is prime if and only if that system has a solution in natural numbers:[2]
${\displaystyle \alpha _{0}=wz+h+j-q=0}$
${\displaystyle \alpha _{1}=(gk+2g+k+1)(h+j)+h-z=0}$
${\displaystyle \alpha _{2}=16(k+1)^{3}(k+2)(n+1)^{2}+1-f^{2}=0}$
${\displaystyle \alpha _{3}=2n+p+q+z-e=0}$
${\displaystyle \alpha _{4}=e^{3}(e+2)(a+1)^{2}+1-o^{2}=0}$
${\displaystyle \alpha _{5}=(a^{2}-1)y^{2}+1-x^{2}=0}$
${\displaystyle \alpha _{6}=16r^{2}y^{4}(a^{2}-1)+1-u^{2}=0}$
${\displaystyle \alpha _{7}=n+\ell +v-y=0}$
${\displaystyle \alpha _{8}=(a^{2}-1)\ell ^{2}+1-m^{2}=0}$
${\displaystyle \alpha _{9}=ai+k+1-\ell -i=0}$
${\displaystyle \alpha _{10}=((a+u^{2}(u^{2}-a))^{2}-1)(n+4dy)^{2}+1-(x+cu)^{2}=0}$
${\displaystyle \alpha _{11}=p+\ell (a-n-1)+b(2an+2a-n^{2}-2n-2)-m=0}$
${\displaystyle \alpha _{12}=q+y(a-p-1)+s(2ap+2a-p^{2}-2p-2)-x=0}$
${\displaystyle \alpha _{13}=z+p\ell (a-p)+t(2ap-p^{2}-1)-pm=0}$
The 14 equations $\alpha_0, \ldots, \alpha_{13}$ can be used to produce a prime-generating polynomial inequality in 26 variables:
${\displaystyle (k+2)(1-\alpha _{0}^{2}-\alpha _{1}^{2}-\cdots -\alpha _{13}^{2})>0}$
i.e.:
{\displaystyle {\begin{aligned}&(k+2)(1-{}\\[6pt]&[wz+h+j-q]^{2}-{}\\[6pt]&[(gk+2g+k+1)(h+j)+h-z]^{2}-{}\\[6pt]&[16(k+1)^{3}(k+2)(n+1)^{2}+1-f^{2}]^{2}-{}\\[6pt]&[2n+p+q+z-e]^{2}-{}\\[6pt]&[e^{3}(e+2)(a+1)^{2}+1-o^{2}]^{2}-{}\\[6pt]&[(a^{2}-1)y^{2}+1-x^{2}]^{2}-{}\\[6pt]&[16r^{2}y^{4}(a^{2}-1)+1-u^{2}]^{2}-{}\\[6pt]&[n+\ell +v-y]^{2}-{}\\[6pt]&[(a^{2}-1)\ell ^{2}+1-m^{2}]^{2}-{}\\[6pt]&[ai+k+1-\ell -i]^{2}-{}\\[6pt]&[((a+u^{2}(u^{2}-a))^{2}-1)(n+4dy)^{2}+1-(x+cu)^{2}]^{2}-{}\\[6pt]&[p+\ell (a-n-1)+b(2an+2a-n^{2}-2n-2)-m]^{2}-{}\\[6pt]&[q+y(a-p-1)+s(2ap+2a-p^{2}-2p-2)-x]^{2}-{}\\[6pt]&[z+p\ell (a-p)+t(2ap-p^{2}-1)-pm]^{2})\\[6pt]&>0\end{aligned}}}
is a polynomial inequality in 26 variables, and the set of prime numbers is identical to the set of positive values taken on by the left-hand side as the variables a, b, …, z range over the nonnegative integers.
A general theorem of Matiyasevich says that if a set is defined by a system of Diophantine equations, it can also be defined by a system of Diophantine equations in only 9 variables.[3] Hence, there is a prime-generating polynomial as above with only 10 variables. However, its degree is large (in the order of 1045). On the other hand, there also exists such a set of equations of degree only 4, but in 58 variables.[4]
## Mills' formula
The first such formula known was established by W. H. Mills (1947), who proved that there exists a real number A such that, if
${\displaystyle d_{n}=A^{3^{n}}}$
then
${\displaystyle \left\lfloor d_{n}\right\rfloor =\left\lfloor A^{3^{n}}\right\rfloor }$
is a prime number for all positive integers n.[5] If the Riemann hypothesis is true, then the smallest such A has a value of around 1.3063778838630806904686144926... (sequence A051021 in the OEIS) and is known as Mills' constant. This value gives rise to the primes ${\displaystyle \left\lfloor d_{1}\right\rfloor =2}$, ${\displaystyle \left\lfloor d_{2}\right\rfloor =11}$, ${\displaystyle \left\lfloor d_{3}\right\rfloor =1361}$, ... (sequence A051254 in the OEIS). Very little is known about the constant A (not even whether it is rational). This formula has no practical value, because there is no known way of calculating the constant without finding primes in the first place.
## Wright's formula
Another prime-generating formula similar to Mills' comes from a theorem of E. M. Wright. He proved that there exists a real number α such that, if
${\displaystyle g_{0}=\alpha }$ and
${\displaystyle g_{n+1}=2^{g_{n}}}$ for ${\displaystyle n\geq 0}$,
then
${\displaystyle \left\lfloor g_{n}\right\rfloor =\left\lfloor 2^{\dots ^{2^{2^{\alpha }}}}\right\rfloor }$
is prime for all ${\displaystyle n\geq 1}$.[6] Wright gives the first seven decimal places of such a constant: ${\displaystyle \alpha =1.9287800}$. This value gives rise to the primes ${\displaystyle \left\lfloor g_{1}\right\rfloor =\left\lfloor 2^{\alpha }\right\rfloor =3}$, ${\displaystyle \left\lfloor g_{2}\right\rfloor =13}$, and ${\displaystyle \left\lfloor g_{3}\right\rfloor =16381}$. ${\displaystyle \left\lfloor g_{4}\right\rfloor }$ is even, and so is not prime. However, with ${\displaystyle \alpha =1.9287800+8.2843\cdot 10^{-4933}}$, ${\displaystyle \left\lfloor g_{1}\right\rfloor }$, ${\displaystyle \left\lfloor g_{2}\right\rfloor }$, and ${\displaystyle \left\lfloor g_{3}\right\rfloor }$ are unchanged, while ${\displaystyle \left\lfloor g_{4}\right\rfloor }$ is a prime with 4932 digits.[7] This sequence of primes cannot be extended beyond ${\displaystyle \left\lfloor g_{4}\right\rfloor }$ without knowing more digits of α. Like Mills' formula, and for the same reasons, Wright's formula cannot be used to find primes.
## Prime formulas and polynomial functions
It is known that no non-constant polynomial function P(n) with integer coefficients exists that evaluates to a prime number for all integers n. The proof is as follows: Suppose such a polynomial existed. Then P(1) would evaluate to a prime p, so ${\displaystyle P(1)\equiv 0{\pmod {p}}}$. But for any k, ${\displaystyle P(1+kp)\equiv 0{\pmod {p}}}$ also, so ${\displaystyle P(1+kp)}$ cannot also be prime (as it would be divisible by p) unless it were p itself, but the only way ${\displaystyle P(1+kp)=P(1)}$ for all k is if the polynomial function is constant.
The same reasoning shows an even stronger result: no non-constant polynomial function P(n) exists that evaluates to a prime number for almost all integers n.
Euler first noticed (in 1772) that the quadratic polynomial
${\displaystyle P(n)=n^{2}+n+41}$
is prime for the 40 integers n = 0, 1, 2, ..., 39. The primes for n = 0, 1, 2, ..., 39 are 41, 43, 47, 53, 61, 71, ..., 1601. The differences between the terms are 2, 4, 6, 8, 10... For n = 40, it produces a square number, 1681 = 41×41, the smallest composite value of this formula for n ≥ 0. If 41 divides n, it divides P(n) too. Furthermore, since P(n) can be written as n(n + 1) + 41, if 41 divides n + 1 instead, it also divides P(n). The phenomenon is related to the Ulam spiral, which is also implicitly quadratic, and the class number; this polynomial is related to the Heegner number ${\displaystyle 163=4\cdot 41-1}$, and there are analogous polynomials for ${\displaystyle p=2,3,5,11,{\text{ and }}17}$ (the lucky numbers of Euler), corresponding to other Heegner numbers.
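A quick verification of the first claim (a sketch in R with a naive trial-division primality test, adequate for numbers this small):

```r
P <- function(n) n^2 + n + 41
is_prime <- function(k) k > 1 && all(k %% 2:floor(sqrt(k)) != 0)
all(sapply(P(0:39), is_prime))  # TRUE
P(40)                           # 1681 = 41 * 41, composite
```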
Given a positive integer S, there may be infinitely many c such that the expression $n^{2}+n+c$ is always coprime to S. c may be negative, in which case there is a delay before primes are produced.
It is known, based on Dirichlet's theorem on arithmetic progressions, that linear polynomial functions ${\displaystyle L(n)=an+b}$ produce infinitely many primes as long as a and b are relatively prime (though no such function will assume prime values for all values of n). Moreover, the Green–Tao theorem says that for any k there exists a pair of a and b with the property that ${\displaystyle L(n)=an+b}$ is prime for any n from 0 through k − 1. However, the best known result of such type is for k = 26:
${\displaystyle 43142746595714191+5283234035979900n}$
is prime for all n from 0 through 25.[8] It is not even known whether there exists a univariate polynomial of degree at least 2 that assumes an infinite number of values that are prime; see Bunyakovsky conjecture.
## A possible formula using a recurrence relation
Another prime generator is defined by the recurrence relation
${\displaystyle a_{n}=a_{n-1}+\operatorname {gcd} (n,a_{n-1}),\quad a_{1}=7,}$
where gcd(x, y) denotes the greatest common divisor of x and y. The sequence of differences $a_{n+1} - a_{n}$ starts with 1, 1, 1, 5, 3, 1, 1, 1, 1, 11, 3, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 23, 3, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 47, 3, 1, 5, 3, ... (sequence A132199 in the OEIS). Rowland (2008) proved that this sequence contains only ones and prime numbers. However, it does not contain all the prime numbers, since the terms are always odd and so never equal to 2. Nevertheless, in the same paper it was conjectured to contain all odd primes, even though it is rather inefficient (587 is the smallest odd prime not appearing in the first 10,000 terms that are different from 1).[9]
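The recurrence is straightforward to reproduce (a sketch in R; base R has no gcd function, so a recursive one is defined here):

```r
gcd <- function(x, y) if (y == 0) x else gcd(y, x %% y)
a <- numeric(60)
a[1] <- 7
for (n in 2:60) a[n] <- a[n - 1] + gcd(n, a[n - 1])
diff(a)  # 1 1 1 5 3 1 1 1 1 11 3 ... : only 1s and primes appear
```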
Note that there is a trivial program that enumerates all and only the prime numbers, as well as more efficient ones, and so such recurrence relations are more a matter of curiosity than of any practical use.
## References
1. ^ Mackinnon, Nick (June 1987), "Prime Number Formulae", The Mathematical Gazette, 71 (456): 113, doi:10.2307/3616496.
2. ^ Jones, James P.; Sato, Daihachiro; Wada, Hideo; Wiens, Douglas (1976), "Diophantine representation of the set of prime numbers", American Mathematical Monthly, Mathematical Association of America, 83 (6): 449–464, doi:10.2307/2318339, JSTOR 2318339, archived from the original on 2012-02-24.
3. ^ Matiyasevich, Yuri V. (1999), "Formulas for Prime Numbers", in Tabachnikov, Serge, Kvant Selecta: Algebra and Analysis, II, American Mathematical Society, pp. 13–24, ISBN 978-0-8218-1915-9.
4. ^ Jones, James P. (1982), "Universal diophantine equation", Journal of Symbolic Logic, 47 (3): 549–571, doi:10.2307/2273588.
5. ^ Mills, W. H. (1947), "A prime-representing function" (PDF), Bulletin of the American Mathematical Society, 53 (6): 604, doi:10.1090/S0002-9904-1947-08849-2.
6. ^ E. M. Wright (1951). "A prime-representing function". American Mathematical Monthly. 58 (9): 616–618. doi:10.2307/2306356. JSTOR 2306356.
7. ^ Baillie, Robert (5 June 2017). "Wright's Fourth Prime". arXiv: [math.NT].
8. ^ Perichon, Benoît (2010), A World Record AP26 (Arithmetic Progression of 26 primes) (PDF), The AP26 is listed in "Jens Kruse Andersen's Primes in Arithmetic Progression Records page", retrieved 2014-06-25.
9. ^ Rowland, Eric S. (2008), "A Natural Prime-Generating Recurrence", Journal of Integer Sequences, 11: 08.2.8, arXiv:, Bibcode:2008JIntS..11...28R.
https://community.wolfram.com/groups/-/m/t/1536494 | # Avoid "Badly conditioned matrix...contain significant error" message?
Posted 5 months ago | 5 Replies
Hi, I have a matrix representing more than one physical property, and hence a vast difference between the numerical values of its elements, say from 10^12 to 10^-9. I get the output when I do any matrix operation on it, but with an error message: "Result for Inverse of badly conditioned matrix ..... may contain significant error". Perhaps this "error identification" is also affecting the processing speed of Mathematica.

How can I get rid of this error message and tell Mathematica that everything is normal with these numbers? Could there be a better way to deal with such matrices with a large spread in element magnitudes, especially when calculating their inverse, to improve the accuracy? Will appreciate any help.

Thanks,
SG
Posted 5 months ago
Try changing the matrix values to infinite precision using Rationalize.
Posted 5 months ago
I tried using Rationalize[Inverse[Matrix]], but it did not work. Perhaps I am missing something; can you please elaborate further? Following is a simplified example of my matrix, and I am facing the problem while calculating its inverse: E = {{0.768576, -2.62804*10^-11}, {2.44999*10^8, 0.369943}}. Thanks once again.
Posted 5 months ago
    In[5]:= m = {{0.768576, -2.62804*10^-11}, {2.44999*10^8, 0.369943}}
    Out[5]= {{0.768576, -2.62804*10^-11}, {2.44999*10^8, 0.369943}}

    In[6]:= mr = Rationalize[m, 10^-16]
    Out[6]= {{12009/15625, -(1/38051171215)}, {244999000, 369943/1000000}}

    In[7]:= mrinv = Inverse[mr]
    Out[7]= {{43989888852471078125/34575194689676811341, 3125000000/34575194689676811341},
             {-(29132809051574328125000000000/34575194689676811341), 91391303024187000000/34575194689676811341}}

    In[8]:= minv = N @ mrinv
    Out[8]= {{1.2723, 9.03827*10^-11}, {-8.42593*10^8, 2.64326}}

    In[9]:= m.minv
    Out[9]= {{1., 9.12871*10^-23}, {0., 1.}}
http://en.wikipedia.org/wiki/Singularity_theory | # Singularity theory
For other geometric uses, see Singular point of a curve. For other mathematical uses, see Mathematical singularity. For non-mathematical uses, see Singularity (disambiguation).
## The notion of singularity
In mathematics, singularity theory is the study of the failure of manifold structure. A loop of string can serve as an example of a one-dimensional manifold, if one neglects its width. What is meant by a singularity can be seen by dropping it on the floor. Probably there will appear a number of double points, at which the string crosses itself in an approximate 'χ' shape. These are the simplest kinds of singularity. Perhaps the string will also touch itself, coming into contact with itself without crossing, like an underlined 'U'. This is another kind of singularity. Unlike the double point, it is not stable, in the sense that a small push will lift the bottom of the 'U' away from the '_'.
### How singularities may arise
In singularity theory the general phenomenon of points and sets of singularities is studied, as part of the concept that manifolds (spaces without singularities) may acquire special, singular points by a number of routes. Projection is one way, very obvious in visual terms when three-dimensional objects are projected into two dimensions (for example in one of our eyes); in looking at classical statuary the folds of drapery are amongst the most obvious features. Singularities of this kind include caustics, very familiar as the light patterns at the bottom of a swimming pool.
Another way in which singularities arise is by degeneration of manifold structure. That implies the breakdown of parametrization of points; it is prominent in general relativity, where a gravitational singularity, at which the gravitational field is strong enough to change the very structure of space-time, is identified with a black hole. In a less dramatic fashion, the presence of symmetry can be good cause to consider orbifolds, which are manifolds that have acquired 'corners' in a process of folding up resembling the creasing of a table napkin.
## Singularities in algebraic geometry
### Algebraic curve singularities
Historically, singularities were first noticed in the study of algebraic curves. The double point at (0,0) of the curve
$y^2 = x^2 - x^3$
and the cusp at the same point of
$y^2 = x^3$
are qualitatively different, as is seen just by sketching. Isaac Newton carried out a detailed study of all cubic curves, the general family to which these examples belong. It was noticed in the formulation of Bézout's theorem that such singular points must be counted with multiplicity (2 for a double point, 3 for a cusp), in accounting for intersections of curves.
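The double point is easy to visualize by plotting. A sketch in R, using the explicit branches $y = \pm x\sqrt{1-x}$ of the first curve:

```r
x <- seq(-1, 1, length.out = 400)
y <- x * sqrt(1 - x)    # one branch of y^2 = x^2 - x^3 (the other is its negative)
plot(x, y, type = "l", asp = 1)
lines(x, -y)            # the two branches cross at (0, 0), the double point
```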
It was then a short step to define the general notion of a singular point of an algebraic variety; that is, to allow higher dimensions.
### The general position of singularities in algebraic geometry
Such singularities in algebraic geometry are the easiest in principle to study, since they are defined by polynomial equations and therefore in terms of a coordinate system. One can say that the extrinsic meaning of a singular point isn't in question; it is just that in intrinsic terms the coordinates in the ambient space don't straightforwardly translate the geometry of the algebraic variety at the point. Intensive studies of such singularities led in the end to Heisuke Hironaka's fundamental theorem on resolution of singularities (in birational geometry in characteristic 0). This means that the simple process of 'lifting' a piece of string off itself, by the 'obvious' use of the cross-over at a double point, is not essentially misleading: all the singularities of algebraic geometry can be recovered as some sort of very general collapse (through multiple processes). This result is often implicitly used to extend affine geometry to projective geometry: it is entirely typical for an affine variety to acquire singular points on the hyperplane at infinity, when its closure in projective space is taken. Resolution says that such singularities can be handled rather as a (complicated) sort of compactification, ending up with a compact manifold (for the strong topology, rather than the Zariski topology, that is).
## The smooth theory, and catastrophes
At about the same time as Hironaka's work, the catastrophe theory of René Thom was receiving a great deal of attention. This is another branch of singularity theory, based on earlier work of Hassler Whitney on critical points. Roughly speaking, a critical point of a smooth function is where the level set develops a singular point in the geometric sense. This theory deals with differentiable functions in general, rather than just polynomials. To compensate, only the stable phenomena are considered. One can argue that in nature, anything destroyed by tiny changes is not going to be observed; the visible is the stable. Whitney had shown that in low numbers of variables the stable structure of critical points is very restricted, in local terms. Thom built on this, and his own earlier work, to create a catastrophe theory supposed to account for discontinuous change in nature.
### Arnold's view
While Thom was an eminent mathematician, the subsequent fashionable nature of elementary catastrophe theory as propagated by Christopher Zeeman caused a reaction, in particular on the part of Vladimir Arnold.[1] He may have been largely responsible for applying the term singularity theory to the area including the input from algebraic geometry, as well as that flowing from the work of Whitney, Thom and other authors. He wrote in terms making clear his distaste for the too-publicised emphasis on a small part of the territory. The foundational work on smooth singularities is formulated as the construction of equivalence relations on singular points, and germs. Technically this involves group actions of Lie groups on spaces of jets; in less abstract terms Taylor series are examined up to change of variable, pinning down singularities with enough derivatives. Applications, according to Arnold, are to be seen in symplectic geometry, as the geometric form of classical mechanics.
### Duality
An important reason why singularities cause problems in mathematics is that, with a failure of manifold structure, the invocation of Poincaré duality is also disallowed. A major advance was the introduction of intersection cohomology, which arose initially from attempts to restore duality by use of strata. Numerous connections and applications stemmed from the original idea, for example the concept of perverse sheaf in homological algebra.
## Other possible meanings
The theory mentioned above does not directly relate to the concept of mathematical singularity as a value at which a function isn't defined. For that, see for example isolated singularity, essential singularity, removable singularity. The monodromy theory of differential equations, in the complex domain, around singularities, does however come into relation with the geometric theory. Roughly speaking, monodromy studies the way a covering map can degenerate, while singularity theory studies the way a manifold can degenerate; and these fields are linked. | 2013-05-25 02:08:58 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 2, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6547026634216309, "perplexity": 571.568387945094}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368705318091/warc/CC-MAIN-20130516115518-00042-ip-10-60-113-184.ec2.internal.warc.gz"} |
https://www.electro-tech-online.com/threads/help-with-unknown-circuit-type.156982/ | # Help with unknown circuit type
#### Imctkh
##### New Member
Hi there,
I'm trying to repair the remote control for my trolling motor (it's a motorguide Lazer 370RF). The remote is a foot pad with buttons. Inside is a sheet of plastic film with these little white leads running all over it (see picture). The one that supplies power seems to be burnt out. If you try to scrape off the white stuff, there is no metal inside to solder to.
I'm sure if I knew what this setup was called, I could find a video showing me how to repair or bypass the broken lead. However, no combination of words in Google has given me any luck. Anyone know what it's called, or better yet, how to fix it?
Thanks,
Mitch
#### Beau Schwabe
##### Active Member
Do a search for "Silver conductive pen" ... that's how to fix it. You will not be able to solder to it. The next big question is, why would that have burnt out?
#### Nigel Goodwin
##### Super Moderator
Do a search for "Silver conductive pen" ... that's how to fix it. You will not be able to solder to it. The next big question is, why would that have burnt out?
I would suggest it's corroded away, rather than burnt out - quite a common occurrence on those types of remotes.
#### DrG
##### Active Member
I may not be seeing this clearly, but it looks like it has already been repaired once.
I can't tell if there was a trace in the area between my blue lines that came off, or if there is a trace there, but on the other side.
The area in my red outline looks very much like someone repaired it using some copper tape (something like this https://www.amazon.com/Conductive-Shielding-Repellent-Electrical-Grounding/dp/B076H4NPRR/ref=asc_df_B076H4NPRR/?tag=hyprod-20&linkCode=df0&hvadid=216767879473&hvpos=1o1&hvnetw=g&hvrand=14171606063556337347&hvpone=&hvptwo=&hvqmt=&hvdev=c&hvdvcmdl=&hvlocint=&hvlocphy=9007812&hvtargid=pla-386214563660&psc=1 but not as wide). Note how the repair piece has been placed on top of the original trace, with the repair piece being cut diagonally near the connector. Not sure what to make of the rectangular object within my red area - maybe more of such tape.
Do you have continuity from just before the repair to just after the repair? It may not be working for other reasons. If there is poor continuity, you could replace/redo that trace. Silver pen was already mentioned maybe some kind of conductive adhesive as well. Or, maybe I am just not seeing it correctly.
Last edited:
#### Imctkh
##### New Member
Do a search for "Silver conductive pen" ... that's how to fix it. You will not be able to solder to it. The next big question is, why would that have burnt out?
Thanks, I will try out the pen.
The reason it's burnt out is because it sunk in the lake (along with my boat)
To DRG's question - I made the mark inside the red circle. I was scraping at the wire hoping to expose some metal that I could solder to. As far as what's between the blue lines, it's not part of the circuit. It's a feature of the plastic pad that the circuit is stuck to.
#### Nigel Goodwin
##### Super Moderator
The reason it's burnt out is because it sunk in the lake (along with my boat)
Like I said, not 'burnt out' simply corroded.
#### unclejed613
##### Well-Known Member
these are known as "membrane keypads". first time i ever saw one was some kind of video game console in the 1980s. i think it was the "Intellivision"; its controllers were made like this, and there was a part of the keypad that flexed while in use, so eventually the traces would begin breaking. the only "repair" option was ordering a new controller. the breaks always happened on a right-angle fold of the membrane, and using silver paint wasn't a very reliable way of fixing it since that corner was always flexing a bit while the controller was in use.
one possible "fix" for this one would be to build a switchbox with real switches. there's not a big complex set of switches, and only 9 wires.
#### alec_t
##### Well-Known Member
The sixth wire from the left going into the connector at the bottom looks pretty corroded too.
+1 for the new switchbox idea.
#### Imctkh
##### New Member
Initially I thought I would offer to design a small circuit board with tactile push buttons, but looking at your photos I would consider reworking both boards. The main board doesn't appear to be too complex and looks to measure about 2.5 inches x 2.5 inches. It's hard to see what all is under that silicone which BTW is not the greatest thing for copper traces on a PCB. Normally I charge $60/hr for contract work, but I would cut that in half for this project. PM me if you want and we can discuss the details further.
Thanks Beau. I am somewhat determined to fix it myself at the moment, but I'll let you know if I fail.
#### unclejed613
##### Well-Known Member
if you can find some ribbon cable (like what used to be used for IDE hard drive cables) you can use that... from the looks of the connector in the picture that shows the 9V battery clip, it looks like 0.1 inch spacing. ribbon cable with 0.1 inch (2.54 mm) spacing is inexpensive and easy to find.
the best bet will be to carefully remove the connector, and replace it with one end of the ribbon cable. this requires desoldering the connector. if you need to, find a board you don't care about and practice desoldering/soldering on it first. once you get confident that you can desolder without lifting pads or removing a feed-through, then tackle the controller. if you get a desoldering tool, i think the big blue ones are pretty good since they have a large cylinder volume and strong spring, which makes for a stronger vacuum.
under that silicone which BTW is not the greatest thing for copper traces on a PCB
there's silicone RTV compound available that is non-corrosive, and specifically made for use on circuit boards. it doesn't ooze acetic acid (vinegar) like the household RTV compound.
#### Imctkh
##### New Member
Thanks, you are correct on the spacing. I also have the desoldering tool that you described.
Buying this ribbon cable, we'll see how it goes.
#### Imctkh
##### New Member
I just wanted to give a final update on this, as I find it annoying when I read through a forum and the OP never posts their success/failure.
So I had an issue finding the right size ribbon cable. I bought two different ribbon cables, both of which said they had 2.54 mm center spacing. Both of them had 1.27 mm center spacing. After the second time this happened, I gave up and just used every other wire to get the correct spacing.
I soldered the ribbon cable to the board and connected it to the switches you see in the picture to make a switchbox that is kind of like a game controller. It works great, and is actually easier to use than the original foot pedal. It is kind of ugly, but I really care more about functionality.
Thanks for the help everyone!
#### JimB
##### Super Moderator
So I had an issue finding the right size ribbon cable. I bought two different ribbon cables, both of which said they had 2.54 mm center spacing. Both of them had 1.27 mm center spacing. After the second time this happened, I gave up and just used every other wire to get the correct spacing.
The thing is, with ribbon cables of the type which you have, they are intended for use with connectors which have two rows of pins that are 0.1 inch (2.54 mm) apart.
The connectors crimp together, and pins in the connector body pierce the insulation of the appropriate wire in the ribbon.
Anyway, it looks like you have a working solution to your problem, well done.
It is kind of ugly, but I really care more about functionality
Sometimes "ugly construction" is the most effective way when you are only building one of them.
JimB
#### Nigel Goodwin
##### Super Moderator
Looks like a large number of expensive switches
We use them at work, nice switches, but rather pricey.
#### unclejed613
##### Well-Known Member
actually, although it is "ugly construction" it's a very nice job. you might want to put clear paint over the button markings. Sharpie markings tend to come off easily.
https://talkstats.com/threads/r-logistic-regression-coding-to-get-all-predictor-variables-for-categorical-vars.66974/ | # R - Logistic regression. Coding to get all predictor variables for categorical vars
#### willcoop
##### New Member
I'll preface my question by stating that I am new to using R. I am trying to use logistic regression on my data. I typed glm(formula = purchase ~ predictor vars separated by + sign, family=binomial, data=modeldata)
The output does not produce the multiple intercepts and beta coefficients for the up to 10 categorical variables that I am expecting.
I am trying to replicate what I get using a subset of the data on SPSS. SPSS does generate the intercepts for all the levels of the categorical variables that fit the model . SPSS would be too slow to run the entire database.
My questions are:
If R is not reading these variables as categorical, how do I specify them as categorical?
When I get the output from R, will it generate just the final solution after attempting to fit all the predictors? Or will it only include those predictors that have a P(z) < .05?
How do I go about achieving my desired solution? Thanks for your attention.
William Cooper
Last edited:
#### Dason
Re: R - Logistic regression. Coding to get all predictor variables for categorical va
Well what DO you get? It's hard to say what the issue is if we don't have your data and don't have your output. Basically at the moment all we have is "it's not quite giving me what I want" and it's really hard for us to diagnose anything with that amount of information.
#### willcoop
##### New Member
Re: R - Logistic regression. Coding to get all predictor variables for categorical va
When I use the command str(modeldata), I get this:
$ Age          : int 38 36 46 42 68 44 42 54 40 42 ...
$ BankCard     : int 1 0 1 1 1 1 1 1 1 1 ...
$ Cat          : int 0 0 0 0 0 0 1 0 1 1 ...
$ Dog          : int 0 0 0 0 1 0 0 0 1 0 ...
$ DwellingType : int 1 1 1 1 1 1 1 1 1 2 ...
$ Education    : int 0 1 2 0 3 1 2 0 1 1 ...
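A minimal sketch of the usual fix, using the column names from the str() output above (the outcome name purchase comes from the first post; which columns are genuinely categorical is an assumption):

```r
# Declare the integer-coded predictors as categorical so that glm()
# expands each one into dummy variables, one per non-reference level.
modeldata$DwellingType <- factor(modeldata$DwellingType)
modeldata$Education    <- factor(modeldata$Education)

fit <- glm(purchase ~ Age + BankCard + Cat + Dog + DwellingType + Education,
           family = binomial, data = modeldata)

# glm() fits and reports ALL supplied predictors, regardless of p-values;
# summary() then shows one coefficient per non-reference factor level.
summary(fit)
```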
#### gianmarco
##### TS Contributor
Re: R - Logistic regression. Coding to get all predictor variables for categorical va
Once you have properly prepared your data, you may want to use an R function I have put together, which allows you to visually display the fitted model's results (i.e., betas and ORs). It also allows you to plot some of the model's diagnostics.
The function is described here: http://cainarchaeology.weebly.com/r-function-for-binary-logistic-regression.html
Short video tutorial here:
In the same site, a couple of functions are also available to perform LR internal validation.
Hope this helps.
Best
gm
#### willcoop
##### New Member
Re: R - Logistic regression. Coding to get all predictor variables for categorical va
Thanks to everyone for their help. Especially for the video, that's the best resource. | 2022-10-06 03:52:51 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.2896549701690674, "perplexity": 1045.1361007302257}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337723.23/warc/CC-MAIN-20221006025949-20221006055949-00088.warc.gz"} |
https://lavelle.chem.ucla.edu/forum/viewtopic.php?f=14&t=49391&p=177520 | ## Midterm formulas
$c=\lambda \nu$
Emily Mendez 4C
Posts: 33
Joined: Wed Nov 14, 2018 12:19 am
### Midterm formulas
Hey guys! Does anyone know if we will be given a worksheet with all the formulas once again like they provided with Test 1 for the midterm?
Christineg1G
Posts: 115
Joined: Fri Aug 09, 2019 12:15 am
Been upvoted: 1 time
### Re: Midterm formulas
I believe it will be the same as our first test, where we were given formulas as well as the periodic table.
Yiyang Jen Wang 4G
Posts: 76
Joined: Wed Nov 21, 2018 12:18 am
### Re: Midterm formulas
Yes, I believe we will get formula sheet and periodic table just like we had for the test.
pmokh14B
Posts: 107
Joined: Sat Aug 17, 2019 12:15 am
### Re: Midterm formulas
I think we get all the information we had on the 1st test.
ZainAlrawi_1J
Posts: 71
Joined: Sat Aug 24, 2019 12:16 am
### Re: Midterm formulas
The formula sheet is available to print out on Lavelle's site.
Donna Nguyen 2L
Posts: 100
Joined: Sat Aug 24, 2019 12:17 am
### Re: Midterm formulas
Yes, I believe that we will be given the formula sheet and the periodic table.
Areli C 1L
Posts: 95
Joined: Wed Nov 14, 2018 12:19 am
yes ma'am :) | 2020-08-04 17:31:39 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 1, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5110565423965454, "perplexity": 9920.319635031887}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-34/segments/1596439735881.90/warc/CC-MAIN-20200804161521-20200804191521-00147.warc.gz"} |
https://neurips.cc/Conferences/2021/ScheduleMultitrack?event=28547
Poster
Private and Non-private Uniformity Testing for Ranking Data
Róbert Busa-Fekete · Dimitris Fotakis · Emmanouil Zampetakis
Thu Dec 09 08:30 AM -- 10:00 AM (PST)
We study the problem of uniformity testing for statistical data that consists of rankings over $m$ items, where the alternative class is restricted to Mallows models with a single parameter. Testing ranking data is challenging because of the size of the domain, which is factorial in $m$; the tester therefore needs to take advantage of some structure of the alternative class. We show that the uniform distribution can be distinguished from a Mallows model with $O(m^{-1/2})$ samples based on simple pairwise statistics, which allows us to test uniformity using only two samples if $m$ is large enough. We also consider uniformity testing under central and local differential privacy (DP) constraints. We present a central DP algorithm that requires $O\left(\max \{ 1/\epsilon_0, 1/\sqrt{m} \} \right)$ samples, where $\epsilon_0$ is the privacy budget parameter. Interestingly, our uniformity testing algorithm is straightforward to apply in the local DP scenario by its nature, since it works with binary statistics extracted from the ranking data. We carry out large-scale experiments, including $m=10000$, to show that these testing algorithms scale very gracefully with the number of items.
https://math.stackexchange.com/questions/2876248/have-i-used-induction-correctly-in-this-proof-of-xy-implies-xnyn | # Have I used induction correctly in this proof of $x<y \implies x^n<y^n$?
A while ago I posted an attempt at a proof of $x<y \iff x^n<y^n$. It was pointed out that I hadn't actually used induction, and had instead done a direct proof. Below is the link to the question, so please do not mark this question as a duplicate, as this new question is about whether I have now done the proof by induction correctly, rather than accidentally reverting to a direct proof.
Is this proof of $x<y \iff x^n < y^n$ correct?
Also, be aware that I am, below, attempting to prove only that $x<y \implies x^n<y^n$.
Claim: $x<y \implies x^n<y^n$ for $x,y>0$ and $x,y,n \in \mathbb N$.
Proof:
Let $P(n)$ be the statement that $$x<y \implies x^n<y^n,$$ for $n\in \mathbb N$. It is clear that $P(n)$ holds for $n=1$ since $x<y \implies x<y$.
Assuming that $P(n)$ holds for some $n = k$, we see that this implies that $P(n)$ holds for $n=k+1$, as follows.
$$x<y \implies x^n<y^n$$
Since we know that $x<y$, if we multiply $x^n<y^n$ by $x$ we get that: $$x^{n+1} < xy^n,$$ from which it follows that $$x^{n+1} < y^{n+1}.$$
Thus $P(k)$ true $\implies$ $P(k+1)$ true, and so by induction we can prove the claim that $P(n)$ holds for all $n \in \mathbb N$.
• I think the proof is okay, I can't find anything wrong. – Anik Bhowmick Aug 8 '18 at 15:51
• probably $n\in \mathbb N$ in the claim – Exodd Aug 8 '18 at 15:51
• @Exodd I have made this change. – Benjamin Aug 8 '18 at 15:52
• @AnikBhowmick I feel a bit uncomfortable with the step where I multiply by $x$, since I (a) feel like I am ignoring the LHS and (b) am not sure how this exactly relies on the fact that P(n) is true. – Benjamin Aug 8 '18 at 15:53
• (a) Since $x>0$, it's completely okay. There is no fact of ignoring the LHS. (b) That's the statement of mathematical induction, right ?? If $P(K+1)$ is true whenever $P(K)$ is true, then $P(n)$ is true $\forall n \in \mathbb N$ !! Where is the ambiguity ?? – Anik Bhowmick Aug 8 '18 at 15:59
In my opinion you should work out more explicitly where and how you use the inductive claim (I.C.).
$x^{n+1}=x\cdot x^n\stackrel{I.C}{<}x\cdot y^n\stackrel{x<y}{<}y\cdot y^n=y^{n+1}$
• I am not sure I follow your superscript notation, could you possibly explain that in more detail? – Benjamin Aug 8 '18 at 15:54
• Sure: The superscript $I.C$ notes, that this estimation uses the inductive claim. The superscript $x<y$ notes, that we use for this estimation, that $x<y$ by assumption. Is it clear now? – Cornman Aug 8 '18 at 15:55
• Yes that makes it clear, so long as by Inductive Claim you mean assuming $P(n)$ is true for $k$? – Benjamin Aug 8 '18 at 15:57
• The inductive claim $P(n)$ is, that the estimation $x^n<y^n$ holds for arbitrary (but fixed) $n\in\mathbb{N}$. You do not need to involve $k$, as José Carlos Santos pointed out. – Cornman Aug 8 '18 at 15:59
It is correct. Two remarks, though:
1. There is no need to use two letters ($n$ and $k$). One is enough.
2. Indeed, it follows from $x^{n+1}<xy^n$ that $x^{n+1}<y^{n+1}$, but you did not say why. This is where you use the fact that $x<y$.
• Should I then have made more clear that because $x<y$ it is the case that $x^{n+1} < xy^n < y^{n+1}$? – Benjamin Aug 8 '18 at 15:56
• @Benjamin Yes, you should. – José Carlos Santos Aug 8 '18 at 15:57 | 2021-05-17 00:11:09 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8784292936325073, "perplexity": 297.41938929057505}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-21/segments/1620243991921.61/warc/CC-MAIN-20210516232554-20210517022554-00279.warc.gz"} |
http://math.stackexchange.com/tags/tensors/new | # Tag Info
1
If $\vec{g}=\left[\begin{array}{c}f_1\\\vdots\\f_m\end{array}\right]$ then the derivative of $\vec{g}$ is the matrix $$J\vec{g}=\left[\begin{array}{c}\nabla f_1\\\vdots\\\nabla f_m\end{array}\right],$$ which is an $m\times n$ - rectangular array. In components, you would see it as $$J\vec{g}=\left[\dfrac{\partial f_i}{\partial x_j}\right],$$ where $i$ is ...
1
$\delta_{ij}=\begin{cases}1 & \text{if } i=j \\ 0 & \text{otherwise}\end{cases}$ (https://en.m.wikipedia.org/wiki/Kronecker_delta)
$\epsilon_{ijk}=\begin{cases} \operatorname{sgn}(ijk) & \text{as a permutation, if } i,j,k \text{ are different} \\ 0 & \text{otherwise}\end{cases}$ (https://en.m.wikipedia.org/wiki/Levi-Civita_symbol)
And of course repeated ...
1
First, it's pretty clear that the field $K$ does not matter in any way (if you know how to prove things for $\mathbb{R}$, then just check that you don't use any special property of this field). Then, be careful : the statement for tensor algebras is already false for finite-dimensional vector spaces : if $V$ has finite dimension, $T(V^*)$ has countable ...
1
Given that I prefer index notation, that presents less ambiguities, I would write expression 1 as: $$\mathbf{u}(\mathbf{u}\cdot\nabla)\rho+\rho(\mathbf{u}\cdot\nabla)\mathbf{u}+\rho\mathbf{u}(\nabla\cdot\mathbf{u})$$ and expression 2 as $$\nabla\cdot(\rho\mathbf{u}\otimes\mathbf{u})$$
1
Why do you think that this would be correct? You are differentiating completely different objects on the left and right hand side. $g$ is not the same as $R$ (I'm omitting the bars). Check this site (scroll down to 'Lie derivative' of tensor fields) to check how to calculate $${\frak{L}}_V (R(X, Z, Z, Y))$$ (which results in $({\frak{L}}_V R)(X, Z, Z, \dots)$ ...
5
So, the vast majority of manifolds are not locally symmetric. The most obvious example that comes to mind is the standard torus in 3-space. This has positive curvature just to the left of the topmost circle, and negative curvature just to the right, and hence there cannot possibly be a geodesic symmetry at any of the points on that top circle. Similarly with ...
1
We can add something to the already excellent answer given by Omnomnomnom. The wonderful Gravitation by Misner, Thorne and Wheeler (affectionately called MTW) on page 75 describes the most general $(m,n)$ tensor as: "a linear machine with $n$ input slots for $n$ 1-forms and $m$ input slots for $m$ vectors; given the requested input, it puts out a real number..." ...
0
The colon represents a contraction over the same tensor, hence $$|T_{i,:}|=\sqrt{\sum_{j}T_{ij}T_{ij}}\,.$$
0
The correct form is $$T_{j_1,\dots,j_p}^{i_1,\dots,i_q}\,e_{i_1}\otimes\dots\otimes e_{i_p}\otimes\varepsilon^{j_1}\otimes\dots\otimes\varepsilon^{j_q}.$$ After contraction, $$T_{j_1,\dots,j_{p-1},k}^{i_1,\dots,i_{q-1},k}\,e_{i_1}\otimes\dots\otimes e_{i_{p-1}}\otimes\varepsilon^{j_1}\otimes\dots\otimes\varepsilon^{j_{q-1}}.$$ The wikipedia is a useful ...
1
A tensor is just a multilinear, scalar-valued function. If I write $V \multimap W$ for the collection of linear functions from a vector space $V$ to a vector space $W$, then, over the reals, a rank-$n$ tensor is just $V^{\otimes n}\multimap\mathbb R$, where $V^{\otimes n}$ means the $n$-fold tensor product, e.g. $V^{\otimes 2} = V\otimes V$. A tensor field ...
3
This seems like mainly a question about what the abstract index notation is asking you to do. Let's look at a single term from the right hand side: $\nabla_X\nabla_YZ$. Recall that in abstract index notation, two tensors juxtaposed signifies their tensor product. For example, if $V^a$ and $W^b$ are vector fields, $V^aW^b$ is supposed to mean the tensor field $V$ ...
3
The start of the computation is OK, but the partial derivative does not make sense. I think the problems come from the fact that you misinterpret $\nabla_c\nabla_dZ^a$. Evaluating the second covariant derivative $\nabla^2Z$ on $X$ and $Y$, you do not get $\nabla_X\nabla_Y Z$. What you have to do is to differentiate $\nabla Z$ as a $\binom11$-tensor field. ...
5
Here's the quick way to describe what's going on: In linear algebra, you learned primarily about linear transformations. In particular, $T:V \to W$ is a function that takes a single vector, and produces another vector in a linear way. In particular $T$ satisfies $T(ax + by) = aT(x) + bT(y)$. It turns out that linear transformations can naturally be ...
3
It seems you are missing some necessary background, namely, how to extend a connection on $TM$ to all tensor bundles. I will summarise this construction. Given a connection \begin{align*} \nabla : \Gamma(TM) \times \Gamma(TM) &\to \Gamma(TM)\\ (X, Y) &\mapsto \nabla_XY \end{align*} on $TM$, there is an associated connection (which I will also ...
1
Rank one tensors, on a vector space $V$ over the scalar field $\Bbb F$, are linear maps $$V\to\Bbb F \quad\text{and}\quad V^*\to\Bbb F,$$ where $V^*$ is the dual space of $V$.
Rank two tensors are bilinear maps $$V\times V\to\Bbb F,\qquad V^*\times V\to\Bbb F,\qquad V^*\times V^*\to\Bbb F.$$ Rank three tensors are trilinear maps $$V\times V\times V\to\Bbb F,$$ ...
3
Note that $$(D\theta)(X, Y) = (\nabla\theta)(X, Y) = (\nabla_X\theta)(Y) = X(\theta(Y)) - \theta(\nabla_XY).$$ So the skew-symmetric part of $(D\theta)(X, Y)$ is \begin{align*} \frac{1}{2}[(D\theta)(X, Y) - (D\theta)(Y, X)] &= \frac{1}{2}[X(\theta(Y)) - \theta(\nabla_XY) - Y(\theta(X)) + \theta(\nabla_YX)]\\ &= \frac{1}{2}[X(\theta(Y)) - \dots \end{align*} ...
0
It means the same thing: $\nabla g=0$ is equivalent to saying that $g$ is parallel relative to the connection $\nabla$, which is equivalent to saying that $Xg(Y,Z)=g(\nabla_XY,Z)+g(Y,\nabla_XZ)$. It is the answer: the covariant derivative of the $n$-tensor $T$ is defined by $$\nabla_XT(X_1,\dots,X_n)=X.T(X_1,\dots,X_n)-\sum_i T(X_1,\dots,\nabla_XX_i,\dots,X_n)$$
1
Since $\epsilon$ and $\nabla_b \epsilon$ (for fixed $b$) are both antisymmetric, the only non-zero terms in the sum $$\epsilon^{a_1 \cdots a_n} \nabla_b \epsilon_{a_1 \cdots a_n}$$ will be those where $a_1\cdots a_n$ is a permutation of $1 \cdots n$. Moreover, if we permute the indices back to this order in each term, the signs that each $\epsilon$ pick up ...
0
That is correct. If $T$ is a smooth $(p, q)$-tensor field on $M$, then $T$ is a multilinear map $$T : \underbrace{\Omega^1(M)\times\dots\times\Omega^1(M)}_{p\ \text{copies}} \times\underbrace{\mathfrak{X}(M)\times\dots\times\mathfrak{X}(M)}_{q\ \text{copies}} \to C^{\infty}(M)$$ where $\Omega^1(M)$ is the collection of covector fields (i.e. one-forms) on ...
2
There are several misconceptions in the OP about both mathematicians' and physicists' use of the word "vector", and even about what scalars and tensors are. To keep this a concise overview I'll be linking to fuller explanations. Firstly, anything you've heard about magnitude and direction was just an attempt to help schoolchildren avoid certain fallacies ...
2
A vector is probably not just a tuple of scalars unless your definition of "tuple" is very very broad (and you also probably need to assume some extras like AC). In general a vector space is quite a bit more abstract. A first easy example of a vector space where the vectors are not really "tuples" for most definition of what a "tuple" is would be functions ...
0
Let \begin{eqnarray*} b_1&=&e_1+2e_2,\\ b_2&=&e_1+3e_2 \end{eqnarray*} be your change of basis, which in matrix form is $[B]=\left[\begin{array}{cc} 1&1\\ 2&3\end{array}\right]$ and whose inverse is $[B]^{-1}= \left[\begin{array}{cc} 3&-1\\ -2&1 \end{array}\right]$. Now if you want to know which are the new components of, ...
-1
The tensor contraction takes an $(r,s)$ tensor $\mathbf{T}^{i_1,\ldots,i_r}_{j_1,\ldots,j_s}$ and produces an $(r-1,s-1)$ tensor. For your example, you contract the $(3,1)$ tensor $\mathbf{a}^{ij}\mathbf{\delta}^k_\ell$, so you cannot really say that you obtain the same tensor after a contraction. Let us write in local coordinates ...
Top 50 recent answers are included | 2016-04-29 22:35:32 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9638044834136963, "perplexity": 845.6773223808211}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-18/segments/1461860111455.18/warc/CC-MAIN-20160428161511-00095-ip-10-239-7-51.ec2.internal.warc.gz"} |
https://www.gamedev.net/forums/topic/652701-generating-random-points-in-a-sphere/ | # Generating random points in a sphere
## Recommended Posts
I am implementing volumetric cloud rendering.
So I need to generate a lot of particles in shape of a cloud.
Right now I am using this formula:
for I:=1 To ParticleCount Do
Basically, it picks 3 random coordinates inside the defined cloud radius range.
The problem is, clouds generated with this formula look like cubes instead of spheres.
I tried looking in google for ways to generate points in a sphere, but I could only find ways to generate points in the surface of a sphere, not in the full volume.
Any suggestions for how to change the formula to produce spheres instead of cubes?
Once I can generate spheres I can approximate a cloud shape by using many spheres.
But if there is a better method to generate a cloud shape, tell me :)
##### Share on other sites
Two options:
(1) Write a do-while loop around your random point picking, so you keep generating random points until one inside the sphere is generated.
(2) Pick a random point on the surface of the sphere and then pick a random number between 0 and 1 which will be used to interpolate between the center of the sphere and your point at the surface. In order to get an even distribution of points, your random number should be the cubic root of a uniformly-distributed number.
Edited by Álvaro
##### Share on other sites
Thanks, that seems a good idea, I'll try it :)
##### Share on other sites
Might also make sense to switch over to a spherical coordinate system. Pick a random value from 0 - 360 for X, 0 - 360 for y, and a random value from 0 - radius for z. Then convert that to the cartesian coordinates.
http://en.wikipedia.org/wiki/Spherical_coordinate_system
It's just what I would do, you don't necessarily have to.
##### Share on other sites
Might also make sense to switch over to a spherical coordinate system. Pick a random value from 0 - 360 for X, 0 - 360 for y, and a random value from 0 - radius for z. Then convert that to the cartesian coordinates.
http://en.wikipedia.org/wiki/Spherical_coordinate_system
It's just what I would do, you don't necessarily have to.
How do you pick those random values so the resulting distribution of points in the sphere is uniform? That's precisely what this thread is about. If you do the naive thing you just described, you'll end up with too many points near the center of the sphere and too many points near the poles, or something like that (not sure what "x" and "y" are in your post to know for sure).
##### Share on other sites
I tested his approach now, well, it seems to work fine, the distribution might not be uniform but is enough for this case.
Also tested the do/while approach, but using spherical coordinates seems faster
##### Share on other sites
Note that spherical coordinates will cause clustering at poles, which is most probably not what you want. A slightly more complicated approach is needed to properly pick points on a sphere (which, if the radius is also random, will give points inside the sphere).
On the other hand, the solution of generating points in the unit cube (or in the "radius cube", does that word exist?) and discarding points for which dot(point, point) > distance_squared is much easier and works well, too.
Edited by samoth
##### Share on other sites
Here is the code that I use to generate uniform random points on a sphere (which is derived from the wolfram link posted above):
float u1 = randomRange( -1.0f, 1.0f );  // cosine of the polar angle, uniform in [-1, 1]
float u2 = randomRange( 0.0f, 1.0f );
float r = sqrt( 1.0f - u1*u1 );  // radius of the horizontal circle at height u1
float theta = 2.0f*math::pi<float>()*u2;  // uniform azimuth
return Vector3( r*cos( theta ), r*sin( theta ), u1 );  // unit vector, uniform on the sphere
##### Share on other sites
Here is the code that I use to generate uniform random points on a sphere (which is derived from the wolfram link posted above):
float u1 = randomRange( -1.0f, 1.0f );
float u2 = randomRange( 0.0f, 1.0f );
float r = sqrt( 1.0f - u1*u1 );
float theta = 2.0f*math::pi<float>()*u2;
return Vector3( r*cos( theta ), r*sin( theta ), u1 );
Thanks, it seems to work great :)
##### Share on other sites
Might also make sense to switch over to a spherical coordinate system. Pick a random value from 0 - 360 for X, 0 - 360 for y, and a random value from 0 - radius for z. Then convert that to the cartesian coordinates.
http://en.wikipedia.org/wiki/Spherical_coordinate_system
It's just what I would do, you don't necessarily have to.
How do you pick those random values so the resulting distribution of points in the sphere is uniform? That's precisely what this thread is about. If you do the naive thing you just described, you'll end up with too many points near the center of the sphere and too many points near the poles, or something like that (not sure what "x" and "y" are in your post to know for sure).
Indeed, a spherical coordinate system would create clustering due to the effects of gimbal lock. The same solutions to that problem can be used to overcome this (vector algebra / quaternions)
But yes ultimately the simplest, and quite efficient solution is the rejection method where you try again if the point is outside of the radius. Do a squared-radius check of course, to avoid square roots inside the loop.
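Pulling the thread together, here is a minimal C++ sketch of both approaches for points inside the sphere; randomRange and Vector3 are assumptions that mirror the snippet quoted above:

```cpp
#include <cmath>

float randomRange(float lo, float hi);   // assumed: uniform float in [lo, hi]
struct Vector3 { float x, y, z; };       // assumed: simple 3-component vector

// (1) Rejection method: sample the enclosing cube, keep points inside the sphere.
Vector3 randomInSphereRejection(float radius)
{
    for (;;)
    {
        Vector3 p = { randomRange(-radius, radius),
                      randomRange(-radius, radius),
                      randomRange(-radius, radius) };
        if (p.x*p.x + p.y*p.y + p.z*p.z <= radius*radius)  // squared check, no sqrt
            return p;
    }
}

// (2) Direct method: uniform direction (as in the snippet above) scaled by
// radius * cbrt(u); the cube root makes the distribution uniform in volume.
Vector3 randomInSphereDirect(float radius)
{
    float u1 = randomRange(-1.0f, 1.0f);                  // cos of polar angle
    float u2 = randomRange( 0.0f, 1.0f);
    float r  = std::sqrt(1.0f - u1*u1);
    float theta = 2.0f * 3.14159265f * u2;                // uniform azimuth
    float s  = radius * std::cbrt(randomRange(0.0f, 1.0f));
    return { s*r*std::cos(theta), s*r*std::sin(theta), s*u1 };
}
```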
https://math.stackexchange.com/questions/669407/spivak-calculus-chapter-13-question-28 | # Spivak Calculus Chapter 13 Question 28.
Prove that if $f$ is integrable on $[a,b]$ then for any $\varepsilon>0$ there is a continuous function $g \le f$ such that $\int_a^b (f-g)<\varepsilon$.
Thing is, I've already gotten a step function that works from a previous exercise $(s_1\le f)$, but I can't for the life of me get a continuous one. I was thinking something along the lines of a linear function that would zigzag around each step of $s$ three times (ie: one above $x^2$, one below $x^2$, and another above $x^2$)
• If you've managed to approximate an integrable function from below using a step function, all you need to do is approximate a step function from below using a continuous one. Can you do that? – Jonathan Y. Feb 9 '14 at 10:39
The key point is that if you already have a step function whose integral is close to that of $f$, you can 'smoothen' each step by a small amount to get a continuous function, and you can make the shavings arbitrarily small because there are finitely many steps. You could use straight line segments to accomplish this, but in fact $g$ can even be made infinitely differentiable if you use bump functions.
By what you showed, take a step function $s \le f$ such that $\int_{a}^bf -\int_{a}^b s < \varepsilon/2$, and since $f$ is integrable it is bounded, so take $M$ such that $|f(x)|\le M$. $s$ is constant on some intervals of the form $(j_{i-1},j_i)$, $i=1,\ldots,n$, for some $n$. Let $\delta>0$ and define $g=s$ on $[j_{i-1}+\delta /2,\,j_{i}-\delta /2]$, and on $[j_i-\delta /2,j_{i}]$ and $[j_i,j_{i}+\delta /2]$ let $g$ be a straight line with $g(j_i)=-M$. So now we have $g\le s$ and $$\int_{a}^bs-\int_{a}^bg<nM\delta.$$ Take $\delta<\varepsilon/2nM$; this gives $\int_{a}^bs-\int_{a}^bg<nM\delta<\varepsilon /2$. Hence $$\int_{a}^bf-\int_{a}^bg = \left(\int_{a}^bf-\int_{a}^bs\right)+\left(\int_{a}^bs-\int_{a}^bg\right) < \varepsilon/2+\varepsilon/2=\varepsilon.$$
https://web.unican.es/portal-investigador/publicaciones/detalle-publicacion?p=ART13092 | # Researchers portal
## Measurements of production cross sections of polarized same-sign W boson pairs in association with two jets in proton-proton collisions at $\sqrt{s}=13$ TeV
Abstract: The first measurements of production cross sections of polarized same-sign W boson pairs in proton-proton collisions are reported. The measurements are based on a data sample collected with the CMS detector at the LHC at a center-of-mass energy of 13 TeV, corresponding to an integrated luminosity of …. Events are selected by requiring exactly two same-sign leptons, electrons or muons, moderate missing transverse momentum, and two jets with a large rapidity separation and a large dijet mass to enhance the contribution of same-sign W boson scattering events. An observed (expected) 95% confidence level upper limit of 1.17 (0.88) fb is set on the production cross section for longitudinally polarized same-sign W boson pairs. The electroweak production of same-sign W boson pairs with at least one of the W bosons longitudinally polarized is measured with an observed (expected) significance of 2.3 (3.1) standard deviations.
Other publications from the same journal or conference with authors from the Universidad de Cantabria
Source: Physics Letters B, Volume 812, 10 January 2021, 136018
Publisher: Elsevier
Year of publication: 2021
Number of pages: 27
Publication type: Journal article
ISSN: 0370-2693, 1873-2445
Publication URL: https://doi.org/10.1016/j.physletb.2020.136018
### Authors
THE CMS COLLABORATION
PEDRO JOSE FERNANDEZ MANTECA
CEDRIC GERALD PRIEELS
FRANCESCA SHUN-NING ANNAROSA RICCI-TAM | 2023-01-28 00:49:03 | {"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.824221670627594, "perplexity": 7203.561603072036}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-06/segments/1674764499468.22/warc/CC-MAIN-20230127231443-20230128021443-00863.warc.gz"} |
https://forum.allaboutcircuits.com/threads/timer0-register-low-byte-and-high-byte-pic18f4550.153210/ | # Timer0 Register Low Byte and high byte(PIC18f4550)
#### khatus
Joined Jul 2, 2018
89
This is the code generated by the mikroC timer calculator for a 10 ms delay using Timer0 (I'm using a PIC18F4550).
//Timer0
//Prescaler 1:1; TMR0 Preload = 15536; Actual Interrupt Time : 10 ms
//Place/Copy this part in declaration section
void InitTimer0()
{
T0CON = 0x88;
TMR0H = 0x3C;
TMR0L = 0xB0;
GIE_bit = 1;
TMR0IE_bit = 1;
}
void Interrupt()
{
if (TMR0IF_bit){
TMR0IF_bit = 0;
TMR0H = 0x3C;
TMR0L = 0xB0;
}
}
My question is: why are the Timer0 low-byte and high-byte registers loaded in two places?
1. Inside the InitTimer0() function?
2. Inside the Interrupt() function?
#### AlbertHall
Joined Jun 4, 2014
11,238
So that the first interrupt will be after the same delay as all the others.
Without that setting in InitTimer0() the first interrupt time is indeterminate.
#### spinnaker
Joined Oct 29, 2009
7,835
This is the code generated by the mikroC timer calculator for a 10 ms delay using Timer0 (I'm using a PIC18F4550).
//Timer0
//Prescaler 1:1; TMR0 Preload = 15536; Actual Interrupt Time : 10 ms
//Place/Copy this part in declaration section
void InitTimer0()
{
T0CON = 0x88;
TMR0H = 0x3C;
TMR0L = 0xB0;
GIE_bit = 1;
TMR0IE_bit = 1;
}
void Interrupt()
{
if (TMR0IF_bit){
TMR0IF_bit = 0;
TMR0H = 0x3C;
TMR0L = 0xB0;
}
}
My question is: why are the Timer0 low-byte and high-byte registers loaded in two places?
1. Inside the InitTimer0() function?
2. Inside the Interrupt() function?
The register counts down to zero (if memory serves) and the interrupt is triggered. So the register needs to be loaded again with the required interrupt time inside the interrupt. I've been away from this for a while so I can't remember for sure whether it counts up or down, but either way it needs to be reset.
What I do is to set a #define macro, so if I need to change the value, I don't need to remember to do it in two places.
Example
Code:
#define TMR0H_VAL 0x3C
#define TMR0L_VAL 0xB0
void InitTimer0()
{
T0CON = 0x88;
TMR0H =TMR0H_VAL;
TMR0L =TMR0L_VAL;
GIE_bit = 1;
TMR0IE_bit = 1;
}
void Interrupt()
{
if (TMR0IF_bit){
TMR0IF_bit = 0;
TMR0H = TMR0H_VAL;
TMR0L =TMR0L_VAL;
}
[/code]
#### AlbertHall
Joined Jun 4, 2014
11,238
It counts up and it is the rollover to zero which sets the interrupt flag.
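As a side note, a quick sketch of where the 0x3C / 0xB0 preload comes from; the 20 MHz oscillator is an assumption deduced from mikroC's numbers, not stated in the thread:

```c
/* Timer0 ticks at Fosc/4 = 5 MHz (0.2 us per tick) with the 1:1 prescaler,
   assuming Fosc = 20 MHz:
     preload           = (0x3C << 8) | 0xB0 = 15536
     ticks to rollover = 65536 - 15536      = 50000
     interrupt period  = 50000 * 0.2 us     = 10 ms                        */
#define TICKS_NEEDED  50000u
#define TMR0_PRELOAD  (65536u - TICKS_NEEDED)   /* = 15536 = 0x3CB0 */
```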
#### spinnaker
Joined Oct 29, 2009
7,835
It counts up and it is the rollover to zero which sets the interrupt flag.
Thanks for refreshing my memory. After I posted I had thought I had gotten it backward.
But either way, since it rolls over to zero, the register needs to be loaded initially and reloaded every time the interrupt occurs.
https://pgaleone.eu/tensorflow/2018/07/28/understanding-tensorflow-tensors-shape-static-dynamic/ | Describing computational graphs is just a matter of connecting nodes correctly. Connecting nodes seems a trivial operation, but it hides some difficulties related to the shape of tensors. This article will guide you through the concept of tensor’s shape in both its variants: static and dynamic.
## Tensors: the basic
Every tensor has a name, a type, a rank and a shape.
• The name uniquely identifies the tensor in the computational graphs (for a complete understanding of the importance of the tensor name and how the full name of a tensor is defined, I suggest the reading of the article Understanding Tensorflow using Go).
• The type is the data type of the tensor, e.g.: a tf.float32, a tf.int64, a tf.string, …
• The rank, in the Tensorflow world (that's different from the mathematics world), is just the number of dimensions of a tensor, e.g.: a scalar has rank 0, a vector has rank 1, …
• The shape is the number of elements in each dimension, e.g.: a scalar has a rank 0 and an empty shape (), a vector has rank 1 and a shape of (D0), a matrix has rank 2 and a shape of (D0, D1) and so on.
So you might wonder: what’s difficult about the shape of a tensor? It just looks easy, is the number of elements in each dimension, hence we can have a shape of () and be sure to work with a scalar, a shape of (10) and be sure to work with a vector of size 10, a shape of (10,2) and be sure to work with a matrix with 10 rows and 2 columns. Where’s the difficulty?
## Tensor’s shape
The difficulties (and the cool stuff) arise when we dive deep into the Tensorflow peculiarities, and we find out that there's no constraint on the definition of the shape of a tensor. Tensorflow, in fact, allows us to represent the shape of a Tensor in 4 different ways:
1. Fully-known shape: that are exactly the examples described above, in which we know the rank and the size for each dimension.
2. Partially-known shape: in this case, we know the rank, but we have an unknown size for one or more dimensions (everyone that has trained a model in batch is aware of this: when we define the input we just specify the feature vector shape, leaving the batch dimension set to None, e.g.: (None, 28, 28, 1)).
3. Unknown shape and known rank: in this case we know the rank of the tensor, but we don’t know any of the dimension value, e.g.: (None, None, None).
4. Unknown shape and rank: this is the toughest case, in which we know nothing about the tensor; the rank nor the value of any dimension.
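To make the cases concrete, here is a tiny sketch (TensorFlow 1.x graph mode; the placeholder is just for illustration):

```python
import tensorflow as tf

# Case 2: partially-known shape; rank 4 is fixed, the batch dimension is left free.
x = tf.placeholder(tf.float32, shape=(None, 28, 28, 1))
print(x.shape)      # (?, 28, 28, 1) -> static shape, known at definition time
print(tf.shape(x))  # a 1-D int32 Tensor -> dynamic shape, known only at run time
```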
Tensorflow, when used in its non-eager mode, separates the graph definition from the graph execution. This allows us to first define the relationships among nodes and only afterwards execute the graph.
When we define a ML model (but the reasoning holds for a generic computational graph) we define the network parameters completely (e.g. the bias vector shape is fully defined, as is the number of convolutional filters and their shape), hence we are in the case of a fully-known shape definition.
But at graph execution time, instead, the relationships among tensors (not among the network parameters, which remain constant) can be extremely dynamic.
To completely understand what happens at graph definition and execution time, let's say we want to define a simple encoder-decoder network (that's the base architecture for convolutional autoencoders / semantic segmentation networks / GANs and so on…) and let's define it in the most general possible way.
## encoder-decoder network architecture
This network accepts as input an image of any depth (1 or 3 channels) and with any spatial extent (height, width). I'm going to use this network architecture to show you the concepts of static and dynamic shapes and how much information about the shapes of the tensors and of the network parameters we can get and use at both graph definition time and execution time.
```python
inputs_ = tf.placeholder(tf.float32, shape=(None, None, None, None))
depth = tf.shape(inputs_)[-1]

# Accept only 1- or 3-channel images; convert grayscale inputs to RGB.
with tf.control_dependencies([
        tf.Assert(
            tf.logical_or(tf.equal(depth, 3), tf.equal(depth, 1)), [depth])
]):
    inputs = tf.cond(
        tf.equal(tf.shape(inputs_)[-1], 3), lambda: inputs_,
        lambda: tf.image.grayscale_to_rgb(inputs_))
inputs.set_shape((None, None, None, 3))

# Encoder
layer1 = tf.layers.conv2d(
    inputs, 32, kernel_size=(3, 3), strides=(2, 2),
    activation=tf.nn.relu, name="layer1")
layer2 = tf.layers.conv2d(
    layer1, 32, kernel_size=(3, 3), strides=(2, 2),
    activation=tf.nn.relu, name="layer2")
encode = tf.layers.conv2d(
    layer2, 10, kernel_size=(6, 6), strides=(1, 1), name="encode")

# Decoder: resize to the twin layer's (dynamic) spatial extent, then apply a
# stride-1, 'same'-padded convolution so that extent is preserved (see below).
d_layer2 = tf.image.resize_nearest_neighbor(encode, tf.shape(layer2)[1:3])
d_layer2 = tf.layers.conv2d(
    d_layer2, 32, kernel_size=(3, 3), strides=(1, 1), padding="same",
    activation=tf.nn.relu, name="d_layer2")

d_layer1 = tf.image.resize_nearest_neighbor(d_layer2, tf.shape(layer1)[1:3])
d_layer1 = tf.layers.conv2d(
    d_layer1, 32, kernel_size=(3, 3), strides=(1, 1), padding="same",
    activation=tf.nn.relu, name="d_layer1")

decode = tf.image.resize_nearest_neighbor(d_layer1, tf.shape(inputs)[1:3])
decode = tf.layers.conv2d(
    decode, inputs.shape[-1], kernel_size=(3, 3), strides=(1, 1), padding="same",
    activation=tf.nn.tanh, name="decode")
```
This example hides some interesting features of the I/O shapes of Tensorflow's ops. Let's analyze in detail the shape of every layer; this will help us understand a lot about the shaping system.
## Dynamic input shape handling
A placeholder defined in this way
```python
inputs_ = tf.placeholder(tf.float32, shape=(None, None, None, None))
```
has an unknown shape and a known rank (4), at graph definition time.
At graph execution time, when we feed a value to the placeholder, the shape becomes fully defined: Tensorflow checks for us that the rank of the value we fed as input matches the specified rank, and leaves us the task of dynamically checking whether the passed value is something we're able to use.
So, this means that we have 2 different shapes for the input placeholder: a static shape, that’s known at graph definition time and a dynamic shape that will be known only at graph execution time.
In order to check whether the depth of the input image is an accepted value (1 or 3), we have to use tf.shape and not inputs_.shape.
The difference between the tf.shape function and the .shape attribute is crucial:
• tf.shape(inputs_) returns a 1-D integer tensor representing the dynamic shape of inputs_.
• inputs_.shape returns a python tuple representing the static shape of inputs_.
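A minimal sketch of the difference at work (the fed array is just an example):

```python
import numpy as np
import tensorflow as tf

inputs_ = tf.placeholder(tf.float32, shape=(None, None, None, None))
print(inputs_.shape)          # (?, ?, ?, ?): statically, nothing is known yet
dynamic_shape = tf.shape(inputs_)

with tf.Session() as sess:
    print(sess.run(dynamic_shape,
                   feed_dict={inputs_: np.zeros((2, 64, 64, 3))}))  # [ 2 64 64  3]
```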
Since the static shape known at graph definition time is None for every dimension, tf.shape is the way to go. Using tf.shape forces us to move the logic of the input shape handling inside the graph. In fact, if at graph definition time the shape was known, we could just use python and do something as easy as:
```python
depth = inputs_.shape[-1]
assert depth == 3 or depth == 1
if depth == 1:
    inputs_ = tf.image.grayscale_to_rgb(inputs_)
```
but in this particular case this is not possible, hence we have to move the logic inside the graph. The equivalent of the previous code defined directly into the graph is:
```python
depth = tf.shape(inputs_)[-1]
with tf.control_dependencies([
        tf.Assert(
            tf.logical_or(tf.equal(depth, 3), tf.equal(depth, 1)), [depth])
]):
    inputs = tf.cond(
        tf.equal(tf.shape(inputs_)[-1], 3), lambda: inputs_,
        lambda: tf.image.grayscale_to_rgb(inputs_))
```
From now on, we know that the input depth will be 3, but Tensorflow at graph definition time is not aware of this (in fact, we described all the input shape control logic inside the graph, and thus all of this will be executed only when the graph is run).
Having created an input with a "well-known" shape (we only know that the depth at execution time will be 3), we want to define the encoding layer: that's just a set of 2 convolutional layers with a 3x3 kernel and a stride of 2, followed by a convolutional layer with a 6x6 kernel and a stride of 1.
But before doing this, we have to think about the variable definition phase of the convolutional layers: as we know from the definition of the convolution operation among volumes, in order to produce an activation map the operation needs to span the whole input depth $D$.
This means that the depth of every convolutional filter depends on the input depth $D$, hence the variable definition depends on the expected input depth of the layers.
The shape of the variables must always be defined (otherwise the graph can’t be built!).
This means that we have to make Tensorflow aware at graph definition time of something that will be known only at graph execution time (the input depth).
Since we know that after the execution of tf.cond the inputs tensor will have a depth of 3 we can use this information at graph definition time, setting the static shape to (None,None,None,3): that’s all we need to know to correctly define all the convolutional layers that will come next.
```python
inputs.set_shape((None, None, None, 3))
```
the .set_shape method simply assigns to the .shape property of the tensor the specified value.
In this way, the definition of all the convolutional layers layer1, layer2 and encode can succeed. Let's analyze the shapes of layer1 (the same reasoning applies to every convolutional layer in the network):
## Convolutional layer shapes
At graph definition time we know the input depth, 3; this allows the tf.layers.conv2d operation to correctly define a set of 32 convolutional filters, each with shape 3x3x3, where 3x3 is the spatial extent and the last 3 is the input depth (remember that a convolutional filter must span all the input volume).
Also, the bias tensor is added (a tensor with shape (32)).
So the input depth is all the convolution operation needs to know to be correctly defined (obviously, together with all the static information, like the number of filters and their spatial extent).
What happens at graph execution time?
The variables are untouched, their shape remains constant. Our convolution operation, however, spans not only the input depth but also all the input spatial extent (width and height) to produce the activation maps.
At graph definition time we know that the input of layer1 will be a tensor with static shape (None, None, None, 3) and its output will be a tensor with static shape (None, None, None, 32): nothing more.
suggestion: just add a print(layer) after every layer definition to see every useful information about the output of a layer, including the static shape and the name.
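For instance, printing layer1 gives something along these lines (the exact tensor name may differ):

```
Tensor("layer1/Relu:0", shape=(?, ?, ?, 32), dtype=float32)
```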
But we know that the output shape of a convolution can be calculated as (for both $W$ and $H$):

$$O = \frac{W - K + 2P}{S} + 1,$$

where $O$ is the output size, $W$ the input size, $K$ the kernel size, $P$ the padding and $S$ the stride.
This information can be used to add an additional check on the dynamic input shape; in fact, it is possible to define a lower bound on the input resolution (a pleasure left to the reader).
## Decode layer shapes
As almost everyone today knows, the "deconvolution" operation produces chessboard artifacts [1]. The standard solution is to replace the tf.layers.conv2d_transpose call with a resize + conv2d op.
In our case, this solution is the perfect illustration of how we can create a "mirror" architecture (the decoder is the mirror of the encoder) that is able to correctly upsample the feature maps to the exact same shape as their twins.
Doing this using only the information available at graph definition time (static shapes) is impossible. Instead, if we move everything inside the graph execution, creating this mirror architecture is extremely easy.
In fact, since tf.shape returns a 1-D tensor that represents the dynamic shape of its input, we can easily slice it in order to get only the spatial extent of the twin of the specified decoding layer.
In fact, the twin of layer2 can be defined easily in this way:
```python
d_layer2 = tf.image.resize_nearest_neighbor(encode, tf.shape(layer2)[1:3])
d_layer2 = tf.layers.conv2d(
    d_layer2, 32, kernel_size=(3, 3), strides=(1, 1), padding="same",
    activation=tf.nn.relu, name="d_layer2")
```
On the first line, we resize the encode tensor to the spatial extent of layer2. After that, we just apply a stride-1 convolution with padding same, in order to maintain that spatial extent.
That’s all. Doing the same for every layer allows us to define the decoder as the mirror of the encoder (and this opens the road to other applications, like adding skip connections among twins, since the dimensions will always match, and so on…).
## Summary
• Variables must always be fully defined: exploit information from the graph execution time to correctly define meaningful variables shapes.
• There’s a clear distinction between static and dynamic shapes: graph definition time and graph execution time must always be kept in mind when defining a graph.
• tf.shape allows defining extremely dynamic computational graphs, at the only cost to move the logic directly inside the graph and thus out of python.
• The resize operations accept dynamic shapes: use them in this way.
## Bonus: How to count the total number of trainable parameters in a Tensorflow model?
After reading this article, what's needed to count the total number of parameters in a Tensorflow model should be obvious.
We know that every variable must be fully defined, so we can count the total number of parameters directly in the graph definition phase with a simple python loop, just accessing the .shape property of every trainable variable (tf.trainable_variables()).
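For instance, a minimal sketch (TensorFlow 1.x, with the graph already built):

```python
import numpy as np
import tensorflow as tf

total_parameters = sum(
    int(np.prod(v.get_shape().as_list())) for v in tf.trainable_variables())
print(total_parameters)
```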
For a complete answer, I let here a link to my StackOverflow answer to this question: How to count total number of trainable parameters in a Tensorflow model?
#### Why did I decide to write this post?
I see an increasing need in the community to understand how to correctly work with Tensorflow and its dynamic/static shape features. In fact, 2 of the most voted answers I wrote on StackOverflow are about this topic:
Probably the official documentation is not so clear about this aspect of the framework: I hope this post helps to clarify it.
If you find this article useful, feel free to share it using the buttons below! | 2019-10-16 21:52:24 | {"extraction_info": {"found_math": true, "script_math_tex": 4, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5618795156478882, "perplexity": 1065.0748842876776}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-43/segments/1570986670928.29/warc/CC-MAIN-20191016213112-20191017000612-00346.warc.gz"} |
https://physicsoverflow.org/23407/what-are-killing-spinors | # What are Killing spinors?
+ 6 like - 0 dislike
271 views
What are Killing spinors? How can they be motivated? Are they directly related to Killing vectors and Killing tensors and is there an overarching motivation for all three objects? Any answer is greatly appreciated but a less formal one would be preferred.
This post imported from StackExchange Physics at 2014-09-10 17:12 (UCT), posted by SE-user theriddler
retagged Sep 10, 2014
+ 2 like - 0 dislike
There is an interesting relation, as the nLab says. I'll try to explain it as informally as possible.
Let $\mathcal{M}$ be a pseudo-Riemannian manifold. Then, a Killing vector field on $\mathcal{M}$ is a covariantly constant vector field on $\mathcal{M}$, and "pairing two covariantly constant spinors (parallel spinors, i.e., Killing spinors with $\lambda=0$) to a vector yields a Killing vector". Similarly, a Killing tensor on $\mathcal{M}$ is a covariantly constant section of $\mathrm{Sym}^k(\Gamma(\mathrm{T}(\mathcal{M})))$. Therefore, you may interpret "Killing" as being synonymous with "covariantly constant" (at least in these three cases).
answered Sep 12, 2014 by (285 points)
edited Sep 12, 2014
This is not the definition given by most authors. Killing spinors and parallel spinors verify $\nabla_X \psi = \lambda X.\psi, \lambda \in \mathbb C$ . Parallel spinors (covariantly constant spinors) correspond to the case $\lambda = 0$, while Killing spinors correspond to the case $\lambda \neq 0$
True. I've fixed that nLab entry.
I, too, have edited my answer accordingly.
http://mathoverflow.net/questions/101098/the-chern-simons-wess-zumino-witten-correspondence | # The Chern-Simons/Wess-Zumino-Witten correspondence
I have often seen a relationship being alluded to between these two theories but I am unable to find any literature which proves/derives/explains this relationship.
I guess in the condensed matter physics literature this is the "same" thing which is referred to when they say that one has propagating chiral bosons on the boundary of the manifold if there is a Chern-Simons theory defined in the interior("bulk")
Let me quote (with some explanatory modifications) from two papers two most important aspects of the relationship that is alluded to,
1. "...It is well known that any Chern-Simons theory admits a boundary which carries a chiral WZW model; however these degrees of freedom are not topological (the partition function of Chern-Simons theory coupled to such boundary degrees of freedom depends on the conformal class of the metric on the boundary)..."
2. "...In general if the pure Chern-Simons theory (of group $G$) at level k is formulated on a Riemann furface then the number of zero-energy states equals the number of conformal blocks of the WZW model of $G$ at level $k' = k - \frac{h}{2}$ ($h=$the quadratic Casimir of G in the adjoint representation)..(and when the Riemann surface is a torus) the number of conformal blocks is equal to the number of representations of $\hat{G}$ at level $k'$.."
• I would like to know of a reference(s) (hopefully pedagogical/introductory!) which explains/proves/derives the above two claims. (..I looked through various sections of the book by Toshitake Kohno on CFT, which deals with similar stuff, but I couldn't identify these there.. maybe someone could just point me to the section in that book which explains the above claims, perhaps in some different garb which I can't recognize!..)
-
As far as I remember, Witten's famous paper "Quantum Field Theory and the Jones Polynomial" also discusses this, in the section where he tries to calculate examples - at some point he needs the Verlinde formula for WZW to calculate something explicitly. The idea (AFAIR) is simple: take locally M = R\times Sigma; the main point is to choose an appropriate gauge fixing, as far as I remember A_0 = 0; then in this gauge dA_0 is a Lagrange multiplier in the Feynman integral which sits on the space of the flat connections... If I remember the details I will write them... – Alexander Chervov Jul 2 '12 at 19:20
Quantum field theories are understood/formalized at various levels of detail (e.g. action functional only, space of states/partition function only, full functorial QFT, full extended QFT). Accordingly there are such different levels at which people will say "It is well-known that...".
For the general holographic principle there are still lots of gaps, but for the special case of 3d Chern-Simons-TQFT / 2d WZW-CFT things are pretty well understood.
The nLab entry
holographic principle -- Ordinary Chern-Simons theory / WZW-model
gives a list of pointers, some of which coincide with what is being said in other replies here.
First of all there is a direct relation between the action functionals: the CS action functional on a manifold with boundary is not gauge invariant. The boundary term that appears is the action functional of the WZW model (the topological term, at least, and the kinetic term with due fine-tuning).
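To make the non-invariance concrete, here is the textbook computation (a standard calculation summarized here; signs and normalizations are convention dependent). Write the level-$k$ Chern-Simons action as
$$S_{CS}[A] = \frac{k}{4\pi}\int_M \mathrm{Tr}\Big(A\wedge \mathrm{d}A + \tfrac{2}{3}\,A\wedge A\wedge A\Big).$$
Under a gauge transformation $A \mapsto A^g = g^{-1}A g + g^{-1}\mathrm{d}g$ one finds
$$S_{CS}[A^g] = S_{CS}[A] + \frac{k}{4\pi}\int_{\partial M}\mathrm{Tr}\big(\mathrm{d}g\, g^{-1}\wedge A\big) - \frac{k}{12\pi}\int_{M}\mathrm{Tr}\big((g^{-1}\mathrm{d}g)^3\big).$$
On closed $M$ the last term is $2\pi\mathbb{Z}$-valued for integer $k$ (quantization of the level); on a manifold with boundary the two anomalous terms are precisely the Wess-Zumino term and a boundary coupling for $g|_{\partial M}$, i.e. WZW-model data.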
More abstractly, the Chern-Simons action for $G$ simply connected arises by transgression of a differential universal characteristic map on higher smooth moduli stacks $\mathbf{B}G_{conn} \to \mathbf{B}^3 U(1)_{conn}$. The WZW action (the topological term) similarly arises simply by the (differentially twisted) looping (as smooth $\infty$-stacks) of this map.
Then the famous original observation: geometric quantization of this action functional yields a space of states for Chern-Simons theory that may be naturally identified with the space of conformal blocks of the WZW model.
To promote this further to a relation between full QFTs, one needs to know what the full QFT corresponding to Chern-Simons theory is. This hasn't as yet been fully established via quantization, but the expectation is that it is what the Reshetikhin-Turaev construction gives when fed the modular tensor category of loop group representations of the gauge group. Assuming this, there is a very detailed construction by Fuchs-Runkel-Schweigert and others that effectively constructs the rational WZW CFT (as a full Segal-style CFT) from the TQFT.
Recently the holographic aspect of this construction has been further amplified by Kapustin-Saulina and then by Fuchs-Schweigert-Valentino.
See at the above link for references to all these items.
-
@Urs Schreiber Thanks for the details. Can you elaborate on this point you made, that "The boundary term that appears is the action functional of the WZW model (the topological term, at least, and the kinetic term with due fine-tuning)"? AFAIK, because the Chern-Simons action is not gauge invariant, a gauge transformation on it produces an "extra" term which looks like one of the terms of the WZW action. But I can't see how this can be interpreted to say that there is an effective WZW theory on the boundary when there is CS theory in the bulk. – Anirbit Jul 3 '12 at 15:06
@Urs Schreiber Can you also link to the Kapustin-Saulina paper that you mentioned? (..in fact the first of my italicized quotes is an adaptation from a Kapustin-Saulina paper..) – Anirbit Jul 3 '12 at 15:08
Hi Nairbit, my comment above is really just an extended pointer to what I have written at that nLab entry, which contains the links that you are looking for and more: ncatlab.org/nlab/show/holographic+principle#OrdinaryCSWZWModel . – Urs Schreiber Jul 6 '12 at 12:20
Sorry, my fingers introduced a key twist. I meant to type Anirbit. Sorry. (Wasn't there once the possibility to edit comments here? Where did it disappear to?) – Urs Schreiber Jul 6 '12 at 12:21
@Urs Schreiber Thanks for the details. But I don't see anywhere in your links a derivation of the fact that the bulk partition function of Chern-Simons theory generically has a dependence on the conformal class of the metric on the boundary (..and how specific boundary conditions for the gauge field can remove that dependence..). This to my mind is one of the most important aspects of the correspondence. Can you give some references/explanations towards that? – Anirbit Jul 11 '12 at 15:14
The boundary of a Chern-Simons theory carries a Wess-Zumino-Witten model...
This comes from the following relation between the parameters of the two theories. Recall that a Chern-Simons theory is determined by an element $$\xi \in \hat H^4(BG,\mathbb{Z}),$$ an element in the degree four differential cohomology of the classifying space of the gauge group. Often $\xi$ can be identified with an element in ordinary cohomology, and in turn with just an integer, called level. Recall that a Wess-Zumino-Witten-model is determined by an element $$\eta \in \hat H^3(G,\mathbb{Z}).$$ Now, there is a transgression map $$t: \hat H^4(BG,\mathbb{Z}) \to \hat H^3(G,\mathbb{Z})$$ which converts a Chern-Simons theory into a WZW model.
This is discussed (using bundle gerbes) in
• A. Carey, S. Johnson, M. Murray, D. Stevenson, B.-L. Wang: Bundle gerbes for Chern-Simons and Wess-Zumino-Witten theories, Commun. Math. Phys. 259 (2005)
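For orientation (a standard fact added here; normalizations of the generators vary between references): when $G$ is compact, simple and simply connected, both underlying cohomology groups are infinite cyclic,
$$H^4(BG,\mathbb{Z}) \cong \mathbb{Z} \cong H^3(G,\mathbb{Z}),$$
and the transgression identifies the two copies of $\mathbb{Z}$. So the level-$k$ Chern-Simons theory is sent to the level-$k$ WZW model, with the differential refinement carrying the connection/B-field data along.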
The states of the CS theory form the conformal blocks of the WZW model...
This is a result of Witten, a crucial ingredient for the relation between Chern-Simons theory and the Jones polynomial. You might want to start in Section 5 of
• E. Witten: Quantum Field Theory and the Jones Polynomial, Commun. Math. Phys. 121,351-399 (1989)
Another source with general information about the Chern-Simons states is Section 5 of
• K. Gawedzki: Conformal field theory: a case study, arXiv:hep-th/9904145
The key information is formula (5.15) in the latter paper. It expresses the partition function of the WZW model (coupled to a gauge field, and with field insertions) by scalar products of CS states. The next formula (5.16) has (5.15) reduced to the torus, relating it to representations of $G$.
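As a concrete check of the torus statement (an example added here, standard in the literature, stated in the normalization where no level shift is displayed): for $G = SU(2)$ at level $k$ the integrable highest-weight representations of $\widehat{\mathfrak{su}}(2)_k$ are labelled by the spins $j = 0, \tfrac{1}{2}, \dots, \tfrac{k}{2}$, so
$$\dim \mathcal{H}_{CS}(T^2) = k+1,$$
matching the count of torus conformal blocks (affine characters) of the $SU(2)_k$ WZW model.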
-
The main derivation though of what you wrote is from the reference I posted. – Chris Gerig Jul 2 '12 at 21:34
@Konrad Waldorf Thanks for the answer. Seems Alexander Chervov also recommended section 5 of this paper of Witten's. About Gawedzki's lectures - probably it's a bad question to ask! - how far back do I have to start to understand equation 5.15? (...earlier my experience has been that Gawedzki's writings can be very terse!..I am just wondering whether I have to read that entire lecture to understand this point!..) – Anirbit Jul 3 '12 at 15:18
@Anirbit: the paper of Gawedzki's is a review about certain standard aspects of conformal field theory. If you want to understand the formula, I'm afraid you will either have to read the whole paper, or to go to other sources that help you to understand at least Section 5. Gawedzki's writing might be terse, but it's also a landmark of elaborate, substantiated, and correct writing! – Konrad Waldorf Jul 3 '12 at 23:28
@Konrad Waldorf I guess I will first look through section 5 of Witten's paper and then venture into Gawedzki's writings. (..experience has been that Witten's writings are much more beginner friendly!..) – Anirbit Jul 4 '12 at 20:11
The immediate paper that comes to mind is Topological Gauge Theories and Group Cohomology by Dijkgraaf and Witten, starting on pg403. The Wess-Zumino term appears because the Chern-Simons functional is not gauge invariant, and the variation of this action depends on the connection at the boundary surface. This paper references Witten's Non-abelian Bosonization in Two Dimensions in talking about the WZW model and CFT, so I think this would be useful to check out.
As for the second comment, pg411-413 brings up the conformal block stuff, but I'm not sure it explains what you want. It does have references:
1) Extended Chiral algebras and modular invariant partition functions (Karpilovsky, et.al.)
2) Spectra of WZW models with arbitrary simple groups (Felder, et.al.)
3) Taming the conformal zoo (Moore, Seiberg)
Hopefully one of those leads you to what you desire.
-
You are unlikely to find a proof of these claims, because Chern-Simons theory, as a quantum field theory in 3 dimensions, has not been precisely formulated mathematically.
You can find some partial results in the book Bakalov, Kirillov, Lectures on tensor categories and modular functor.
-
More precisely, what has not been carried out precisely is the quantization of the Chern-Simons Lagrangian to a full TQFT. On the other hand, it is expected that once done, the result is the TQFT defined by the modular tensor category given by, say, the representations of the loop group of the given gauge group. Under this assumption, the correspondence CS-TQFT / WZW-CFT has been made precise by Fuchs-Runkel-Schweigert et al. see ncatlab.org/nlab/show/FFRS-formalism . – Urs Schreiber Jul 3 '12 at 7:53
There is yet one more perspective on the relation between $G$-Chern-Simons theory and the WZW-model on $G$: the background B-field of the latter can be regarded as being the prequantum circle 2-bundle in codimension 2 for a "higher/extended geometric quantization" of Chern-Simons theory.
This is spelled out a bit at
In brief the story is this:
We have constructed in "Čech cocycles for differential characteristic classes" a refinement of the generator of $H^4(B G, \mathbb{Z})$ to a morphism of smooth moduli $\infty$-stacks $\mathbf{c}_{conn} : \mathbf{B}G_{conn} \to \mathbf{B}^3 U(1)_{conn}$ from that of $G$-principal bundles with connection to that of circle 3-bundles (bundle 2-gerbes) with connection
(for $G$ a simple, simply connected Lie group).
This is such that when transgressed to the mapping $\infty$-stack from a closed compact oriented 3d manifold $\Sigma_3$ it yields the Chern-Simons action functional
$$\exp(2 \pi i \int_{\Sigma_3} [\Sigma_3, \mathbf{c}_{conn}]) : CSFields(\Sigma_3) = [\Sigma_3, \mathbf{B}G_{conn}] \to U(1) \,.$$
But one can similarly transgress to mapping stacks out of a $0 \leq k \leq 3$-dimensional manifold $\Sigma_k$. For $k = 1$ with $\Sigma_1 = S^1$ one obtains a canonical circle 2-bundle (circle bundle gerbe) with connection on the smooth moduli stack of $G$-principal connections on the circle
$$\exp(2 \pi i \int_{S^1} [S^1, \mathbf{c}_{conn}]) : [\Sigma_1, \mathbf{B}G_{conn}] \to \mathbf{B}^2 U(1) \,.$$
Now since $\mathbf{B}$ is "categorical delooping" while $[S^1, -]$ is "geometric looping", the mapping stack on the left is not quite equivalent to $G$ itself, but it receives a canonical map from it
$$\bar \nabla_{can} : G \to [S^1, \mathbf{B}G_{conn}] \,.$$
In fact, the internal hom adjunct of this map is a canonical $G$-principal connection $\nabla_{can}$ on $S^1 \times G$, and this is precisely that from def. 3.3 of the article by Carey et al that Konrad mentions in his reply.
So the composite
$G \to [S^1, \mathbf{B}G_{conn}] \stackrel{transgression}{\to} \mathbf{B}^2 U(1)_{conn}$
is the WZW circle 2-bundle on $G$, or equivalently the Chern-Simons prequantum circle 2-bundle in codimension 2.
(The math parser here gets confused when I type in the full formulas. But you can find them at the above link).
-
I learned this correspondence from Bos and Nair's paper "Coherent State Quantization of Chern-Simons Theory" and that is what I recommend. I wrote a short review of the part that you are interested in: check the second section of my paper (http://arxiv.org/abs/1311.1853) on Yang-Mills-Chern-Simons theory. The constant $k$ comes from the wave-functional, and the quadratic Casimir in the adjoint representation comes from the gauge invariant integral measure. Some knowledge of geometric quantization is required. You can find that in V. P. Nair's "Quantum Field Theory" book (or B. C. Hall's "Quantum Theory for Mathematicians" book, in more detail). Also in Nair's book you can check the WZW section (17.6). It explains where the quadratic Casimir comes from in the "Dirac determinant in two dimensions" (17.7) calculation.
-
https://blogs.helsinki.fi/kulikov/2010/01/10/continuum-hypothesis-ii/ | # Continuum Hypothesis II
Differentiability of Space Filling Curves
A Peano curve is a surjective (onto) function $$f\colon\mathbb{R}\to\mathbb{R}^2$$. Apparently such an f cannot be smooth. To see this consider the restrictions of this function to closed intervals $$f\restriction [n,n+1]$$. By smoothness and compactness the image of each of them under f has finite length, and hence measure zero in the 2-dimensional Lebesgue measure of $$\mathbb{R}^2$$ (twice continuously differentiable is enough for this argument). Thus their countable union, the range of f, cannot be the whole plane.
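To spell out the step from finite length to measure zero (implicit above): a curve of finite length $$L$$ can be cut into $$N$$ arcs of length $$L/N$$, each of which lies in a disc of radius $$L/N$$, so the image has outer measure at most
$$N\cdot\pi\Big(\frac{L}{N}\Big)^2 = \frac{\pi L^2}{N}\to 0 \quad (N\to\infty).$$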
On the other hand, f can be continuous; see the exercise I gave at the end of the post about continued fractions.
Let us try something between smooth and continuous. Denote $$f=(f_1,f_2)$$, where $$f_1$$ and $$f_2$$ are the co-ordinate functions. Is it possible that at each point either $$f_1$$ or $$f_2$$ has a derivative?
Theorem The following are equivalent
1. Continuum Hypothesis
2. There exists a Peano curve such that at each point one of the components is differentiable.
Reference: Michal Morayne Colloq. Math. 53 (1987), 129-132
UPD: The argument as it is below uses the assumption that a set whose complement has measure zero has the size of the continuum. This is true under Martin's axiom, but seems to me independent of ZFC. By putting one more assumption on the Peano curve the proof below goes through: assume that there are open intervals $$I_1,I_2$$ such that in $$I_k$$ the function $$f_k$$ is differentiable and the derivative is non-zero, for $$k=1,2$$.
For a proof we have to establish two lemmas.
Lemma 1
The following are equivalent
(1) Continuum Hypothesis
(2) There exist sets $$S_1$$ and $$S_2$$ such that $$S_1\cup S_2=\mathbb{R}^2$$ and for each $$x\in \mathbb{R}$$ the sets
$$(S_1)^x=\{y\in \mathbb{R}\mid (x,y)\in S_1\}$$
and
$$(S_2)_x=\{y\in \mathbb{R}\mid (y,x)\in S_2\}$$
are countable.
Proof of Lemma 1
Assume CH. Let $$f\colon \mathbb{R}\to \omega_1$$ be a bijection and set
$$S_1=\{(x,y)\mid f(y)\leqslant f(x)\}$$
and
$$S_2=\{(x,y)\mid f(x)\leqslant f(y)\}$$
These clearly satisfy the assumption.
Suppose now that CH does not hold and $$S_1$$ and $$S_2$$ are some subsets of $$\mathbb{R}^2$$ for which the sets $$(S_1)^x$$ and $$(S_2)_x$$ are countable for all $$x\in \mathbb{R}$$. Now pick a subset $$E$$ of $$\mathbb{R}$$ of size $$\omega_1$$. The intersection
$$I=\bigcap_{x\in E}\mathbb{R}\setminus (S_1)^x$$
is non-empty, because its complement is an $$\omega_1$$-union of countable sets, which has size at most $$\omega_1<|\mathbb{R}|$$. Pick $$x\in I$$; now $$(S_2)_{x}$$ must be uncountable in order to satisfy $$S_1\cup S_2=\mathbb{R}^2$$. $$\quad\square$$
Lemma 2
Let $$f\colon \mathbb{R}\to\mathbb{R}$$ and $$D\subset \mathbb{R}$$ such that $$(\forall x\in D)(f'(x)\text{ exists})$$. Then the set
$$E=\{y\in \mathbb{R}\mid D\cap f^{-1}\{y\}\text{ is uncountable}\}$$
has Lebesgue measure zero.
Proof of Lemma 2
Let us first show that
$$Z=\{f(x)\in \mathbb{R}\mid x\in D, f'(x)=0\}$$
has measure zero (this is called Sard’s lemma). Fix a positive $$\varepsilon$$ and for each $$n$$ define
$$D_n=\{x\in D\mid \forall y:\; x-\tfrac{1}{n}\leqslant y\leqslant x+\tfrac{1}{n}\Rightarrow |f(x)-f(y)|\leqslant \varepsilon\cdot|x-y|\}.$$
Clearly $$\bigcup_n D_n=\{x\in D\mid |f'(x)|\leqslant \varepsilon\}$$. By the definition of Lebesgue measure there exist intervals $$(I_k^n)_{k\in \mathbb{N}}$$ such that
(I) $$D_n\subset \bigcup_{k\in\mathbb{N}}I^n_k$$
(II) $$\mu(I_k^{n})\leqslant \frac{1}{n}$$
(III) $$\sum_{k\in \mathbb{N}}\mu(I_k^n)\leqslant \mu(D_n)+\varepsilon$$
(Here $$\mu$$ denotes Lebsgue measure)
Now for every $$x,y\in I^{n}_k\cap D_n$$ we have $$|f(x)-f(y)|\leqslant\varepsilon\mu(I^n_k)$$, so we can calculate
$$\mu(f[D_n])\leqslant \sum_{k\in \mathbb{N}} \mu(f[I^n_k])\leqslant \varepsilon\sum_{k\in\mathbb{N}}\mu(I^n_k)\leqslant \varepsilon(\mu(D_n)+\varepsilon).$$
Taking $$n\to \infty$$ and $$\varepsilon\to 0$$ we get Sard's lemma.
We will show that $$E\subset Z$$, which suffices. Suppose $$y\in E$$. Then the set $$D\cap f^{-1}\{y\}$$ is uncountable and must have an accumulation point $$x$$ belonging to it. Since $$x\in D$$ the derivative $$f'(x)$$ exists, and because $$x$$ is an accumulation point of the set $$f^{-1}\{y\}$$ the derivative is zero. Thus $$y\in Z$$. $$\quad\square$$
Proof of the Theorem
Suppose CH and let $$S_1$$ and $$S_2$$ be as in Lemma 1. For each $$x\in\mathbb{R}$$ enumerate the sets $$(S_1)^x$$ and $$(S_2)_{x}$$. Define $$f\colon \mathbb{R}\to\mathbb{R}^2$$ as follows. Let $$g(x)=x\sin x$$. Let $$x\in \mathbb{R}$$ and suppose first that $$x\geqslant 0$$. Let $$n=|[0,x)\cap g^{-1}\{g(x)\}|$$ and let $$x_n$$ be the $$n$$:th element of $$(S_1)^{g(x)}$$. Then let $$f(x)=(g(x),x_n)$$. This defines a surjection from positive reals to $$S_1$$. If $$x$$ is negative do the same thing symmetrically: $$f(x)=(x_n,g(x))$$, where $$x_n$$ is the $$n$$:th element of $$(S_2)_{g(x)}$$ where $$n$$ is the size of the set $$(x,0]\cap g^{-1}\{g(x)\}$$. Since $$\mathbb{R}^2=S_1\cup S_2$$, $$f$$ is a surjection and clearly has one of the co-ordinate derivatives at each point except origin. The latter can be improved by resetting $$f(x)=(g(x),g(x))$$, when $$x\in (-1,1)$$ and instead of the intervals $$[0,x)$$ and $$(x,0]$$ used above just use the intervals $$[1,x)$$ and $$(x,-1]$$.
Suppose on the other hand that there exists a surjection $$f\colon \mathbb{R}\to \mathbb{R}^2$$ as in the assumption of the theorem. Denote $$f=(f_1,f_2)$$ as in the beginning. Let
$$D_1=\{f(x)\in \mathbb{R}^2\mid f_1'(x)\text{ exists}\}$$
and
$$D_2=\{f(x)\in \mathbb{R}^2\mid f_2'(x)\text{ exists}\}$$.
Now the sets $$D_1$$ and $$D_2$$ are almost like $$S_1$$ and $$S_2$$ in Lemma 1. Clearly $$\mathbb{R}^2=D_1\cup D_2$$, but $$(D_1)^x=\{y\in\mathbb{R}\mid (x,y)\in D_1\}$$ is not necessarily countable. By Lemma 2, however, it is countable almost everywhere, since $$f_1$$ has a derivative at each point of the set $$\{t\mid f_1'(t)\text{ exists}\}$$ and
$$(D_1)^x=f_2\big[\,f_1^{-1}\{x\}\cap \{t\mid f_1'(t)\text{ exists}\}\,\big]$$
which is countable for almost all $$x$$ by Lemma 2.
To carry out the complete argument let
$$N_1=\{x\in \mathbb{R}\mid f_1^{-1}\{x\}\cap\{t\mid f_1'(t)\text{ exists}\}\text{ is uncountable}\}$$
and
$$N_2=\{x\in \mathbb{R}\mid f_2^{-1}\{x\}\cap\{t\mid f_2'(t)\text{ exists}\}\text{ is uncountable}\}$$.
By Lemma 2 both $$N_1$$ and $$N_2$$ have measure zero. Let $$\beta\colon \mathbb{R}\to \mathbb{R}\setminus (N_1\cup N_2)$$ be a bijection which exists since the complement of a measure zero set of reals has the cardinality of the continuum. Then for $$i=1,2$$ the sets
$$S_i=\{(\beta^{-1}(x),\beta^{-1}(y))\mid (x,y)\in D_i\cap (\mathbb{R}\setminus (N_1\cup N_2))^2\}$$
are as in Lemma 1, so CH must hold. $$\quad\square$$
http://math.stackexchange.com/questions/192727/the-category-of-adjoint-functors | # The category of adjoint functors
Can a category structure be defined on the collection $Adj(\mathbf C,\mathbf D)$ of all pairs of adjoint functors $$(F\colon\mathbf C\to \mathbf D)\dashv (G\colon \mathbf D\to \mathbf C)$$ in such a way that the correspondence $\mathbf{Cat}\times\mathbf{Cat}\to \mathbf{Cat}\colon (\mathbf C,\mathbf D)\mapsto Adj(\mathbf C,\mathbf D)$ is functorial?
-
Doubtful, since adjointness is not preserved by composition. However, you can make a category of adjoints – see [CWM, Ch. IV, §7]. – Zhen Lin Sep 8 '12 at 11:05
What if I take a different category structure on $\mathbf{Cat}$? Quite tautologically, if I take as morphisms $\mathbf C\to \mathbf D$ couples of adjoint functors between the two categories, I obtain a category $\mathbf{Cat}^A$; now suppose a morphism $\mathbf C\leftrightarrows \mathbf C'$ in $\mathbf{Cat}^A$ is given, then you can define at least a correspondence on objects $Adj(\mathbf C,\mathbf D)\to Adj(\mathbf C',\mathbf D)$. – Fosco Loregian Sep 8 '12 at 11:15
It is now simply a matter of deciding which is the "right" category structure on $Adj(\mathbf C,\mathbf D)$ in order to define a correspondence on arrows too. Which one of the two structures proposed in [CWM, IV, 7] is the most suitable? And these two are in some sense "compatible"? Are they completely different (I'm aware this is a different question: if you want I'll open a different topic)? – Fosco Loregian Sep 8 '12 at 11:17
The one in the main text (with "conjugate" pairs of natural transformations) is standard and can be made into part of a double category: see the examples here. It's not clear to me why you want to make $\textrm{Adj}(-, -)$ into a functor though. – Zhen Lin Sep 8 '12 at 11:45
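(One standard fact worth recording, since it is what makes the composition in $\mathbf{Cat}^A$ above well defined: adjunctions compose — if $F\colon\mathbf C\to\mathbf D$ and $F'\colon\mathbf D\to\mathbf E$ with $F\dashv G$ and $F'\dashv G'$, then $F'F\dashv GG'$, with unit $(G\eta' F)\circ\eta$ built from the two units; see [CWM, Ch. IV, §8].)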
No particular reason, just playing :) – Fosco Loregian Sep 8 '12 at 11:57
http://mathhelpforum.com/differential-equations/161766-firefly-dynamics.html | # Math Help - Firefly Dynamics
1. ## Firefly Dynamics
I'm trying to figure out this example on one-dimensional flows on a circle using the flash of fireflies.
The information/example is given here:
http://www.paleo.bris.ac.uk/~ggxir/c.../lecture-4.pdf
which starts at page 14, but the equations are given on page 19.
Moreover, I'm confused about the information given.
$\theta(t)$ is said to be the firefly phase. I'm not sure what this means; I interpret it as the number of flashes that have gone by since some time $t$. Next, $\frac{d\theta}{dt}=\omega$ where $\omega$ is a constant. I think this means that the rate at which a firefly flashes is constant.
Next the equation for the stimulus is given in a similar manner. $\Theta(t)$ is the phase of the stimulus and $\frac{d\Theta}{dt}=\Omega$.
With the introduction of the stimulus firefly, $\frac{d\theta}{dt}$ becomes
$\frac{d\theta}{dt}=\omega+A\sin(\Theta-\theta)$
I don't completely understand why that is; in particular, I'm confused about why sine appears in the new equation.
Clarification on what these equations represent would be appreciated.
Thank you.
2. Sorry to bump this topic, but here is a picture on exactly what the model is described as:
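A reading that may help (standard for this model, which is Strogatz's firefly entrainment example): $\theta$ is the phase of the firefly's flash cycle — it flashes each time $\theta$ passes through a multiple of $2\pi$ — so $\frac{d\theta}{dt}=\omega$ indeed says flashes recur at a constant rate. The $\sin(\Theta-\theta)$ term is the simplest smooth $2\pi$-periodic coupling: it speeds the firefly up when the stimulus is ahead ($\Theta>\theta$) and slows it down when behind. Writing $\phi=\Theta-\theta$ gives $\frac{d\phi}{dt}=\Omega-\omega-A\sin\phi$, so phase-locking ($\dot\phi=0$) is possible iff $|\Omega-\omega|\leq A$. A minimal simulation sketch (parameter values are illustrative, not from the lecture notes):

```python
import numpy as np

# Firefly entrainment: d(theta)/dt = omega + A*sin(Theta - theta),
# stimulus:            d(Theta)/dt = Omega.   (illustrative parameters)
omega, Omega, A = 1.0, 1.2, 0.5      # |Omega - omega| = 0.2 <= A, so locking is expected
dt, steps = 0.01, 20000

theta = Theta = 0.0
for _ in range(steps):
    theta += dt * (omega + A * np.sin(Theta - theta))  # forward Euler step
    Theta += dt * Omega

phi = (Theta - theta + np.pi) % (2 * np.pi) - np.pi    # phase difference, wrapped to (-pi, pi]
print(f"simulated locked phase difference:  {phi:.4f}")
print(f"predicted arcsin((Omega-omega)/A):  {np.arcsin((Omega - omega) / A):.4f}")
```

If $|\Omega-\omega|>A$ the same loop shows $\phi$ drifting forever instead of settling — the firefly cannot entrain to a stimulus too far from its natural frequency.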
https://proofwiki.org/wiki/Definition:Latin_Square/Order | # Definition:Latin Square/Order
Let $\mathbf L$ be an $n \times n$ Latin square.
The order of $\mathbf L$ is $n$.
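For illustration (an example added here; any Cayley table of a finite group works): an order $3$ Latin square is
$$\mathbf L = \begin{pmatrix} 1 & 2 & 3 \\ 2 & 3 & 1 \\ 3 & 1 & 2 \end{pmatrix}$$
in which each of the $3$ symbols occurs exactly once in every row and every column.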
https://studenttheses.uu.nl/handle/20.500.12932/10231?show=full | dc.rights.license CC-BY-NC-ND dc.contributor.advisor Bisseling, Prof. dr. R.H. dc.contributor.author Bleichrodt, F. dc.date.accessioned 2012-03-27T17:00:55Z dc.date.available 2012-03-27 dc.date.available 2012-03-27T17:00:55Z dc.date.issued 2012 dc.identifier.uri https://studenttheses.uu.nl/handle/20.500.12932/10231 dc.description.abstract The two-dimensional barotropic vorticity equation is one of basic equations of ocean dynamics. It is important to have efficient numerical solution techniques to solve this equation. In this paper, we present an implementation of a numerical solution of this equation using a Graphics Processing Unit (GPU). The speed-up of the calculation on the GPU with respect to that on a CPU depends on the grid size but reaches a factor 50 for the highest resolution cases ($4800 \times 4800$) tested. It may therefore be efficient to use GPU's in future high-resolution ocean modeling studies. dc.description.sponsorship Utrecht University dc.format.extent 590822 bytes dc.format.mimetype application/pdf dc.language.iso en dc.title Accelerating finite differences for solving a barotropic ocean model on the GPU dc.type.content Master Thesis dc.rights.accessrights Open Access dc.subject.courseuu Scientific Computing
https://intelligencemission.com/free-electricity-video-free-energy-fan.html | Research in the real sense is unheard of to these folks. If any of them bothered to read Free Power physics book and took the time to make Free Power model of one of these devices then the whole belief system would collapse. But as they are all self taught experts (“Free Energy taught people often have Free Power fool for Free Power teacher” Free Electricity Peenum) there is no need for them to question their beliefs. I had Free Power long laugh at that one. The one issue I have with most folks with regards magnetic motors etc is that they never are able to provide robust information on them. Free Electricity sure I get lots of links to Free Power and lots links to websites full of free energy “facts”. But do I get anything useful? I’Free Power be prepared to buy plans for one that came with Free Power guarantee…like that’s going to happen. Has anyone who proclaimed magnetic motors work actually got one? I don’t believe so. Where, I ask, is the evidence? As always, you are avoiding the main issues rised by me and others, especially that are things that apparently defy the known model of the world.
### The net forces in Free Power magnetic motor are zero. There rotation under its own power is impossible. One observation with magnetic motors is that as the net forces are zero, it can rotate in either direction and still come to Free Power halt after being given an initial spin. I assume Free Energy thinks it Free Energy Free Electricity already. “Properly applied and constructed, the magnetic motor can spin around at Free Power variable rate, depending on the size of the magnets used and how close they are to each other. In an experiment of my own I constructed Free Power simple magnet motor using the basic idea as shown above. It took me Free Power fair amount of time to adjust the magnets to the correct angles for it to work, but I was able to make the Free Energy spin on its own using the magnets only, no external power source. ” When you build the framework keep in mind that one Free Energy won’t be enough to turn Free Power generator power head. You’ll need to add more wheels for that. If you do, keep them spaced Free Electricity″ or so apart. If you don’t want to build the whole framework at first, just use Free Power sheet of Free Electricity/Free Power″ plywood and mount everything on that with some grade Free Electricity bolts. That will allow you to do some testing.
Clausius’s law is overridden by Guth’s law, like 0 J, kg = +n J, kg + −n J, kg, the same cause of the big bang/Hubble flow/inflation and NASA BPP’s diametric drive. There mass and vis are created and destroyed at the same time. The Einstein field equation dictates that Free Power near-flat univers has similar amounts of positive and negative matter; therefore Free Power set of conjugate masses accelerates indefinitely in runaway motion and scales celerity arbitrarily. Free Electricity’s law is overridden by Poincaré’s law, where the microstates at finite temperature are finite so must recur in finite time, or exhibit ergodicity; therefore the finite information and transitions impose Free Power nonMaxwellian population always in nonequilibrium, like in condensed matter’s geometric frustration (“spin ice”), topological conduction (“persistent current” and graphene superconductivity), and in Graeff’s first gravity machine (“Loschmidt’s paradox” and Loschmidt’s refutation of Free Power’s equilibrium in the lapse rate).
It all smells of scam. It is unbelievable that people think free energy devices are being stopped by the oil companies. Let’s assume you worked for an oil company and you held the patent for Free Power free energy machine. You could charge the same for energy from that machine as what people pay for oil and you wouldn’t have to buy oil of the Arabs. Thus your profit margin would go through the roof. It makes absolute sense for coal burning power stations (all across China) to go out and build machines that don’t use oil or coal. wow if Free Energy E. , Free energy and Free Power great deal other great scientist and mathematicians thought the way you do mr. Free Electricity the world would still be in the stone age. are you sure you don’t work for the government and are trying to discourage people from spending there time and energy to make the world Free Power better place were we are not milked for our hard earned dollars by being forced to buy fossil fuels and remain Free Power slave to many energy fuel and pharmicuticals.
Look in your car engine and you will see one. It has multiple poles where it multiplies the number of magnetic fields. Sure, energy changes form, but you also don't get something for nothing. Most commonly known as the three-phase induction motor, it has copper losses, stator winding losses, friction and eddy current losses. The claimed wattage increase in the 'free energy' invention simply does not hold water. Automatic and feedback control concepts such as PID, developed in the first half of the 20th century, are applied to electric, mechanical and electro-magnetic (EMF) systems. For EMF, the rate of rotation and other parameters are controlled using PID and variants thereof by sampling a small piece of the output, then feeding it back and comparing it with the input to create an 'error voltage'. This voltage is then multiplied. You end up with a characteristic response in the form of a transfer function. Next, you apply step, ramp, exponential, and logarithmic inputs to your transfer function in order to realize larger functional blocks and to make them stable in the response to those inputs. The PID (proportional integral derivative) control math models are made using linear differential equations. Common practice dictates using Laplace transforms (the S domain) to convert the diff. eqs. into the S domain, simplify using algebra, then finally take the inverse Laplace transform / FFT/IFT to get the time and frequency domain system responses, respectively. Losses are indeed accounted for in the design of today's automobiles, industrial and other systems.
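Setting the surrounding dispute aside, the PID description above is standard control engineering. A minimal discrete-time sketch (gains, plant and time step are illustrative placeholders, not from any post here):

```python
def pid_controller(kp, ki, kd, dt):
    """Return a stateful PID update function: error -> control output."""
    state = {"integral": 0.0, "prev_error": 0.0}
    def update(error):
        state["integral"] += error * dt
        derivative = (error - state["prev_error"]) / dt
        state["prev_error"] = error
        return kp * error + ki * state["integral"] + kd * derivative
    return update

# drive a first-order plant x' = -x + u toward the setpoint 1.0
pid = pid_controller(kp=2.0, ki=1.0, kd=0.1, dt=0.01)
x = 0.0
for _ in range(1000):
    u = pid(1.0 - x)          # error = setpoint - measurement
    x += 0.01 * (-x + u)      # forward Euler step of the plant
print(round(x, 3))            # settles near 1.0; the integral term removes steady-state error
```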
Once you understand the power and motivations of the first two forces I have discussed, its obvious that this person’s current business plan cannot be implemented. This one person has probably done more harm to the free energy movement in the USA than any other force, by destroying people’s trust in the technology. So, the third force postponing the public availability of free energy technology is delusion and dishonesty within the movement itself. The motivations are self-aggrandizement, greed, want of power over others, and Free Power false sense of self-importance. The weapons used are lying, cheating, the “bait and switch” con, self-delusion and arrogance combined with lousy science. The fourth force operating to postpone the public availability of free energy technology is all of the rest of us. It may be easy to see how narrow and selfish the motivations of the other forces are, but actually, these motivations are still very much alive in each of us as well. Like the Wealthiest Families, don’t we each secretly harbor illusions of false superiority, and the want to control others instead of ourselves? Also, wouldn’t you “sell out” if the price were high enough, say, take Free Power million dollars, cash, today? Or like the governments, don’t we each want to ensure our own survival? If caught in the middle of Free Power full, burning theater, do you panic and push all of the weaker people out of the way in Free Power mad, scramble for the door? Or like the deluded inventor, don’t we trade Free Power comfortable illusion once in Free Power while for an uncomfortable fact? And don’t we like to think more of ourselves than others give us credit for? Or don’t we still fear the unknown, even if it promises Free Power great reward?You see, really, all four forces are just different aspects of the same process, operating at different levels in the society. There is really only one force preventing the public availability of free energy technology, and that is the unspiritually motivated behavior of the humans. In the last analysis, free energy technology is an outward manifestation of divine abundance. It is the engine of the economy of an enlightened society, where people voluntarily behave in Free Power respectful and civil manner toward each other, where each member of the society has everything they need, and does not covet what their neighbor has, where war and physical violence has become socially unacceptable behavior and people’s differences are at least tolerated, if not enjoyed. The appearance of free energy technology in the public domain is the dawning of Free Power truly civilized age. It is an epochal event in human history. Nobody can take credit for it. Nobody can get rich on it. Nobody can rule the world with it. It is simply, Free Power gift from God. It forces us all to take responsibility for our own actions and for our own self-disciplined self-restraint when needed. The world as it is currently ordered, cannot have free energy technology without being totally transformed by it into something else. This civilization has reached the pinnacle of its development, because it has birthed the seeds of its own transformation. Unspiritualized humans cannot be trusted with free energy. They will only do what they have always done, which is take merciless advantage of each other, or kill each other and themselves in the process. 
If you go back and read Ayn Rand’s Atlas Shrugged or the Club of Rome Report, it becomes obvious that the wealthiest families have understood this for decades. Their plan is to live in the world of free energy , but permanently freeze the rest of us out. But this is not new. Royalty has always considered the general population (us) to be their subjects. What is new, is that you and I can communicate with each other now better than at anytime in the past. The Internet offers us, the fourth force, an opportunity to overcome the combined efforts of the other forces preventing free energy technology from spreading. What is starting to happen is that inventors are publishing their work, instead of patenting it and keeping it secret. More and more, people are giving away information on these technologies in books, videos and websites. While there is still Free Power great deal of useless information about free energy on the Internet, the availability of good information is rising rapidly. Check out the list of websites and other resources at the end of this article. It is imperative that you begin to gather all of the information you can on real free energy systems. The reason for this is simple. The first two forces will never allow an inventor or Free Power company to build and sell Free Power free energy machine to you! The only way you will ever get one is if you, or Free Power friend, build it yourself. This is exactly what thousands of people are already quietly starting to do. You may feel wholly inadequate to the task, but start gathering information now. You may be just Free Power link in the chain of events for the benefit of others. Focus on what you can do now, not on how much there still is to be done.
But I will send you the plan for it whenever you are ready. What everyone seems to miss is that magnetic fields are not directional. Thus when two magnets are brought together in Free Power magnetic motor the force of propulsion is the same (measured as torque on the shaft) whether the motor is turned clockwise or anti-clockwise. Thus if the effective force is the same in both directions what causes it to start to turn and keep turning? (Hint – nothing!) Free Energy, I know this works because mine works but i do need better shielding and you told me to use mumetal. What is this and where do you get it from? Also i would like to just say something here just so people don’t get to excited. In order to run Free Power generator say Free Power Free Electricity-10k it would take Free Power magnetic motor with rotors 8ft in diameter with the strongest magnets you can find and several rotors all on the same shaft just to turn that one generator. Thats alot of money in magnets. One example of the power it takes is this.
They also investigated the specific heat and latent heat of a number of substances, and amounts of heat given out in combustion. In a similar manner, in 1840 the Swiss chemist Germain Hess formulated the principle that the evolution of heat in a reaction is the same whether the process is accomplished in a one-step process or in a number of stages. This is known as Hess' law. With the advent of the mechanical theory of heat in the early 19th century, Hess' law came to be viewed as a consequence of the law of conservation of energy. Based on these and other ideas, Berthelot and Thomsen, as well as others, considered the heat given out in the formation of a compound as a measure of the affinity, or the work done by the chemical forces. This view, however, was not entirely correct. In 1847, the English physicist James Joule showed that he could raise the temperature of water by turning a paddle wheel in it, thus showing that heat and mechanical work were equivalent or proportional to each other, i.e., approximately, $dW \propto dQ$.
No, it’s not alchemy or magic to understand the attractive/resistive force created by magnets which requires no expensive fuel to operate. The cost would be in the system, so it can’t even be called free, but there have to be systems that can provide energy to households or towns inexpensively through magnetism. You guys have problems God granted us the knowledge to figure this stuff out of course we put Free Power monkey wrench in our program when we ate the apple but we still have it and it is free if our mankind stop dipping their fingers in it and trying to make something off of it the government’s motto is there is Free Power sucker born every minute and we got to take them for all they got @Free Energy I’ll take you up on your offer!!! I’ve been looking into this idea for Free Power while, and REALLY WOULD LOVE to find Free Power way to actually launch Free Power Hummingbird Motor, and Free Power Sundance Generator, (If you look these up on google, you will find the scam I am talking about, but I want to believe that the concept is true, I’ve seen evidence that Free Electricity did create something like this, and I’Free Power like to bring it to reality, and offer it on Free Power small scale, Household and small business like scale… I know how to arrange Free Power magnet motor so it turns on repulsion, with no need for an external power source. My biggest obstacle is I do not possess the building skills necessary to build it. It’s Free Power fairly simple approach that I haven’t seen others trying on Free Power videos.
Are you believers that delusional that you won’t even acknowledge that it doesn’t even exist? How about an answer from someone without attacking me? This is NOT personal, just factual. Harvey1 kimseymd1 Free Energy two books! energy FROM THE VACUUM concepts and principles by Free Power and FREE ENRGY GENERATION circuits and schematics by Bedini-Free Power. Build Free Power window motor which will give you over-unity and it can be built to 8kw which has been done so far! NOTHING IS IMPOSSIBLE! Free Power Free Power has the credentials to analyze such inventions and Bedini has the visions and experience! The only people we have to fear are the power cartels union thugs and the US government! Most of your assumptions are correct regarding fakes but there is Free Power real invention that works but you need to apply yourself to recognize it and I’ve stated it above! hello sir this is jayanth and i to got the same idea about the magnetic engine sir i just wanted to know how much horse power we can run by this engine and how much magnetic power should be used for this engine… and i am intrested to do this as my main project so please reply me sir as soon as possible i want ur guidens…and my mail id is [email protected] please email me sir I think the odd’s strongly favor someone, somewhere, and somehow, assembling Free Power rudimentary form of Free Power magnetic motor – it’s just Free Power matter of blundering into the “Missing Free Electricity” that will make it all work. Why not ?? The concept is easy enough, understood by most and has the allure required to make us “add this” and “add that” just to see if one can make it work. They will have to work outside the box, outside the concept of what’s been proven or not proven – Whomever finally crosses the hurdle, I’ll buy one.
The demos seem well-documented by the scientific community. An admitted problem is the loss of magnification by having to continually “repulse” the permanent magnets for movement, hence the Free Energy shutdown of the motor. Some are trying to overcome this with some ingenious methods. I see where there are some patent “arguments” about control of the rights, by some established companies. There may be truth behind all this “madness. ”
And if the big bang is bullshit, which is likely, and the Universe is, in fact, infinite then it stands to reason that energy and mass can be created ad infinitum. Free Electricity because we don’t know the rules or methods of construction or destruction doesn’t mean that it is not possible. It just means that we haven’t figured it out yet. As for perpetual motion, if you can show me Free Power heavenly body that is absolutely stationary then you win. But that has never once been observed. Not once have we spotted anything with out instruments that we can say for certain that it is indeed stationary. So perpetual motion is not only real but it is inescapable. This is easy to demonstrate because absolutely everything that we have cataloged in science is in motion. Nothing in the universe is stationary. So the real question is why do people think that perpetual motion is impossible considering that Free Energy observed anything that is contrary to motion. Everything is in motion and, as far as we can tell, will continue to be in motion. Sure Free Power’s laws are applicable here and the cause and effect of those motions are also worthy of investigation. Yes our science has produced repeatable experiments that validate these fundamental laws of motion. But these laws are relative to the frame of reference. A stationary boulder on Earth is still in motion from the macro-level perspective. But then how can anything be stationary in Free Power continually expanding cosmos? Where is that energy the produces the force? Where does it come from?
You might also see this equation written without the subscripts specifying that the thermodynamic values are for the system (not the surroundings or the universe), but it is still understood that the values for $\Delta H$ and $\Delta S$ are for the system of interest. The equation in question is
$$\Delta G_{\text{system}} = \Delta H_{\text{system}} - T\,\Delta S_{\text{system}},$$
and it is exciting because it allows us to determine the change in Gibbs free energy using the enthalpy change, $\Delta H$, and the entropy change, $\Delta S$, of the system. We can use the sign of $\Delta G$ to figure out whether a reaction is spontaneous in the forward direction, backward direction, or if the reaction is at equilibrium. Although $\Delta G$ is temperature dependent, it's generally okay to assume that the $\Delta H$ and $\Delta S$ values are independent of temperature as long as the reaction does not involve a phase change. That means that if we know $\Delta H$ and $\Delta S$, we can use those values to calculate $\Delta G$ at any temperature. There are many methods to calculate those values, which we won't discuss in detail here. Problem-solving tip: it is important to pay extra close attention to units when calculating $\Delta G$ from $\Delta H$ and $\Delta S$! Although $\Delta H$ is usually given in $\mathrm{kJ/mol\text{-}reaction}$, $\Delta S$ is most often reported in $\mathrm{J/(mol\text{-}reaction\cdot K)}$ — a difference of a factor of $1000$! Temperature in this equation is always positive (or zero) because it has units of $\mathrm{K}$; therefore the second term, $T\,\Delta S_{\text{system}}$, will always have the same sign as $\Delta S_{\text{system}}$.
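A quick worked example of the unit trap just flagged (numbers are approximate textbook values for ammonia synthesis, added for illustration): with $\Delta H \approx -92\ \mathrm{kJ/mol}$ and $\Delta S \approx -199\ \mathrm{J/(mol\cdot K)}$ at $T = 298\ \mathrm{K}$, first convert $\Delta S$ to $-0.199\ \mathrm{kJ/(mol\cdot K)}$, then
$$\Delta G = \Delta H - T\,\Delta S \approx -92 - 298\times(-0.199) \approx -92 + 59.3 \approx -32.7\ \mathrm{kJ/mol},$$
so the reaction is spontaneous at room temperature — while forgetting the factor of $1000$ would have given a wildly wrong answer.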
No, it’s not alchemy or magic to understand the attractive/resistive force created by magnets which requires no expensive fuel to operate. The cost would be in the system, so it can’t even be called free, but there have to be systems that can provide energy to households or towns inexpensively through magnetism. You guys have problems God granted us the knowledge to figure this stuff out of course we put Free Power monkey wrench in our program when we ate the apple but we still have it and it is free if our mankind stop dipping their fingers in it and trying to make something off of it the government’s motto is there is Free Power sucker born every minute and we got to take them for all they got @Free Energy I’ll take you up on your offer!!! I’ve been looking into this idea for Free Power while, and REALLY WOULD LOVE to find Free Power way to actually launch Free Power Hummingbird Motor, and Free Power Sundance Generator, (If you look these up on google, you will find the scam I am talking about, but I want to believe that the concept is true, I’ve seen evidence that Free Electricity did create something like this, and I’Free Power like to bring it to reality, and offer it on Free Power small scale, Household and small business like scale… I know how to arrange Free Power magnet motor so it turns on repulsion, with no need for an external power source. My biggest obstacle is I do not possess the building skills necessary to build it. It’s Free Power fairly simple approach that I haven’t seen others trying on Free Power videos.
OK, these events might be pathetic money grabs, but certainly if some of the allegations against her were true, both groups would argue, would she not be behind bars by now? Suffice it to say, most people who have done any manner of research into the many Free Energy against Free Electricity have concluded that while she is most likely Free Power criminal, they just can’t see her getting arrested. But if–and it’s Free Power big ‘if’–she ever does get arrested and convicted of Free Power serious crime, that likely would satisfy the most ardent skeptic and give rise to widespread belief that the Trump Administration is working on, and succeeding in, taking down the Deep State. Let’s examine the possibility that things are headed in that direction.
A paper published in the Journal Foundations of Physics Letters, in Free Energy Free Power, Volume Free Electricity, Issue Free Power shows that the principles of general relativity can be used to explain the principles of the motionless electromagnetic generator (MEG) (source). This device takes electromagnetic energy from curved space-time and outputs about twenty times more energy than inputted. The fact that these machines exist is astonishing, it’s even more astonishing that these machines are not implemented worldwide right now. It would completely wipe out the entire energy industry, nobody would have to pay bills and it would eradicate poverty at an exponential rate. This paper demonstrates that electromagnetic energy can be extracted from the vacuum and used to power working devices such as the MEG used in the experiment. The paper goes on to emphasize how these devices are reproducible and repeatable.
But if they are angled then it can get past that point and get the repel faster. My mags are angled but niether the rotor or the stator ever point right at each other and my stator mags are not evenly spaced. Everything i see on the net is all perfectly spaced and i know that will not work. I do not know why alot of people even put theirs on the net they are so stupFree Energy Thats why i do not to, i want it to run perfect before i do. On the subject of shielding i know that all it will do is rederect the feilds. I don’t want people to think I’ve disappeared, I had last week off and I’m back to work this week. I’m stealing Free Power little time during my break to post this. Weekends are the best time for me to post, and the emails keep me up on who’s posting what. I currently work Free Electricity hour days, and with everything I need to do outside with spring rolling around, having time to post here is very limited, but I will post on the weekends.
## Any ideas on my magnet problem? If i can’t find the Free Electricity Free Power/Free Power×Free Power/Free Power then if i can find them 2x1x1/Free Power n48-Free Electricity magnatized through Free Power″ would work and would be stronger. I have looked at magnet stores and ebay but so far nothing. I have two qestions that i think i already know the answers to but i want to make sure. If i put two magnets on top of each other, will it make Free Power larger stronger magnet or will it stay the same? Im guessing the same. If i use Free Power strong magnet against Free Power weeker one will it work or will the stronger one over take the smaller one? Im guessing it will over take it. Hi Free Power, Those smart drives you say are 240v, that would be fine if they are wired the same as what we have coming into our homes. Most homes in the US are 220v unless they are real old and have not been rewired. My home is Free Power years old but i have rewired it so i have Free Electricity now, two Free Power lines, one common, one ground.
To completely ignore something and deem it a conspiracy without investigation allows women, children and men to continue to be hurt. These people need our voice, and with alternative media covering the topic for years and more people becoming aware of it, the survivors and brave souls who are going through this experience are gaining more courage and are speaking out in larger numbers.
Victims of Free Electricity testified in a Florida courtroom yesterday. Below is a picture of Free Electricity Free Electricity with Free Electricity Free Electricity, one of Free Electricity's accusers, and victim of billionaire Free Electricity Free Electricity. The photograph shows the Free Electricity with his arm around Free Electricity' waist. It was taken at a Free Power residence in Free Electricity Free Power, at which time Free Electricity would have been Free Power.
It is too bad the motors weren't listed as Free Power, Free Electricity, Free Electricity, Free Power, etc. I am working on a hybrid SSG with two batteries, a bicycle wheel, and ceramic magnets. I took the circuit back to SG and it runs fine with a bifilar 1k-turn coil. When I add the diode and second battery it doesn't work. kimseymd1: I do not really think anyone will ever sell or send me a Magical Magnetic Motor, because it doesn't exist. Therefore I'm not a fool at all. Free Electricity realistic. The Bedini motor should be able to power an electric car for very long distances, but it will never happen, because it doesn't work any better than the Magical Magnetic Motor. All smoke and mirrors; no working models that anyone can operate. kimseymd1Harvey1: You call this a reply?
The results of this research have been used by numerous scientists all over the world. One of the many examples is a paper written by Theodor C. Loder, III, Professor Emeritus at the Institute for the Study of Earth, Oceans and Space at the University of New Hampshire. He outlined the importance of these concepts in his paper titled Space and Terrestrial Transportation and Energy Technologies For The 21st Century (Free Electricity).
I do not fear any conspiracy from any nook and corner. I am simply taking my time and my space to stage the inevitable confrontation in the frozen face of the industry and geopolitics tycoons. This thing is complicated and confusing; it's a year now that I have been struggling to build this motor after work hours. I tried to build it from scratch, but it doesn't work. A few weeks ago, when I was browsing, I met someone who designed a self-running motor using a computer CPU fan and hard-disk magnets. I quickly went to purchase an old scrapped computer hard disk and a new CPU fan and went step by step as the video instructed, but it doesn't work. I'm still trying to make this project possible. Professionally I'm a computer technician, but I want to learn motor and magnetism theory so I can accomplish this project and have my name in memory. If anyone can make this project, please contact me through Facebook so I can invite him/her to my country and make money; as you know, third-world countries have power disasters. My Facebook ID is Elly Maduhu Nkonya, or use my e-mail. [email protected] LoneWolffe Harvey1 kimseymd1 TiborKK: I was only letting others that were confused know that there were sources for real learning, as opposed to listening to Harvey1 with his usual naysayer's attitude! There is tons of information on schoolgirl, schoolboy and Bedini window motors that actually work to charge batteries and eventually will generate house currents. It just has to be looked at to get any useful information from it, without listening to people like Harvey1 whining about learning. Harvey1 kimseymd1: You obviously play too many video games with trolls etc. in them. Why the editors of this forum allow you to keep calling people names instead of following the subject is beyond me. This must be the last site to allow you on it. I spammed the books because I thought those people were good for learning about these engines, which are super, and there is tons of information out there for anyone to find. You seem to only want to learn to be rude instead of electronics.
The differences come down to important nuances that are often missing in many overly emotional activists these days: critical thinking. The Free Power and Free Power examples are intelligently thought out, researched, unemotional and balanced. The example from here in Free energy resembles movements that are about narratives, rhetoric, and creating enemies and division. It's angry, emotional and does not have a basis in truth when you take the time to analyze and look at original meanings.
Involves a seesaw stator, Free Electricity spiral arrays on the same drum, and two inclines to jump each gate. The seesaw stator acts to rebalance after jumping a gate on either array, driving that side of the stator back down into play. Harvey1 is correct so far. Many, many have tried and failed. Others have posted video or more and then fade away, as they have not really created such an amazing device as claimed. I still try every few weeks, with my own designs or by trying to replicate others'. So far, none are working, and those on the web haven't been found to be real either. Perhaps someday my project will work. I have been close a few times, but it still didn't work. It's a lot of fun and a bit expensive for a weekend hobby. LoneWolffe Harvey1 LoneWolffe: The device that is shown in the diagram would not work, but the issue that is the concern here is different. The first problem is that people say science is a constant, which in itself is true, but to think that as humans we know all the laws of physics is obnoxious, as our laws of physics have changed constantly through history. The second issue is that too many accept what they are told and don't ask enough questions. Yet the third is the most concerning of all: Free Electricity once stated that by using the magnetic field of the earth it is possible to manipulate electrons in the atmosphere to create electricity. This means that by manipulating electrons you take energy from the air we all breathe and convert it to usable energy. Shortly after this statement, it is known that the government stopped Free Electricity's research, with no reason given as to why. It's all well and good reading books, but you still question them. Harvey1: Just because we don't know how something can be done doesn't mean it can't.
On increasing the concentration of the solution, the osmotic pressure decreases rapidly over a narrow concentration range, as expected for closed association. The arrow indicates the cmc. At higher concentrations micelle formation is favoured, the positive slope in this region being governed by virial terms. Similarly shaped curves were obtained for other temperatures. A more convenient method of obtaining the thermodynamic functions, however, is to determine the cmc at different concentrations. A plot of light-scattering intensity against concentration is shown in Figure Free Electricity for a solution of concentration Free Electricity = Free Electricity. Free Electricity × Free energy −Free Power g cm−3 and a scattering angle of Free Power°. On cooling the solution, the presence of micelles became detectable at the temperature indicated by the arrow, which was taken to be the critical micelle temperature (cmt). On further cooling, the weight fraction of micelles increases rapidly, leading to a rapid increase in scattering intensity at lower temperatures, until the micellar state predominates. The slope of the linear plot of ln Free Electricity against (cmt)−1 shown in Figure Free energy, which is equivalent to the more traditional plot of ln(cmc) against T−1, gave a value of ΔH = −Free Power kJ mol−1, which is in fair agreement with the result obtained by osmometry, considering the difficulties in locating the cmc by the osmometric method. Free Power calorimetric measurements gave a value of Free Power kJ mol−1 for ΔH. Results obtained for a range of polymers are given in Table Free Electricity. Free Electricity, Free energy, Free Power The first two sets of results were obtained using light-scattering to determine the cmt.
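The slope analysis described here is a van 't Hoff treatment. As background (standard thermodynamics of closed-association micellization, supplied here since the numbers in the passage are garbled), the working relation is:

$$\ln(\mathrm{cmc}) \approx \frac{\Delta H}{R}\,\frac{1}{T} + \mathrm{const},$$

so $\Delta H$ is read off as $R$ times the slope of $\ln(\mathrm{cmc})$ against $1/T$; the exact sign convention depends on the association model used.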
By the way, do you know what an OHM is? It's an Englishman's.. OUSE. @Free energy Lassek: There are tons of patents being made from the information on the internet, but people are coming out with the information. Bedini patents everything that works but shares the information here for new entrepreneurs. The only things not shared are part numbers. Except for the electronic parts, everything is home-made. RPMs differ with different parts. Even a transformer with a different number of windings changes the RPM. Different types of cores can make or break the unit's working. I was told by a patent infringer that he changed one thing in a patent and could create and sell almost the same thing. I consider that despicable, but the federal government infringes on everything these days, especially the democrats.
Free Power you? I'm going to stick to the mag motor for now. Who knows, maybe some day you will see a mag-motor-powered fan at Walmart. Free Power, Free Power: Using Free Electricity/Free Power chrome hydraulic shaft and steel bearings and housings for the central spindle. Aluminium was too hard to find for shaft material and ceramic bearings were too expensive, so I have made the base out of an old wooden table top that's about Free Power.3 metres across, to get some distance. Therefore the rotation of the magnets seems outside the influence of the steel centre. I checked it out with a bucket of water with floating magnets, and it didn't seem to have an effect at that distance. Welding up the aluminium bracket that goes across the top of the table to hold the generator tomorrow night. It will probably still be about Free energy days before I get it to the rotation stage. It looks awesome with all the metal bits polished up. Also, I just wanted to add this note: I am not sure what to expect from the design. I am not claiming that I will definitely get over-unity. I am just interested to see if it comes within a mile of it. Even if it is a massive fail, I have still got something that looks supa cool in the workshop that customers can ask about, and I can have all these educated responses about zero-point energy experiments, etc., and sound like I know what I'm talking about (chuckle). After all, having a bit of fun is the main goal. Electromagnets can be used to make a "magnet motor" rotate, but (there always is a but…) the power out of the device is equal to the power supplied to the electromagnet less all the losses. The magnetic rotor actually just acts like a flywheel and contributes nothing to the overall output. Once you get a rotor spinning fast enough you can draw bursts of high energy (i.e. if it is powering a generator), and people often quote the high volts and amps as the overall power output. "Yippee, OVERUNITY!" they shout. Unfortunately, if you rig a power meter to the input and output, the truth hits home. The magnetic rotor merely stores the energy, as does any flywheel, and there is no net gain.
One of the reasons it is difficult to prosecute criminals in the system is that it is so deep and complicated, and a lot of disinformation and obfuscation is put out. The idea of elite pedophile rings is still labelled a "conspiracy theory" by establishment media like the New York Times and CNN, who have also been accused of participating in these types of activities. It seems nobody within this realm has a clean sheet, or at least, if you've done the research, that's very rare. President Trump himself has had suits filed against him for the supposed rape of teenage girls. It is only by working to separate fact from fiction, and actually being willing to look into these matters and consider the possibility that these crimes are occurring on a massive scale, that we will help to expose what is really going on.
Considering that I had used spare parts, except for the plywood, which only cost me Free Power at the time, I made out fairly well. Keeping in mind that I didn't hook up the system to a generator head, I'm not sure how much it would take to have enough torque for that to work. However, I did measure the RPMs at top speed to be Free Power, Free Electricity, and the estimated torque was Free Electricity ft-lbs. The generators I work with at my job require a peak torque of Free Electricity ft-lbs, and those are simple household generators for when the power goes out. They're not powerful enough for every electrical item in the house to run, but it is enough for the heating system and a few lights to work. Personally, I wouldn't recommend that drastic a change for a long time; the people of the world just aren't ready for it. However, I strongly believe that a simple generator unit can be developed for home use. There are those out there that would take advantage of that and charge outrageous prices for such a unit; that's the nature of mankind's greed. To Nittolo and Free Electricity: You guys are absolutely hilarious. I have never laughed so hard reading a serious set of postings. You should seriously write some of this down and send it to Hollywood. They cancel shows faster than they can make them out there, and your material would be a winner!
If power flows from the output shaft, where does it flow in? Magnets don't contain energy (despite what free-energy buffs claim). If energy flows out of a device, it must either get lighter or colder. A free-energy device by definition must operate in a closed system, therefore it can't draw heat from outside to stop the cooling process; it doesn't get lighter unless there is a nuclear reaction in the magnets, which hasn't been discovered, so common sense says to me that magnetic motors are a con and can never work. Science is not wrong. It is not a single entity. Free Electricity or findings can be wrong. Errors or corrections occur at the individual level. Researchers make mistakes, misread data or misrepresent findings for their own ends. Science is about observation, investigation and application of the scientific method, and most importantly peer review. Free Energy anointed inventors masquerading as scientists claim free energy is available, but not one of them has ever demonstrated it to be so. Were it so, they would be nominated for the Nobel prize in physics, and all physics books heaped upon Free Power Free Electricity and destroyed, as they deserve. But this isn't going to happen. Always try to remember.
The demos seem well-documented by the scientific community. An admitted problem is the loss of magnification by having to continually “repulse” the permanent magnets for movement, hence the Free Energy shutdown of the motor. Some are trying to overcome this with some ingenious methods. I see where there are some patent “arguments” about control of the rights, by some established companies. There may be truth behind all this “madness. ”
Try two on one disc and one on the other and you will see for yourself. The number of magnets doesn't matter. If you can do it with three magnets, you can do it with thousands. Good luck! @Liam: I think anyone talking about perpetual motion or motors is misguided, with very little actual information. First of all, everyone is trying to find a motor generator that is efficient enough to power their house and/or automobile. Free Energy use perpetual motors in place of over-unity motors or magnet motors, which are three different things, and that is a misnomer. Three entirely different entities. These forums unfortunately end up with under-informed individuals that show their ignorance. Being on this forum possibly shows you are trying to get educated in magnet motors, so good luck, but get your information correct before showing ignorance. @Liam: You are missing the point. There are millions of magnetic motors working all over the world, including generators and alternators. They are all magnetic motors. Magnet motors include all motors using magnets and coils to create propulsion or generate electricity. It is not known if there are any permanent-magnet-only motors yet, but there will be soon, as some people have created and demonstrated their creations to the scientific community. Get your semantics right, because it only shows ignorance. kimseymd1: No, kimseymd1, YOU are missing the point. Everyone else here but you seems to know what is meant by a "Magnetic" motor on this site.
Does the motor provide electricity? No, of course not. It is simply an engine of sorts, nothing more. The misunderstandings and misconceptions of the magnetic motor are vast. Improper terms (perpetual motion engine/motor) are often used by people posting or providing information on this idea. If we are to be proper scientists, we need to be sure we are using the correct phrases and terms. However, a "catch phrase" seems to draw more attention, although it seems to be negative attention. You say that it is not possible to build a magnetic motor that works, that actually makes usable electricity, and I agree with you. But I think you can also build useless contraptions like the hundreds you see on the internet; I would like something that I could BUY and use here in my apartment, like today, or if we have an ice storm, or have no power for some reason. So far, as far as I know, nobody is selling a motor, or power generator, or even parts that I could use in my apartment. I don't know how Free energy Free Power's device will work, but if it works I hope he will manufacture it and sell it in stores. The car-obsessed folks who think that there is not an alternative fuel because the oil companies buy up inventions such as the "100mpg carburettor" etc. make me laugh. The biggest factors stopping alternative fuels have been cost and practicality. Electric vehicles are at the stage of the Free Power or Free Electricity, and it is not Free Energy keeping them there. Once they are developed, people will be saying those Evil Battery Free Energy are buying all the inventions that stop our reliance on batteries.
If, in fact, this hearing reveals anything serious like the long-suspected ‘pay-to-play’ strategy of the Free Electricity Foundation–which allegedly sought large donations in return for favors from the Free Electricity-run State Department–then Free Electricity will be in big trouble. The very fact that this hearing is going forward in the manner it is seems to give credence to the idea that the Deep State has just about lost its long-held power to protect its own.
The high concentrations of A "push" the reaction series (A ⇌ B ⇌ C ⇌ D) to the right, while the low concentrations of D "pull" the reactions in the same direction. Providing a high concentration of a reactant can "push" a chemical reaction in the direction of products (that is, make it run in the forward direction to reach equilibrium). The same is true of rapidly removing a product, but with the low product concentration "pulling" the reaction forward. In a metabolic pathway, reactions can "push" and "pull" each other because they are linked by shared intermediates: the product of one step is the reactant for the next. "Think of two powerful magnets: one fixed plate over a rotating disk with Free Energy side parallel to the disk surface, and the other on the rotating plate connected to small gear G1. If the north side of the magnet over gear G1 is parallel to that of the one over the rotating disk, then they will repel each other. Now the magnet over the left disk will try to rotate the disk below in (say) the clockwise direction. Now there is another magnet at Free Electricity angular distance on the rotating disk, on both sides of the magnet M1. Now the large gear G0 is connected directly to the rotating disk with a rod. So after the repulsion, if the rotating disk rotates, it will rotate the gear G0, which is connected to gear G1. So the magnet over G1 rotates in the direction perpendicular to that of the fixed-disk surface. Now the angle and teeth ratio of G0 and G1 are such that when the magnet M1 moves Free Electricity degrees, the other magnet which came into the position where M1 was will be repelled by the magnet of the fixed disk, as the magnet on the fixed disk has moved 360 degrees on the plate above gear G1. So if the first repulsion of magnets M1 and M0 is powerful enough to make the rotating disk rotate Free Electricity degrees or more, the disk will rotate until an error occurs in the position of the disk, friction loss, or magnetic energy loss. The space between the two disks is just more than the width of magnets M0 and M1, plus the space needed for connecting gear G0 to the rotating disk with a rod. Now, I've not tested this with actual objects. When designing, you may think of losses, or may think that when the rotating disk rotates Free Electricity degrees and magnet M0 is rotating clockwise on the plate over G2, then it may start to repel M1 after it has rotated about Free energy degrees; the solution is to use more powerful magnets.
Puthoff, the Free energy physicist mentioned above, is a researcher at the Institute for Advanced Studies at Austin, Texas, who published a paper in the journal Physical Review A (atomic, molecular and optical physics) titled "Gravity as a zero-point-fluctuation force" (source). His paper proposed a suggestive model in which gravity is not a separately existing fundamental force, but is rather an induced effect associated with zero-point fluctuations of the vacuum, as illustrated by the Casimir force. This is the same professor who had close connections with the Department of Defense's initiated research in regards to remote viewing. The findings of this research are highly classified, and the program was shut down not long after its initiation (source).
Free Power: In my opinion, if somebody built a power-generating device and manufactured and sold it in stores, then everybody would be buying it and installing it in their houses and cars. But what would happen then to the millions of people around the world who make their living from the existing energy industry? I think if something like that happened, the world would be in chaos. I have one more question. We are all building motors that run with the repel end of the magnets only. I have read a lot on magnets and their fields, and one thing I read a lot about is that if used this way all the time, the magnets lose their power quickly; if they both attract and repel, then they stay in balance and last much longer. My question is: in repel mode, how long will they last? If it's not very long, then the cost of the magnets makes the motor not worth building, unless we can come up with a way to use both poles, which as far as I can see might be impossible.
I don’t know what to do. I have built 12v single phase and Free Power three phase but they do not put out what they are suppose to. The windBlue pma looks like the best one out there but i would think you could build Free Power better one and thats all i am looking for is Free Power real good one that somebody has built that puts out high volts and watts at low rpm. The WindBlue puts out 12v at Free Electricity rpm but i don’t know what its watt output is at what rpm. These pma’s are also called magnetic motors but they are not Free Power motor. They are Free Power generator. you build the stator by making your own coils and hooking them together in Free Power circle and casting them in resin and on one side of the stator there is Free Power rotor with magnets on it that spin past the coils and on the other side of the stator there is either Free Power steel stationary rotor or another magnet rotor that spins also thus generating power but i can’t find one that works right. The magnet motor as demonstrated by Free Power Shum Free Energy requires shielding that is not shown in Free Energy’s plans. Free Energy’s shielding is simple, apparently on the stator. The Perendev shows each magnet in the Free Energy shielded. Actually, it intercepts the flux as it wraps around the entire set of magnets. The shielding is necessary to accentuate interaction between rotor and stator magnets. Without shielding, the device does not work. Hey Gilgamesh, thanks and i hope you get to build the motor. I did forget to ask one thing on the motor. Are the small wheels made of steel or are they magnets? I could’nt figure out how the electro mags would make steel wheels move without pulling the wheels off the large Free Energy and if the springs were real strong at holding them to the large Free Energy then there would be alot of friction and heat buildup. Ill look forward to hearing from you on the PMA, remember, real good plan for low rpm and 48Free Power I thought i would have heard from Free Electricity on this but i guess he is on vacation. Hey Free Power. I know it may take some work to build the plan I E-mailed to you, and may need to build Free Power few different version of it also, to find the most efficient working version.
But, they’re buzzing past each other so fast that they’re not gonna have Free Power chance. Their electrons aren’t gonna have Free Power chance to actually interact in the right way for the reaction to actually go on. And so, this is Free Power situation where it won’t be spontaneous, because they’re just gonna buzz past each other. They’re not gonna have Free Power chance to interact properly. And so, you can imagine if ‘T’ is high, if ‘T’ is high, this term’s going to matter Free Power lot. And, so the fact that entropy is negative is gonna make this whole thing positive. And, this is gonna be more positive than this is going to be negative. So, this is Free Power situation where our Delta G is greater than zero. So, once again, not spontaneous. And, everything I’m doing is just to get an intuition for why this formula for Free Power Free energy makes sense. And, remember, this is true under constant pressure and temperature. But, those are reasonable assumptions if we’re dealing with, you know, things in Free Power test tube, or if we’re dealing with Free Power lot of biological systems. Now, let’s go over here. So, our enthalpy, our change in enthalpy is positive. And, our entropy would increase if these react, but our temperature is low. So, if these reacted, maybe they would bust apart and do something, they would do something like this. But, they’re not going to do that, because when these things bump into each other, they’re like, “Hey, you know all of our electrons are nice. “There are nice little stable configurations here. “I don’t see any reason to react. ” Even though, if we did react, we were able to increase the entropy. Hey, no reason to react here. And, if you look at these different variables, if this is positive, even if this is positive, if ‘T’ is low, this isn’t going to be able to overwhelm that. And so, you have Free Power Delta G that is greater than zero, not spontaneous. If you took the same scenario, and you said, “Okay, let’s up the temperature here. “Let’s up the average kinetic energy. ” None of these things are going to be able to slam into each other. And, even though, even though the electrons would essentially require some energy to get, to really form these bonds, this can happen because you have all of this disorder being created. You have these more states. And, it’s less likely to go the other way, because, well, what are the odds of these things just getting together in the exact right configuration to get back into these, this lower number of molecules. And, once again, you look at these variables here. Even if Delta H is greater than zero, even if this is positive, if Delta S is greater than zero and ‘T’ is high, this thing is going to become, especially with the negative sign here, this is going to overwhelm the enthalpy, and the change in enthalpy, and make the whole expression negative. So, over here, Delta G is going to be less than zero. And, this is going to be spontaneous. Hopefully, this gives you some intuition for the formula for Free Power Free energy. And, once again, you have to caveat it. It’s under, it assumes constant pressure and temperature. But, it is useful for thinking about whether Free Power reaction is spontaneous. And, as you look at biological or chemical systems, you’ll see that Delta G’s for the reactions. And so, you’ll say, “Free Electricity, it’s Free Power negative Delta G? “That’s going to be Free Power spontaneous reaction. “It’s Free Power zero Delta G. “That’s gonna be an equilibrium. ”
Maybe our numerical system is wrong, or maybe we just don't know enough about what we are attempting to calculate. For everything man has set out to accomplish, there have been those who said it couldn't be done and gave many reasons, based upon facts and formulas, why it wasn't possible. Needless to say, none of the naysayers accomplished any of them. If a machine can produce more energy than it takes to operate it, then the theory will work. With magnets there is a point where north and south meet, and that requires force to get by. Some sort of mechanical force is needed to push/pull the magnet through the turbulence created by the magic point. Inertia would seem to be the best force to use, but building the inertia becomes problematic unless you can store a little bit of energy in a capacitor and release it at exactly the correct time, as the magic point crosses over, with an electromagnet. What if we take the idea that the magnetic motor is not a perpetual motion machine, but is an energy storage device? Let us speculate that we can build a unit that is Free energy efficient. Now let us say I want to power my house for ten years; that takes Free Electricity kWh at 0.Free Energy per kWh. So it takes Free energy kWh to make this machine. If we do this in a place that produces electricity at 0.03 per kWh, we save money.
I am currently designing my own magnet motor. I like to think that something like this is possible, as our species has achieved many things others thought impossible, and science has changed its thinking almost on a daily basis due to new discoveries. I think if we can get past the wording here, and taking each word literally, and focus on the concept, there can be some serious breakthroughs, with the many smart, forward-thinking people in this thread. Let's just say someone did invent a working free-energy engine, so called. How do you guys suppose a person sells such a device for billions and billions of dollars without it getting stolen first? Patenting such an idea makes it public knowledge, and other countries like China will just steal it. Such a device affects the whole world. How does a person protect himself from big corporations and big countries assassinating him? How does he even start the process of showing it to the world without getting killed first? Repulsive fields were dreamed up by Tesla in his AC induction motor invention.
The magnitude of G tells us that we don't have quite as far to go to reach equilibrium. The points at which the straight line in the above figure crosses the horizontal and vertical axes of this diagram are particularly important. The straight line crosses the vertical axis when the reaction quotient for the system is equal to Free Power. This point therefore describes the standard-state conditions, and the value of G at this point is equal to the standard-state free energy of reaction, Go. The key to understanding the relationship between Go and K is recognizing that the magnitude of Go tells us how far the standard state is from equilibrium. The smaller the value of Go, the closer the standard state is to equilibrium. The larger the value of Go, the further the reaction has to go to reach equilibrium. The relationship between Go and the equilibrium constant for a chemical reaction is illustrated by the data in the table below. As the tube is cooled, and the entropy term becomes less important, the net effect is a shift in the equilibrium toward the right. The figure below shows what happens to the intensity of the brown color when a sealed tube containing NO2 gas is immersed in liquid nitrogen. There is a drastic decrease in the amount of NO2 in the tube as it is cooled to -196oC. Free energy is the idea that a low-cost power source can be found that requires little to no input to generate a significant amount of electricity. Such devices can be divided into two basic categories: "over-unity" devices that generate more energy than is provided in fuel to the device, and ambient energy devices that try to extract energy from the environment, such as quantum foam in the case of zero-point energy devices. Not all "free energy" claims are necessarily bunk, and they are not to be confused with Free Power. There certainly is cheap-ass energy to be had in nature that may be harvested at either zero cost or that can sustain us for long amounts of time. Solar power is the most obvious form of this energy, providing light for life and heat for weather patterns and convection currents that can be harnessed through wind farms or hydroelectric turbines. In Free Electricity Nokia announced they expect to be able to gather up to Free Electricity milliwatts of power from ambient radio sources such as broadcast TV and cellular networks, enough to slowly recharge a typical mobile phone in standby mode. [Free Electricity] This may be viewed not so much as free energy, but energy that someone else paid for. Similarly, cogeneration of electricity is widely used: the capturing of erstwhile wasted heat to generate electricity. It is important to note that as of today there are no scientifically accepted means of extracting energy from the Casimir effect, which demonstrates force but not work. Most such devices are generally found to be unworkable. Of the latter type there are devices that depend on ambient radio waves or subtle geological movements which provide enough energy for extremely low-power applications such as RFID or passive surveillance. [Free Electricity] Maxwell's Demon — a thought experiment raised by James Clerk Maxwell in which a demon guards a hole in a diaphragm between two containers of gas. Whenever a molecule passes through the hole, the demon either allows it to pass or blocks the hole, depending on its speed.
It does so in such a way that hot molecules accumulate on one side and cold molecules on the other. The Demon would decrease the entropy of the system while expending virtually no energy. This would only work if the Demon was not subject to the same laws as the rest of the universe or had a lower temperature than either of the containers. Any real-world implementation of the Demon would be subject to thermal fluctuations, which would cause it to make errors (letting cold molecules enter the hot container and vice versa) and prevent it from decreasing the entropy of the system. In chemistry, a spontaneous process is one that occurs without the addition of external energy. A spontaneous process may take place quickly or slowly, because spontaneity is not related to kinetics or reaction rate. A classic example is the process of carbon in the form of a diamond turning into graphite, which can be written as the reaction C(s, diamond) → C(s, graphite). Great! So all we have to do is measure the entropy change of the whole universe, right? Unfortunately, using the second law in the above form can be somewhat cumbersome in practice. After all, most of the time chemists are primarily interested in changes within our system, which might be a chemical reaction in a beaker. Do we really have to investigate the whole universe, too? (Not that chemists are lazy or anything, but how would we even do that?) When using Gibbs free energy to determine the spontaneity of a process, we are only concerned with changes in $\text G$, rather than its absolute value. The change in Gibbs free energy for a process is thus written as $\Delta \text G$, which is the difference between $\text G_{\text{final}}$, the Gibbs free energy of the products, and $\text G_{\text{initial}}$, the Gibbs free energy of the reactants: $\Delta \text G = \text G_{\text{final}} - \text G_{\text{initial}}$.
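For completeness, the quantitative relationship between the standard-state free energy and the equilibrium constant that the passage above describes only qualitatively is the standard one:

$$\Delta G^{o} = -RT \ln K, \qquad \Delta G = \Delta G^{o} + RT \ln Q,$$

so a large negative $\Delta G^{o}$ means $K \gg 1$ (products strongly favored at equilibrium), and setting $Q = K$ gives $\Delta G = 0$, i.e. equilibrium.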
Of course a motor such as the one described by you would not spin at all, and it is a stupid idea. The working examples (at least some of them) work on another principle/phenomenon. They don't use the attraction and repelling forces of the magnets as all of us know them. I repeat: that is a stupid idea. The magnets that repel each other would lose their strength in time, anyway. The idea is that in some configurations of the magnets a scalar energy vortex is created, with the role of drawing energy from the Ether, and this vortex is responsible for the extra energy or movement of the rotor. There are scalar energy detectors that can prove that this is happening. You can't detect scalar energy with conventional tools. The vortex is a ubiquitous thing in nature. But you don't know that, because you are living in an urbanized society and you lack direct interaction with natural phenomena. Most of the time, people like you have no opportunity to observe nature all day and rely on one of two major fairy-tales to explain this world: religion or mainstream science. Magnetism is more than attraction and repelling forces. If you had studied some books related to magnetism (which don't even talk about free energy or magnetic motors), you would have known by now that magnetism is such a complex thing and has a lot of applications in a wide range of domains.
If it worked, you would be able to buy a guaranteed working model. This has been going on for Free Electricity years or more, and still not one has worked. Ignorance of the laws of physics does not allow you to break those laws. I'm not supposed to write here, but what you people here believe is possible is true. The only problem is that if one wants to create what we call "magnetic rotation", one cannot use the fields. There is a small area in any magnet called the "magnetic center", which is around Free Electricity times stronger than the fields. The sequence is before the pole center and after the face center, and therefore, unlike other motors, one must mesh the stationary centers and work the rotation from the inner part of the center to the outer. The fields are the reason a PM drive is very slow: the fields don't allow kinetic creation by limiting the magnetic center distance. This is why it is possible to create magnetic rotation as you all believe and know, BUT one can never do it with a rotor.
In this article, we covered Free Electricity different perspectives on what this song is about. In Free energy it's about rape; in Free Power it's about a sexually aware woman who is trying to avoid slut-shaming, which was the same sentiment in Free Power, as the song "was about sex, wanting it, having it, and maybe having a long night of it by the Free Electricity, a song about the desires even good girls have."
This tells us that the change in free energy equals the reversible or maximum work for a process performed at constant temperature. Under other conditions, free-energy change is not equal to work; for instance, for a reversible adiabatic expansion of an ideal gas, $\Delta A = w_{rev} - S\Delta T$. Importantly, for a heat engine, including the Carnot cycle, the free-energy change after a full cycle is zero, $\Delta_{cyc} A = 0$, while the engine produces nonzero work.
http://quant.stackexchange.com/tags/hedging/hot?filter=year | # Tag Info
22
Many of them are on my website at emanuelderman.com. Others I probably have anyway. Feel free to email me
8
I had read some of them; actually, there is no online library that collects them (or, better, one existed here, but it seems the website does not work anymore). I report below some of them that you did not find: More Than You Ever Wanted To Know* About Volatility Swaps Model Risk The Volatility Smile And Its Implied Tree Enhanced ...
6
the problem is that the pay-off has a discontinuous first derivative. Try a contract with a pay-off that is twice differentiable and it will probably work. The problem is that all the value comes from the tiny number of paths within $\Delta S$ of the strike, and these paths have huge value. This is a well-known problem. As the bump size goes to zero, the ...
6
The point is the following: Delta, $\Delta$, is defined as $\frac{\partial C}{\partial S}$, where $C$ is the value of the call option, and $S$ is the price of the underlying asset. So, given that the value of a call option for a non-dividend-paying underlying stock in terms of the Black–Scholes parameters is $$C = N(d_{1})S - N(d_{2})Ke^{-rT},$$ the delta is $$\Delta = \frac{\partial C}{\partial S} = N(d_{1}).$$
4
there are a number of ways to do this. You do have to make some modelling assumptions, however, e.g. continuity, BS model holds, or the log stock price process is independent of level. The most common way is to take the pay-off and geometrically reflect it in the barrier (i.e. pass to log coordinates and reflect), i.e. write the function as f(x) where x= \log ...
4
Due to the lack of a carry arbitrage, VIX futures are actually the direct hedge for VIX Index options
3
As the manager of a mutual fund (not a hedge fund) you can only short treasury futures. So you take the one that is closest in duration, look for an optimal hedge ratio and that's it. In my experience you have to leave liquidity risk open.
3
The differential equation has a trend due to the interest rate. When you discount you take this trend away: $$ \frac{d}{dt} (e^{-rt}Z_t) = -re^{-rt}Z_t + e^{-rt} \frac{d}{dt}Z_t = e^{-rt}\frac{1}{2}S_t^2\Gamma_t(\hat{\sigma}^2-\beta_t^2). $$ $Z$ doesn't appear on the rhs anymore and you can integrate: $$ e^{-rT}Z_T - e^{-r\cdot 0}Z_0 = \int_0^T e^{-rt}\frac{1}{2}S_t^2\Gamma_t(\hat{\sigma}^2-\beta_t^2)\,dt. $$
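As a quick numerical check of the delta formula quoted above, here is a minimal Python sketch (my own illustration, not part of the original answers; the parameter values are made up), using only the standard library and the usual Black–Scholes conventions for $d_1$ and $d_2$:

```python
from math import log, sqrt, exp
from statistics import NormalDist

N = NormalDist().cdf  # standard normal CDF

def bs_call_price_and_delta(S, K, T, r, sigma):
    """Black-Scholes price and delta of a European call on a
    non-dividend-paying stock."""
    d1 = (log(S / K) + (r + 0.5 * sigma**2) * T) / (sigma * sqrt(T))
    d2 = d1 - sigma * sqrt(T)
    price = N(d1) * S - N(d2) * K * exp(-r * T)
    delta = N(d1)  # = dC/dS
    return price, delta

# illustrative parameters only
price, delta = bs_call_price_and_delta(S=100, K=100, T=1.0, r=0.02, sigma=0.2)
print(f"call price = {price:.4f}, delta = {delta:.4f}")
```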
3
The first portfolio is what you obtain when you delta hedge an option position (here short, but could be long without loss of generality) using the underlying asset. The second portfolio usually figures a replicating portfolio. The option position is 'dynamically replicated' using a self-financing strategy involving shares of the underlying asset and ...
2
He's saying that if you know the volatility, and you hedge continuously, you can lock in the exact Black-Scholes price. Any deviation from that delta hedging scheme must result in noise. ie the replicated price must have a distribution with some width around the theoretical value. This noise does not create systematic profit or loss, because it's just ...
2
$\alpha_t$ must be chosen prior to stock price movements so the expression $S_t d\alpha$ does not make sense: we can't take a position in a stock based off information that we don't know yet. The missing step is that the replicating portfolio is required to be self financing: that is, for all $t$ the following equations hold: $$X_t=\Delta S_t+\Gamma M_t$$ ...
2
Assuming zero interest, the put option has the price \begin{align*} KN(-d_2)-S_0N(-d_1), \end{align*} and delta $-N(-d_1)$. When $N(-d_1)$ units of stocks are shorted and invested in bonds, the total value in bonds is $KN(-d_2)$, which is indeed greater than the option price. However, as you have shorted $N(-d_1)$ units of stocks, your portfolio value is ...
2
if you hedge it means that your USD return equals (neglecting hedging cost) your EUR return. You just change the name. If you want to know what the return measured in EUR is, then you either calculate the price of S&P in EUR and then take returns or equivalently you calculate the product of the local return and the return of the USD in EUR in the ...
1
Commonly used procedures: A) hedge when a 1 sd move has happened; B) hedge when your delta position exceeds some risk limit; C) hedge once a day; D) hedge based on your desired delta position. All are used. I personally prefer B.
1
There has been a lot of work in recent years on the pricing and hedging of volatility derivatives, leading to some non-obvious, even startling results. It is summarized in Mark Joshi's book More Mathematical Finance among other places. It all started with the work of Anthony Neuberger on the Log Contract in 1994, which seemed to be a theoretical result ...
1
The EUR is normally quoted as EURUSD, i.e. the value of one euro measured in dollars, currently about 1.1281. If the S&P index is $sp_t$ and the EURUSD rate is $eu_t$ then the S&P converted into Euros is $sp(t)/eu(t)$. The arithmetic 1 day return on this is $-1+\frac{sp_t}{sp_{t-1}}\frac{eu_{t-1}}{eu_t}$. The logarithmic return is ...
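A two-line numerical check of the conversion described above (made-up prices; `sp` is the S&P level in USD, `eu` the EURUSD rate), not part of the original answer:

```python
from math import log

sp_prev, sp_now = 2000.0, 2010.0   # S&P level, in USD (illustrative)
eu_prev, eu_now = 1.1200, 1.1281   # EURUSD, USD per EUR (illustrative)

# the index converted into EUR is sp/eu
arith = -1 + (sp_now / sp_prev) * (eu_prev / eu_now)
logret = log(sp_now / sp_prev) - log(eu_now / eu_prev)
print(f"arithmetic 1-day EUR return:  {arith:.5%}")
print(f"logarithmic 1-day EUR return: {logret:.5%}")
```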
1
You need to hedge future cash flows (not future value) using a fixed for fixed currency swap (equivalent to a series of forwards). This translates into a "cash flow hedge". Hedging present value would be hedging the "fair value" of the bond with a fixed-for-float currency swap. Using a fixed for fixed swap will convert your cash flows into desired currency ...
1
In a Black-Scholes world a portfolio of options (some calls, some puts) of different maturities and strikes on the same underlying still has one delta and one gamma, which can be calculated by summing over the deltas and gammas. So you still have the same setup as with a single option situation.
1
Instead of just considering a parallel shift of the whole volatility surface, you can decompose the surface into maturities/strikes domains, so called buckets and consider Vega buckets which are sensitivities wrt to bumps of each of these domains. The vol smile is often inter/extra-polated using a model calibrated to market prices, e.g. the SABR model or ...
1
we should first define some notation before discussing pricing. Let $t_0$ be the initial time and $t_1, \ldots, t_M$ be pre-specified exercise dates with $t_0 < t_1 < \cdots < t_M = T$, the final maturity, and $\Delta t = t_m - t_{m-1}$. Without loss of generality it is assumed the exercise dates are equidistant. To price a Bermudan option, its value is split ...
1
It's a combination of too few sample paths and/or too small an increment. Your estimation error on the price is magnified by the $dS^2$. Try using a larger sample or a larger increment. Alternatively, you can use a multiplier instead of a fixed increment; in my experience, it usually yields better results.
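A hedged illustration of the advice above: estimating gamma by central differences under GBM, with common random numbers reused across bumps so that the sampling noise largely cancels, and with several bump sizes to compare. Everything here (model, parameters, sample size) is an assumption for demonstration, not taken from the answer:

```python
import numpy as np

rng = np.random.default_rng(0)

def mc_call_price(S0, K, T, r, sigma, z):
    """GBM terminal price -> discounted mean call payoff, reusing the normals z."""
    ST = S0 * np.exp((r - 0.5 * sigma**2) * T + sigma * np.sqrt(T) * z)
    return np.exp(-r * T) * np.maximum(ST - K, 0.0).mean()

S0, K, T, r, sigma = 100.0, 100.0, 1.0, 0.02, 0.2
z = rng.standard_normal(400_000)  # common random numbers across all bumps

for dS in (0.01, 1.0, 5.0):  # tiny bumps amplify whatever noise remains
    up  = mc_call_price(S0 + dS, K, T, r, sigma, z)
    mid = mc_call_price(S0,      K, T, r, sigma, z)
    dn  = mc_call_price(S0 - dS, K, T, r, sigma, z)
    gamma = (up - 2 * mid + dn) / dS**2
    print(f"dS={dS:>5}: gamma ~ {gamma:.5f}")
```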
1
You have already agreed to pay $QK$ EUR at $T$ to receive $Q$ units of A. If you sell $Q$ lots of $F^A(t,T)$ then you will receive $Q F^A(t,T)$ EUR and deliver $Q$ units of A. The combined flow is now just in EUR: at $T$ you receive a net of $Q(F^A(t,T)-K)$ EUR. You can hedge that by selling $Q(F^A(t,T)-K)$ of $F^{FX}(t,T).$ Then with both hedges, the net ...
1
The most rigorous approach I have seen so far eliminating the risk premium is this one: Emanuel Derman: The Perception of Time, Risk and Return During Periods of Speculation (2002) Equation 2.23 on page 11 derives $\mu$ ~ $r$ but it only holds in the limit when you hypothesize countless uncorrelated stocks in a diversifiable market. Still an interesting ...
Only top voted, non community-wiki answers of a minimum length are eligible
http://zbmath.org/?q=an:1161.03008 | zbMATH — the first resource for mathematics
Complexity of propositional projection temporal logic with star. (English) Zbl 1161.03008
Summary: This paper investigates the complexity of Propositional Projection Temporal Logic with Star (PPTL$^*$). To this end, Propositional Projection Temporal Logic (PPTL) is first extended to include the projection star. Then, by reducing the emptiness problem of star-free expressions to the problem of the satisfiability of PPTL$^*$ formulas, the lower bound of the complexity for the satisfiability of PPTL$^*$ formulas is proved to be non-elementary. To prove the decidability of PPTL$^*$, the normal form, normal form graph (NFG) and labelled normal form graph (LNFG) for PPTL$^*$ are defined, and algorithms for transforming a formula to its normal form and LNFG are presented. Finally, a decision algorithm for checking the satisfiability of PPTL$^*$ formulas is formalised using LNFGs.
MSC:
03B44 Temporal logic
03B25 Decidability of theories; sets of sentences
68Q60 Specification and verification of programs
http://mth235.com/interactive-graphs/ | # Interactive Graphs
Click on the following links to open an interactive graph.
The M&M Experiment
• This is a simulator for the M&M population experiment. Instead of M&Ms that can be up or down, we have a row of boxes that can contain a 0 or a 1. M&M down is a 0, M&M up is a 1.
• The experiment starts with an initial number of boxes, all set to 1 except the first box, which counts how many 1s we have in that row.
• The second row is constructed from the first by randomly choosing between 0 and 1 for each box. Again, the first box counts how many 1s we got.
• The third row is constructed from the second: if a box had a 0, it remains a 0; if it had a 1, we randomly choose between 0 and 1. As before, the first box counts the number of 1s in that row.
• We keep going in this way until we have the number of rows given by the constant in Trials.
• We can also add a fixed number of boxes on each round; we call these added boxes the immigrants.
Finally, we can generalize the M&M experiment by changing the mortality coefficient from 50% to any percent from 0 to 100.
• The simulation has one button and four sliders:
• The button Run runs the simulation with the chosen parameters.
• The slider Initial fixes the initial amount of M&Ms. Default Initial = 30
• The slider Imm fixes the number of immigrants M&M added on every trial. Default Imm = 0
• The slider Trials sets the number of rounds we have in the simulation. Default Trials = 10
• The slider Mortality sets the mortality percent. Default Mortality = 50%
We can also plot the first column, which holds the counts of boxes containing 1. A minimal Python version of the experiment is sketched below.
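The rules above are simple enough to reproduce outside the interactive page. Here is a short Python sketch of the same experiment; the function name and defaults are mine, chosen to match the sliders described above:

```python
import random

def mm_experiment(initial=30, imm=0, trials=10, mortality=0.5, seed=None):
    """Simulate the M&M population experiment.

    Each round, every surviving box (a 1) stays a 1 with probability
    1 - mortality; `imm` immigrant boxes are added each round.
    Returns the list of population counts, one per row."""
    rng = random.Random(seed)
    alive = initial
    counts = [alive]
    for _ in range(trials):
        alive = sum(1 for _ in range(alive) if rng.random() >= mortality)
        alive += imm  # immigrants added every round
        counts.append(alive)
    return counts

print(mm_experiment(initial=30, imm=0, trials=10, mortality=0.5, seed=1))
```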
Going Viral
This is a simulator of a viral infection spreading in a population with the following rules (a short Python sketch follows the list):
• The total population where the virus can spread is fixed at 100.
• Each person in the population has a fixed Identification Number in the interval $$[1,100]$$.
• At day zero there is only one person infected.
• Each day, each infected person passes the virus to exactly one random person in the population.
• Infected persons remain always infected.
• Nobody dies.
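A minimal Python sketch of these rules (a re-implementation for illustration, not the simulator's own code):

```python
import random

def going_viral(days=10, population=100):
    """Return the number of infected persons on each day."""
    infected = {1}                       # day zero: a single infected person
    history = [len(infected)]
    for _ in range(days):
        # each infected person passes the virus to one random person
        contacts = {random.randint(1, population) for _ in infected}
        infected |= contacts             # infected persons remain infected
        history.append(len(infected))
    return history

print(going_viral())
```

The counts grow roughly exponentially at first and then saturate near 100, since nobody recovers or dies.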
Direction Field
We plot the direction field for the differential equation $$y' = \sin(y)$$; a matplotlib version is sketched after the list. For the segments you can change:
• The width.
• The length.
• The $$x$$ separation and density.
• The $$y$$ separation and density.
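A rough matplotlib equivalent, with the segment density and length as the knobs (the plotting window and grid density below are arbitrary choices):

```python
import numpy as np
import matplotlib.pyplot as plt

# Direction field for y' = sin(y); the slope at (x, y) depends only on y here.
x, y = np.meshgrid(np.linspace(-4, 4, 21), np.linspace(-4, 4, 21))
dx, dy = np.ones_like(y), np.sin(y)
length = np.hypot(dx, dy)                    # normalize to equal-length segments
plt.quiver(x, y, dx / length, dy / length, angles='xy',
           headaxislength=0, headlength=0)   # headless arrows = segments
plt.xlabel('x'); plt.ylabel('y')
plt.show()
```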
Picard Iteration vs Taylor Expansion: Linear Equations - Explicit Solution
MathStudio Link: Picard Iteration vs Taylor Expansion: Linear Equations.
• We graph in purple the functions $$y(t)$$ solutions of the initial value problem
$y^{\prime}(t) = 2 \,y(t) +3, \qquad y(0) = 1.$
• The slider Function turns on-off the graph of the solution $$y(t)$$.
• We graph in blue approximate solutions $$y_n$$ of the differential equation constructed with the Picard iteration up to order $$n=10$$. The slider Picard_App_Blue turns on-off the Picard approximate solution.
• We graph in green the $$n$$-order Taylor expansion centered $$t=0$$ of the solution of the differential equation up to order $$n=10$$. The slider Taylor_App_Green turns on-off the Taylor approximation of the solution.
We conclude that the Picard iteration is identical to the Taylor expansion for solutions of the linear differential equation above.
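That identity is easy to check symbolically. A short SymPy sketch, independent of the MathStudio graph linked above:

```python
import sympy as sp

t, s = sp.symbols('t s')

# Picard iteration for y' = 2y + 3, y(0) = 1:
#   y_{n+1}(t) = 1 + integral_0^t (2 y_n(s) + 3) ds
y = sp.Integer(1)
for _ in range(5):
    y = 1 + sp.integrate(2*y.subs(t, s) + 3, (s, 0, t))
print(sp.expand(y))

# Taylor polynomial of the exact solution y(t) = (5 e^{2t} - 3)/2 about t = 0
exact = (5*sp.exp(2*t) - 3) / 2
print(sp.series(exact, t, 0, 6).removeO())
```

Both print the same degree-5 polynomial, as the graphs suggest.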
Picard Iteration vs Taylor Expansion: Non-Linear Equations - Explicit Solution
MathStudio Link: Picard Iteration vs Taylor Expansion: Non-Linear Equations – Explicit Solution.
• We graph in purple the functions $$y(t)$$ solutions of the initial value problem
$y^{\prime}(t) = y^2(t), \qquad y(0) = -1.$
• The slider Function turns on-off the graph of the solution $$y(t)$$.
• We graph in blue approximate solutions $$y_n$$ of the differential equation constructed with the Picard iteration up to order $$n=5$$. The slider Picard_App_Blue turns on-off the Picard approximate solution.
• We graph in green the $$n$$-order Taylor expansion centered $$t=0$$ of the solution of the differential equation up to order $$n=5$$. The slider Taylor_App_Green turns on-off the Taylor approximation of the solution.
We conclude that the Picard iteration is a different and better approximation than the Taylor expansion for solutions of the non-linear differential equation above.
Picard Iteration vs Taylor Expansion: Non-Linear Equations - No Explicit Solution
MathStudio Link: Picard Iteration vs Taylor Expansion: Non-Linear Equations – No Explicit Solution.
• In this second example we study approximate solutions of the initial value problem
$y^{\prime}(t) = y^2(t) + t, \qquad y(0) = -1.$
• In this case we do not have an explicit expression for the solution $$y(t)$$. We only have the approximate solutions.
• We graph in blue approximate solutions $$y_n$$ of the differential equation constructed with the Picard iteration up to order $$n=5$$. The slider Picard_App_Blue turns on-off the Picard approximate solution.
• We graph in green the $$n$$-order Taylor expansion centered $$t=0$$ of the solution of the differential equation up to order $$n=5$$. The slider Taylor_App_Green turns on-off the Taylor approximation of the solution.
We conclude, one more time, that the Picard iteration is a different (hopefully better) approximation than the Taylor expansion for solutions of the non-linear differential equation above.
Beating and Resonance
MathStudio Link: Beating and Resonance on an LC-series Circuit.
• An LC-series circuit with a voltage source is described by Kirchhoff's equation
$L \, I'(t) + \frac{1}{C} \int I(t)\, dt = V(t).$
• Consider first the resonant case $$V(t) = L \sin(\omega_0 t)$$, where $$\displaystyle\omega_0 = \frac{1}{\sqrt{LC}}$$ is the natural frequency. Computing one time derivative and dividing by $$L$$ we get
$I'' + \omega_0^2 I = \omega_0 \cos(\omega_0 t).$ The solution with initial conditions $I(0)=0$ and $I'(0)=0$ is
$I(t) = \frac{t}{2}\,\sin(\omega_0 t).$
• Consider the case that $$V(t) = L \sin(\nu t)$$. In this case, computing one time derivative and dividing by $$L$$ we get
$I'' + \omega_0^2 I = \nu \cos(\nu t).$ For $\nu \ne \omega_0$, the solution with initial conditions $I(0)=0$ and $I'(0)=0$ is
$\tilde I(t) = \frac{\nu}{\omega_0^2-\nu^2}\bigl( \cos(\nu t)-\cos(\omega_0 t) \bigr).$
• Click on the interactive graph link here to see how the function changes when $$\nu \to \omega_0$$, exhibiting the beating phenomenon.
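As a complement to the interactive graph, a quick numpy/matplotlib sketch of $$\tilde I$$ for drive frequencies approaching the natural frequency (here $$\omega_0$$ is set to 1 purely for illustration):

```python
import numpy as np
import matplotlib.pyplot as plt

w0 = 1.0                                  # natural frequency, arbitrary units
t = np.linspace(0, 200, 4000)
for nu in (0.80, 0.90, 0.95):             # nu -> w0 makes the beat envelope grow
    I = nu / (w0**2 - nu**2) * (np.cos(nu*t) - np.cos(w0*t))
    plt.plot(t, I, label=f'nu = {nu}')
plt.legend(); plt.xlabel('t'); plt.show()
```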
Dirac Delta Sequences
We show a few different sequences of functions that all have Dirac's delta as their limit; a quick numerical check of their normalisation follows the list. The sequences are:
• In red: $$\displaystyle \delta_n(t) = n \,\bigl(u(t) - u(t- \tfrac{1}{n})\bigr)$$.
• In blue: $$\displaystyle \delta_n(t) = \frac{n}{2} \,\bigl(u(t+\tfrac{1}{n}) - u(t-\tfrac{1}{n})\bigr)$$.
• In green: $$\displaystyle \delta_n(t) = \frac{n}{\sqrt{\pi}} \, e^{-n^2 t^2}$$.
• In purple: $$\displaystyle \delta_n(t) = \frac{1}{\pi}\, \frac{n}{1 + n^2 t^2}$$.
• In gray: $$\displaystyle \delta_n(t) = \frac{\sin(n t)}{\pi \,t}$$.
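Each family integrates to (approximately) 1 for large $$n$$; a short numerical check, with the removable singularity of the last family at $$t=0$$ handled by `np.sinc`:

```python
import numpy as np

t = np.linspace(-50, 50, 2_000_001)
n = 40
u = lambda x: (x >= 0).astype(float)               # unit step
families = {
    'red':    n * (u(t) - u(t - 1/n)),
    'blue':   n/2 * (u(t + 1/n) - u(t - 1/n)),
    'green':  n/np.sqrt(np.pi) * np.exp(-(n*t)**2),
    'purple': n / (np.pi * (1 + (n*t)**2)),
    'gray':   np.sinc(n*t/np.pi) * n/np.pi,        # equals sin(nt)/(pi t)
}
for name, d in families.items():
    print(name, np.trapz(d, t))                    # each is close to 1
```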
Convolution Graphs
Convolution Graphs.
We compute the convolution of the functions $$f$$ and $$g$$,
$(f*g)(t) = \int_0^t f(\tau) \,g(t-\tau)\, d\tau,$ and we plot $$f$$ in blue, $$g$$ in green, and $$f*g$$ in red.
We see that the convolution is a measure of (although not equal to) the area of the overlap of the two functions, $$f$$ and $$g$$, which is shown in gray. A short numerical check follows the examples below.
• MathStudio Link: Convolution: Example 1 (Slow).
In this graph we choose
$f(x) = u(x) -u(x-1), \qquad g(x) = u(x) -u(x-1).$
• MathStudio Link: Convolution: Example 2.
In this graph we choose
$f(x) = u(x) \, e^{-x}, \qquad g(x) = u(x) \sin(x).$
• MathStudio Link: Convolution, Example 3. (Slow).
In this graph we choose
$f(x) = u(x) -u(x-1), \qquad g(x) = 2\,u(x) \, e^{-x}.$
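These can all be checked numerically. For Example 2 the convolution even has a closed form, $$(f*g)(t) = \tfrac{1}{2}\bigl(\sin t - \cos t + e^{-t}\bigr)$$, which a Riemann-sum convolution reproduces:

```python
import numpy as np

dt = 0.001
t = np.arange(0, 10, dt)
f = np.exp(-t)                                # u(t) e^{-t}
g = np.sin(t)                                 # u(t) sin(t)
fg = np.convolve(f, g)[:len(t)] * dt          # approximates (f*g)(t) on [0, 10)

exact = (np.sin(t) - np.cos(t) + np.exp(-t)) / 2
print(np.max(np.abs(fg - exact)))             # small discretization error
```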
Linear Pendulum
Predator-Prey System
Predator-Prey System: Finite Food
Infectious Disease Models (SIR)
Eigenvectors
2x2 Systems of Linear Differential Equations: Real Eigenvectors
2x2 Systems of Linear Differential Equations: Complex Eigenvectors
The Nonlinear Pendulum
Competing Species: Extinction
Competing Species: Coexistence
BVP and Eigenfunctions
In the first picture we show the solution to the BVP
$y''+ \pi^2 y =0, \qquad y(0)=1, \quad y(1)=-1.$ This BVP has infinitely many solutions, given by
$y(x) = \cos(\pi x) + k \sin(\pi x), \qquad k\in \mathbb{R}.$
• We plot the fundamental solution $$y_1(x)= \cos(\pi x)$$ in red.
• We plot the fundamental solution $$y_2(x)= \sin(\pi x)$$ in red.
• We plot the solution $$y_k(x) = k \sin(\pi x)$$ in purple.
• We plot the solution $$y(x) = \cos(\pi x) + k\sin(\pi x)$$ in blue.
In the second picture we plot the function
$y_n(x) = \sin(n\pi x), \qquad n\in \mathbb{R}.$ In the case that $$n$$ is an integer, these functions are eigenfunctions, that is, solutions of
$y'' + \lambda y =0, \qquad y(0)=0, \quad y(1)=0.$ In this problem the eigenvalues are $$\lambda_n = (n\pi)^2$$ for $$n = 1, 2, 3, \cdots$$, and the eigenfunctions are the functions $$y_n$$ above for $$n=1,2,3,\cdots$$.
Fourier Series
Cosine and Sine Series of Even and Odd Extensions
The Heat Equation: Dirichlet BC
The Heat Equation: Neumann BC | 2021-07-24 04:06:38 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9084564447402954, "perplexity": 1022.1420371056669}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-31/segments/1627046150129.50/warc/CC-MAIN-20210724032221-20210724062221-00391.warc.gz"} |
http://mathoverflow.net/feeds/question/16066 | # What is summation in the sense of a principal value?

Asked by vilvarin (2010-02-22):

In one paper I saw this equality: $$\sum_{\eta=-\infty}^{\infty}\frac{z}{(z+\eta)}=\pi z\cot(\pi z)$$ which is the same as $$\sum_{\eta=-\infty}^{\infty}\frac{1}{(z+\eta)}=\pi \cot(\pi z)$$ where summation is understood in the sense of a principal value. What does it mean?

In another paper I found the next expression: $$\frac{\exp(2\pi iaz)}{\exp(2\pi iz)-1}=\frac{1}{2\pi i}\sum_{n=-\infty}^{\infty}\frac{\exp(2\pi ina)}{z-n}$$ For $a=0$ it is equivalent to $$\frac{1}{\exp(2\pi iz)-1}=\frac{1}{2\pi i}\sum_{n=-\infty}^{\infty}\frac{1}{z+n}$$ which is not exactly the same expression as in the first case: $$\sum_{n=-\infty}^{\infty}\frac{1}{z+n}=\pi \cot(\pi z)-i\pi$$ Where is my mistake? If the second formula is wrong, what is the correct formula for the second case? $$\sum_{n=-\infty}^{\infty}\frac{\exp(2\pi ina)}{z+n}=?$$

Answer by L Spice (2010-02-22): A principal-value sum (or integral) is usually one in which unconditional summation (or integration) does not converge, so one needs to sum in a particular way to achieve convergence. I suspect that, in this case, the necessary summation is symmetric, so that we consider $\lim_{N \to \infty} \sum_{n = -N}^{n = N} f(n)$ instead of $\sum_{n = 1}^\infty f(-n) + \sum_{n = 0}^\infty f(n)$. ~~It's not quite clear to me what your issue is with the two formulae you mention. Since you are summing different functions ($1/(z + n)$ versus $z/(z + n)$), it is no surprise that the answers are different. What am I missing?~~ (Sorry, I did not notice that you had already factored out the $z$.)

Answer by Harald Hanche-Olsen (2010-02-22): See the answer of L Spice for the principal value bit. For the second bit, the formula from the second paper is rather suspect. For example, $a=1/2$ produces the divergent sum (even in the principal value sense) $$\sum_{n=-\infty}^\infty \frac{(-1)^n}{z-n}.$$ And for your case $a=0$, $z=1/2$ yields $\sum(z-n)^{-1}=0$ by symmetry, so the formula cannot be right then either. | 2013-05-19 13:18:26 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9660042524337769, "perplexity": 792.765523285398}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368697552127/warc/CC-MAIN-20130516094552-00013-ip-10-60-113-184.ec2.internal.warc.gz"}
http://math.stackexchange.com/questions/261131/question-about-the-laplace-of-a-step-function/263757 | # Question about the Laplace of a step function.
I'm just now learning how to take the Laplace of a simple step function, but I have a question about the terms. I'll show my work so far and hopefully someone can step in and answer the question I pose at the end.
$\int_0^\infty u_c(t)f(t-c)e^{-st}dt$
$u_c$ is the unit step function.
Up until the point $c$, this function will evaluate to zero. So, I'll get the same value if I change my lower limit to $c$.
$\int_0^\infty u_c(t)f(t-c)e^{-st}dt = \int_c^\infty f(t-c)e^{-st}dt$
That takes care of the unit step function. Now, a substitution and a rewrite.
$x=t-c, dx=dt, t=x+c$
$\int_0^\infty f(x)e^{-s(x+c)}dx$
$=e^{-sc}\int_0^\infty e^{-sx}f(x)dx$
Now, I can see that the integral here is the Laplace of $f(x)$ but with an $x$ where a $t$ usually is.
Here's where my confusion is. The book then says that I can go ahead and switch $x$ for $t$ and say that the Laplace transform of my step function is
$=e^{-sc}\mathscr{L}\{f(t)\}$
I understand that the choice of symbol is arbitrary. However, $t$ still has a context in this problem, and it's equal to $x+c$. So, is it really correct to use $t$ again like this?
I mean, saying that
$\mathscr{L}\{f(t)\}=\mathscr{L}\{f(x)\}$ is fine if $x=t$. But $x=t-c$. So I'm confused.
Is it just a matter of assuming that $c$ is relatively small? That seems a little too convenient.
The notation is poor, but unfortunately is ingrained in text books.
The Laplace operator $\mathscr{L}$ takes a function and returns a function. It would be better to write $\mathscr{L} (f)$ (or $\mathscr{L} f$) or $\mathscr{L} (t \mapsto f(t))$ to denote the $s$-domain function. The value of the transformed function at some $s \in \mathbb{C}$ would be $(\mathscr{L} f) (s)$, or even $\mathscr{L} (t \mapsto f(t))(s)$.
Then $\mathscr{L} (t \mapsto f(t))$ and $\mathscr{L} (x \mapsto f(x))$ are obviously the same functions, just with a different 'dummy' variable representing the input function.
So if we let $\phi(t) = u_c(t)f(t-c)$, what you have shown above is that \begin{eqnarray} (\mathscr{L} \phi) (s) = e^{-sc} (\mathscr{L} (x \mapsto f(x))) (s) = e^{-sc} (\mathscr{L} (t \mapsto f(t))) (s) = e^{-sc} (\mathscr{L} f) (s) \end{eqnarray}
Strictly speaking, the second $t$ they refer to is different than the first $t$, but it doesn't matter. Here's why: when you get to this step $$e^{-sc}\int_0^\infty e^{-sx}f(x)dx,$$ just think of that latter part as $$\int_0^\infty e^{-s\bigstar}f(\bigstar)d\bigstar$$ which is (by definition) the same as $$\mathscr{L}\{f(\bigstar)\},$$ regardless of what $\bigstar$ is (it's just a dummy variable of integration).
For emphasis of what plays the role of the independent variable in the transform domain, we might write $$\mathscr{L}\{f(\bigstar)\}(s)\quad\text{ or }\quad F(s).$$
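As an aside (not part of either answer), the shift rule itself is easy to sanity-check with SymPy by computing both integrals from the question directly; the choices $f(x)=\sin x$ and $c=3/2$ below are arbitrary.

```python
import sympy as sp

t, s, x = sp.symbols('t s x', positive=True)
c = sp.Rational(3, 2)                        # an arbitrary shift c > 0
f = sp.sin(x)                                # an arbitrary f

# integral from c to oo of f(t-c) e^{-st} dt   versus   e^{-sc} * L{f}(s)
lhs = sp.integrate(f.subs(x, t - c) * sp.exp(-s*t), (t, c, sp.oo))
rhs = sp.exp(-s*c) * sp.integrate(f.subs(x, t) * sp.exp(-s*t), (t, 0, sp.oo))
print(sp.simplify(lhs - rhs))                # prints 0
```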
- | 2014-12-22 20:58:22 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9159663915634155, "perplexity": 172.24109421784584}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-52/segments/1418802776563.13/warc/CC-MAIN-20141217075256-00025-ip-10-231-17-201.ec2.internal.warc.gz"} |
https://web2.0calc.com/questions/how-many-rimes | +0
what is ..
0
65
2
3÷2.94
Jan 7, 2021
edited by Guest Jan 7, 2021
edited by Guest Jan 7, 2021
#1
0
The answer to the question is 1.02040816327. You can use a calculator and see for yourself. Hope this helps!
Jan 7, 2021
#2
+41
0
The answer is $$50\over49$$
First you can write it as $$3\over2.94$$
If you multiply the numerator and denominator by 100 you get $$300\over294$$
Simplified you get $$50\over49$$
Jan 8, 2021 | 2021-04-13 22:27:37 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8621377348899841, "perplexity": 5763.644344299512}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-17/segments/1618038075074.29/warc/CC-MAIN-20210413213655-20210414003655-00204.warc.gz"} |
https://admin.clutchprep.com/organic-chemistry/practice-problems/16148/write-a-structural-formula-for-the-most-stable-conformation-of-each-of-the-follo-6 | # Problem: Write a structural formula for the most stable conformation of each of the following compounds:
###### Problem Details
Write a structural formula for the most stable conformation of each of the following compounds: | 2020-06-02 02:10:31 | {"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9317340850830078, "perplexity": 1366.1860225595676}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-24/segments/1590347422065.56/warc/CC-MAIN-20200602002343-20200602032343-00092.warc.gz"} |
https://stats.stackexchange.com/questions/523007/beta-distribution-for-uncertain-binary-trials | # Beta distribution for uncertain binary trials
I have a larger problem but have presented what I believe is a minimal example. Imagine that you are trying to determine the true probability of a potentially-biased coin landing on heads, and want to take a bayesian perspective. Our prior is hence that the probability of heads is beta(1,1) distributed. Say that we flip the coin once and we get a heads. Our posterior is now beta(2,1).
Then we flip once more, but the coin lands crooked against an object on the table. It looks like it would have landed tails, but say that we are only 70% sure that it would have landed tails (so 30% sure that it would have been heads).
Obviously the 'best' solution is to ignore and retest, but if these coin flips are limited/expensive that might not be ideal. So is there anyway to include this result even with the uncertainty? Possibilities I've considered are
1. Ignore result, p ~ beta(2,1)
2. Include and pretend we are certain, p ~ beta(2,2)
3. Include with uncertainty, p ~ beta(2, 1.7)
4. Include with uncertainty for both, p ~ beta(2.3 1.7)
Option 4 seems reasonable, but I'm worried this is a statistical golem and I'm missing something obvious. I'm trying to stay in the bayesian setting so the answer here Distribution of partially observable binominal parameter isn't sufficient. Cheers!
If the coin toss going wrong is just a random thing that has nothing to do with what the result of the coin toss would have been (had nothing gone wrong), then ignoring the result of this particular toss is the easiest.
Pretending that we are completely certain or that we only add a downscaled weight to the second shape parameter (options 2 and 3) ignores the possibility of the toss could have ended up as heads (i.e. it's not quite right).
Adding 0.3 and 0.7 is the right thing to do, if you truly believe that there was a 30:70 probability that the coin would have come up heads vs. tails. However, note you need to believe this no matter how unfair the coin would be in truth. Perhaps, it only looks like that conditional on the coin being fair? Let's look at an extreme example:
• You have observed 99 heads and 0 tails
• A coin toss goes wrong and you feel like that was 30% likely to be heads and 70% tails.
With option 4, your belief about the proportion of heads before this 100th toss was a 95% credible interval from 0.994 to 0.9997. After this toss it's a 95% CrI from 0.95 to 0.998. Before this toss, the probability that the proportion is below what is now your lower CrI limit was less than $$2 \times 10^{-22}$$, but now it's 0.025. You may question whether that seems quite right, but it's indeed the right update to your belief, if you really think that toss would have landed tails-up with 70% probability and heads-up with 30% probability.
Another issue with option 4 is that if you keep having such "failed" coin tosses and they all favor tails over heads (in your judgement) by 70:30, then you eventually converge to believing in a probability of the coin coming up tails being 70%. Again, as above this may be the right update.
An alternative model of what is going on is that you think that if this is a fair coin, then what you saw was increasing the probability of this toss ending up tails from 50% to 70% (=increasing the log-odds from 0 to log(0.7)-log(0.3)=0.8472979). So, in that case your belief about the coin overall influences what you believe the outcome of this coin toss was. In fact, the more you learn, your opinion on this toss will change as more data comes in in the future. In that case, some simple conjugate updating rule will not work. I feat you'll have to write down an explicit model and do MCMC sampling for it, I fear. That could look like this:
• Observed coin tosses follow $$Y_i \sim \text{Binary}(\pi)$$
• Messed up tosses also follow $$Z_i \sim \text{Binary}(\pi)$$ for the latent (but unobserved) outcome they would have had
• We only observe that $$P(Z_i = 1)$$ is $$\text{logit}^{-1}(\text{logit}(\pi)-0.847)$$.
That's actually surprisingly hard to code up in Stan (my normal preferred MCMC sampler) due to the discrete latent variable, but presumably this is possible to deal with, but it's definitely a bit messy.
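For what it's worth, that discrete latent variable can also be marginalised by hand, which sidesteps MCMC entirely: conditional on $$\pi$$, the crooked toss contributes a mixture likelihood $$0.3\pi + 0.7(1-\pi)$$, which is exactly the 3:7 likelihood ratio behind the log-odds shift above. A grid-approximation sketch of that model (in Python for concreteness, using the one clean head from the example):

```python
import numpy as np

grid = np.linspace(1e-6, 1 - 1e-6, 100_000)
prior = np.ones_like(grid)                     # Beta(1,1) prior on pi
clean = grid                                   # one observed head
crooked = 0.3 * grid + 0.7 * (1 - grid)        # latent toss, marginalised out

post = prior * clean * crooked
post /= np.trapz(post, grid)

cdf = np.cumsum(post); cdf /= cdf[-1]
lo, hi = grid[np.searchsorted(cdf, 0.025)], grid[np.searchsorted(cdf, 0.975)]
print(np.trapz(grid * post, grid), (lo, hi))   # posterior mean and 95% CrI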
• Thank you very much. So it seems that if you can be accurate in your uncertainty then option 4 is reasonable. However, if if for example you had just forgotten whether it was heads or tails then updating the model to beta(a+.5, b+.5) would not be reasonable. Is that correct? EDIT: Additionally, do you have any idea on what doing option 4 might be called? In case I would want to do further reading, I was trying to avoid going the MCMC route, but agree that creating the chain of dependant probabilities would probably work. May 10 at 9:09
• I would call this "writing down the likelihood for the data generating model and summing up the likelihood contributions from the data constellations compatibile with the observed data". I think in the missing data literature, this type of method (but also the alternative I described) would be called "full information maximum likelihood". MCMC is hard to get started with and really convenient and easy, if one gets used to it (of course, hard to say if it's worth it to get started). May 10 at 12:56
I think you are overthinking the exact prior distribution to use. If you have examined the coin and it seems symmetrical, that information alone might push you closer to $$\mathsf{Beta}(2,2)$$ than to $$\mathsf{Beta}(1,1).$$ Also, there is accumulating experimental evidence that the method of tossing may have more to do with results than do minor imperfections in the coin.
An analysis after you have the data you can afford could show how sensitive your prior distribution is to your choice of prior.
Suppose you can afford $$n = 10$$ of your expensive tosses of the coin and you get seven heads and three tails. Then let's look at the practical difference between priors $$\mathsf{Beta}(2,1)$$ and $$\mathsf{Beta}(2,1.7).$$ Posteriors $$\mathsf{Beta}(9,4)$$ [solid blue] and $$\mathsf{Beta}(9,4.7)$$ plotted below.
hdr = "Posterior Distn's BETA(9,4) [blue] and BETA(9,4.7)"
curve(dbeta(x, 9,4), 0, 1, col="blue", lwd=2, main=hdr)
curve(dbeta(x, 9,4.7), add=T, col="brown", lwd=2, lty="dotted")
abline(h=0, col="green2")
Respective 95% posterior credible intervals are $$(0.43, 0.90)$$ and $$(0.40, 0.87).$$ Is the difference between these two posteriors going to make a practical difference in your course of action?
qbeta(c(.025,.975), 9,4)
[1] 0.4281415 0.9007539
qbeta(c(.025,.975), 9,4.7)
[1] 0.3974857 0.8731326 | 2021-10-26 02:02:58 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 14, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8041074275970459, "perplexity": 638.2041161200639}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-43/segments/1634323587794.19/warc/CC-MAIN-20211026011138-20211026041138-00601.warc.gz"} |
https://n1ne.writeas.com/ | # n1ne
nein nein nein
## Adding Matomo tracking to my WriteAs blog
You know, I like to take a look at statistics of the content I create. WriteAs has built-in functionality for this, but it is quite limited. The reports it gives you contain only data about views, without giving more information. And also, their charts are broken: my previous post has 21 views, but the chart shows only one.
I can solve this by adding my own tracking to my blog, using the same analytics software as WriteAs itself: Matomo (previously named Piwik). I have it installed on my dedicated server and need it anyway for my client's sites. I can add it to my site by selecting my preferred blog URL to be n1ne.writeas.com, and then going to the blog customization page to edit the JavaScript code that is injected in every page. Custom JavaScript is a pro feature.
Here's what it looks like on the customize page:
## Solving exam schedules using PySchedule
I'm in the process of creating an exam scheduler for my end of studies project. It involves a GUI for entering all the data and a solver that takes all the data and creates a solution.
After looking around a bit, I come to the conclusion that I cannot create a solver from scratch. There are multiple ways to approach that problem: genetic algorithms, heuristics and constraint programming. I chose to go with the last one mainly thanks to the next paragraph.
I'm quite lazy, so I went looking for the nicest library I could find that would handle as much for me as possible, no matter the programming language. And to my surprise, I found the perfect one: PySchedule. As you may guess from the name, It's a python library that does scheduling stuff. The maintainer is very active on the project and there are tons of examples.
I created a script that parses yaml files for a list of teachers/classes/courses and the links between them, and PySchedule happily solves it in under 0,25 seconds for the test data I have entered (see below). The test data is the data from the exam schedule of the previous year at my school, but only for the years 1, 2, 4, 5 and 6, which is half of what needs to be scheduled.
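For flavor, the core of such a model in pyschedule can be very small. Here is a toy sketch with made-up names (not the project's actual code, and it assumes pyschedule's MIP solver backend is installed): each exam is a length-1 task that requires both its class and its teacher, so exams sharing either resource cannot overlap.

```python
from pyschedule import Scenario, solvers

S = Scenario('exams', horizon=6)
class_a, class_b = S.Resource('class_a'), S.Resource('class_b')
teacher = S.Resource('teacher_x')

math_a = S.Task('math_a', length=1)
math_a += class_a
math_a += teacher
math_b = S.Task('math_b', length=1)
math_b += class_b
math_b += teacher                      # shares the teacher, so it must move

S.use_makespan_objective()
solvers.mip.solve(S)
print(S.solution())                    # e.g. math_a at slot 0, math_b at slot 1
```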
The source code for the entire project is available on the github repository, which includes the interface part built on top of Electron. The complete project should be fully working in 3-4 months, as I do have a deadline :p
## Learning how to touch-type
A few days ago I started learning touch-typing with all my fingers. It's not easy but I think it will help me a lot in the future. I've always been pretty familiar with my keyboard, at least for as long as I can remember. I can type without looking at an acceptable 60 words per minute, but I would love to type more quickly.
Learning a new way of typing makes me type more slowly for a certain amount of time, as you can guess. But it's worth it.
## Working on a content management system
I love building web applications, and I find that most of the applications clients have asked me about can easily be built on WordPress. WordPress is a great platform, but it has a flaw: it's slow, bloated.
So, I started working on my own CMS. I called it Golog for now, and it is built in Golang. A part of it is working already: theme loading, administration panel (written using the Vue framework), but no functionality is implemented yet. That will be for tomorrow.
Here's the GitHub repo: kindlyfire/golog
I'll continue to post any updates on this blog.
## Finding my why
It's not an easy task. I want to start my business, but why do I want to do what I do ? It's a very important step, but also a very hard one. What I want to do is easy to find. How is a little bit harder. Why is difficult.
Is it, to empower the people to do greater things ? Empower small businesses to do greater things ? Help people in this world of madness ? Who knows, I haven't found yet.
And I need a name for my business. Why do we always need to find a name for everything ?
I started reading The 7 Habits of Highly Effective People for the third time. The last time was almost four months ago. It's a really nice book and reading it makes me want to work on my projects.
I'm turning 18 next month, so I'll be able to start my business. I'm working as much as I can at my student job to make money to start it, though I have just enough. My first product is a vacation house manager. I have first-hand experience with that activity so I think it suitable for a first product. It will allow anybody to manage their vacation house(s): edit availability, prices for each “season”, track payments, online booking page, public and embeddable availability calendar, API, widgets, and so forth.
On the tech side, everything is done in Golang, using the Macaron framework. It's blazing fast and stable.
If you're interested in such a product, let me know ! It isn't going to launch right away, but we can always talk: me [AT] kindlyfire.me.
## PoC keylogger in Golang
Yesterday I wrote about me working on a keylogger in Go. I published it on github.com as a module you can import in your application.
The reason I created this keylogger is because I couldn't find one that didn't hard-code the keyboard layout. All the other keyloggers I could find limit their characters to A-Za-z0-9 because of their hard-coded keymap, and they fail to recognize any special characters like \$^£¨ø.
My keylogger design is flawed as well, but not in its key parsing. Like most other keyloggers, I use the GetAsyncKeyState Windows user32.dll function to query the state of the keyboard every few milliseconds. This isn't the best option performance-wise, because Windows exposes hooks for the keyboard, but it is easier to build.
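For anyone curious what that polling loop looks like outside Go, here is the same idea as a few lines of Python ctypes (Windows only; purely a local demo that prints virtual-key codes, with none of the layout handling of the real module):

```python
import ctypes, time

user32 = ctypes.windll.user32

while True:
    for vk in range(0x08, 0x100):                  # virtual-key codes
        if user32.GetAsyncKeyState(vk) & 0x8000:   # high bit: key is down now
            print(hex(vk))
    time.sleep(0.01)                               # poll every few milliseconds
```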
Feel free to use my code !
## Running Kubuntu
You know, in my last post I said I was going to install Solus on my computer, getting rid of Windows. I ended up not installing Solus but installing Kubuntu instead. Don't get me wrong, Solus is amazing, but it lacks some packages and personalisation through the system settings.
I'm now running a full disk-encrypted setup, including all non-primary hard drives I have in my computer. A VPN is yet to come.
Having set up all that, I can now get to why I did all that: because I'm a programmer and Windows gets old really fast. I'm now working on a PoC virus written in Go, made to report to a C&C (command-and-control) server, hopefully over the Tor network. I'll open-source it if it gets anywhere, and do some write-ups.
## Moving to Solus
As a first step to improving my online security, I am moving to a Linux distribution. The distribution I have chosen is Solus, developed from the ground up and receiving updates almost every day.
I will write another post after the installation. | 2019-03-21 18:04:57 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.18581262230873108, "perplexity": 1261.1823823614243}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-13/segments/1552912202530.49/warc/CC-MAIN-20190321172751-20190321194751-00031.warc.gz"} |
https://eeb.uconn.edu/2019/04/11/field-technician/ | # FIELD TECHNICIAN
FIELD TECHNICIAN needed from approximately 15 May through 28 July 2019, for research looking at habitat management implications in grassland habitats at the Joint Base McGuire-Dix-Lakehurst. Duties include conducting point count surveys for grassland bird species, using distance sampling methodologies; making detailed observations; collecting data in the field; data entry and management. Experience identifying birds of the eastern U.S. by sight and sound and conducting point counts required. Target species include Grasshopper Sparrow, Eastern Meadowlark, and Upland Sandpiper. The position requires working independently in the field, working irregular hours including weekends, walking long distances over potentially rough terrain, carrying equipment, and tolerating exposure to variable and sometimes adverse weather and environmental conditions. Proficiency with GPS and range finders preferred. Proficiency with MS Excel and Word software a must. Must be willing and able to interact, coordinate and work well with partners. Salary $1040 –$1200/biweekly, depending on experience. Housing provided. Must have own vehicle and a valid and clean driver’s license. Please send cover letter of interest, resume, and three references as a single PDF document (including email and phone contact info) to [email protected] by May 1, 2019. | 2023-01-31 07:04:05 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.2355860471725464, "perplexity": 14512.563667520988}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-06/segments/1674764499845.10/warc/CC-MAIN-20230131055533-20230131085533-00813.warc.gz"} |
http://121.43.60.238/sxwlxbA/CN/Y2021/V41/I6/1750 | • 论文 •
一类带有两个参数的临界薛定谔-泊松方程的多重解
1. 1 广西科技大学理学院 广西柳州 545006
2 Georg-August-University of Göttingen, Göttingen 37073
• Received: 2020-09-11 Online: 2021-12-26 Published: 2021-12-02
• Corresponding author: Zhipeng Yang E-mail:[email protected];[email protected]
• About the author: Yongpeng Chen, E-mail: [email protected]
• Supported by:
the Basic Ability Improvement Project of Young and Middle-Aged Teachers in Guangxi Universities(2017KY1383);the Basic Ability Improvement Project of Young and Middle-Aged Teachers in Guangxi Universities(2021KY0348)
Multiplicity of Solutions for a Class of Critical Schrödinger-Poisson System with Two Parameters
Yongpeng Chen1(),Zhipeng Yang2,*()
1. 1 School of Science, Guangxi University of Science and Technology, Guangxi Liuzhou 545006
2 Mathematical Institute, Georg-August-University of Göttingen, Göttingen 37073
• Received:2020-09-11 Online:2021-12-26 Published:2021-12-02
• Contact: Zhipeng Yang E-mail:[email protected];[email protected]
• Supported by:
the Basic Ability Improvement Project of Young and Middle-Aged Teachers in Guangxi Universities(2017KY1383);the Basic Ability Improvement Project of Young and Middle-Aged Teachers in Guangxi Universities(2021KY0348)
Abstract:
In this paper, we consider the following critical Schrödinger-Poisson system $$\left\{ \begin{array}{ll} -\Delta u + \lambda V(x)u + \phi u = \mu |u|^{p-2}u + |u|^{4}u, & x \in \mathbb{R}^3, \\ -\Delta \phi = u^2, & x \in \mathbb{R}^3, \end{array} \right.$$ where $\lambda, \mu$ are two positive parameters, $p\in(4, 6)$ and $V$ satisfies some potential well conditions. Using variational arguments, we prove the existence of ground state solutions for $\lambda$ large enough and $\mu>0$, and study their asymptotic behavior as $\lambda\to\infty$. Moreover, using Lusternik-Schnirelmann theory, we obtain the existence of multiple solutions if $\lambda$ is large and $\mu$ is small.
• O175.2 | 2022-01-25 16:27:52 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8462035059928894, "perplexity": 6272.268217736408}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-05/segments/1642320304859.70/warc/CC-MAIN-20220125160159-20220125190159-00024.warc.gz"} |
https://freenode.irclog.whitequark.org/lisp/2018-12-15 | jackdaniel changed the topic of #lisp to: Common Lisp, the #1=(programmable . #1#) programming language | <http://cliki.net/> <https://irclog.whitequark.org/lisp> <http://ccl.clozure.com/irc-logs/lisp/> | SBCL 1.4.5, CMUCL 21b, ECL 16.1.3, CCL 1.11.5, ABCL 1.5.0
Lycurgus has joined #lisp
dmiles has quit [Ping timeout: 252 seconds]
logicmoo has joined #lisp
Kundry_Wag has quit [Remote host closed the connection]
Kundry_Wag has joined #lisp
ah_ has joined #lisp
Kundry_Wag has quit [Ping timeout: 268 seconds]
dacoda has quit [Ping timeout: 268 seconds]
ah_ is now known as akoana
<griddle> Does anyone know of a good reading source for how lisp JIT compilers or ahead of time compilers work? I'm mostly interested in how scoping is implemented
<Bike> what does scoping have to do with jit?
<griddle> dynamic scoping
<Bike> i still don't understand.
<Bike> (i'm a compiler dev)
<griddle> I guess, when you define what is essentially a "curried" function right?
<Bike> No?
<griddle> (lambda (x) (lambda (y) (+ x y)))
<griddle> how does the inner code know about the outer x in a compiled lisp?
<Xach> griddle: the book Lisp in Small Pieces goes into great detail on the topic
<Bike> that would actually be a lexical binding, not special.
<Xach> it starts simple and slow and gets more complex and fast in a nice didactic way
<Bike> The most obvious way is that the (lambda (y) ...) produces a "closure", which stores the binding of x along with the function.
<Bike> LiSP is indeed good for this.
<griddle> so thats just some data structure on the stack or something?
<Bike> not necessarily on the stack, but yes.
Lycurgus has left #lisp ["Deus Ex"]
nchambers has joined #lisp
<griddle> and you'd have some kind of feature to lookup information in that structure? Or would you know ahead of time where in the structure that value lives
sjl has quit [Ping timeout: 250 seconds]
<Bike> If it's compiled, no reason not to do the latter.
logicmoo has quit [Ping timeout: 250 seconds]
flazh has quit [Ping timeout: 245 seconds]
smokeink has joined #lisp
flazh has joined #lisp
smokeink has quit [Remote host closed the connection]
Mr-Potter has quit [Quit: Leaving]
<aeth> griddle: Naively, there could just be a defstruct to create a structure-object for every scope except the global one. Gensym its name to avoid the problem with redefinition.
<jasom> griddle: In a compiled implementation, a lexical binding is just a mapping from an identifier to a location (i.e. lexical bindings do not need to exist in a meaningful way at runtime, only at compile time).
<aeth> (I say structure-object because they're simpler and they can have inline accessors, or something like that. i.e. the location is known to the compiler at compile time)
<aeth> (well, at least the offset, since if it's on the heap the GC can move it around)
<Bike> i think if griddle doesn't know what a closure is they might be unfamiliar with defstruct.
<griddle> I've implemented a lisp interpreter in the past, I'm just looking into how things like lexical scoping is done in a JIT if you can define different variable names at runtime
<aeth> Bike: The parens are for everyone else
<jasom> what usually happens is compilers have an expensive binding implementation that works in all cases and lives on the heap, but then a fast binding implementation for when a variable is never closed over (or is only closed over by lambdas that don't escape the heap).
<Bike> griddle: i don't think being jit versus aot matters here.
<jasom> griddle: for the most part, compiled implementations of lisp do not allow manipulation of lexical bindings at runtime
<griddle> oh ok
<jasom> griddle: the standard itself provides no such mechanism anyways
<jasom> dynamic bindings have a defined lifetime at runtime, and those can be accessed via symbol-value.
<griddle> yeah in past I've just had a really slow implementation with a recursive scope lookup of dynamic bindings
<griddle> I never really gave a fixed time approach any thought
<jasom> griddle: for variables that are never closed over, the value can be stored in a register. For variables that are closed over by lambdas that don't escape the dynamic scope, it can be the stack. For variables that are closed over by lambdas that might escape the dynamic scope, they must be stored on the heap.
<jasom> griddle: I think you have dynamc and lexical backwards. Dynamic bindings are trivial to implement with global variables and unwind-protect.
<jasom> (at least in the single-threaded case)
<griddle> I think I do, yeah
<jasom> griddle: (let ((*x* 1)) (defun foo () (print *x*))) (let ((*x* 2)) (foo)) ;; prints 2
<jasom> ^^^ this is dynamic binding
<jasom> griddle: (let ((x 1)) (defun foo () (print x))) (let ((x 2)) (foo)) ;; prints 1
<jasom> ^^^ this is lexical binding
<griddle> yeah ok makes sense
<aeth> jasom: Absolutely (re: expensive and fast implementations). The variable isn't even guaranteed to exist unless (debug 3)
<aeth> On the plus side, that means that there's no (runtime) cost to naming intermediate steps if you want to
<jasom> griddle: if you're familiar with compilers for less dynamic languages, all the approaches used there for temp allocation work just fine for any variables that are never closed over.
<jasom> anyway I have to leave
elfmacs has joined #lisp
<griddle> awesome thanks for all the help
<verisimilitude> Are you familiar with the concept of Phantom Stacks, griddle?
<griddle> looking at the paper now. tldr?
shifty has joined #lisp
<verisimilitude> Well, I enjoy the colorful phrasing RMS used, comparing the stack to a government agency.
<griddle> aptly, lol
<verisimilitude> Put simply, treat the stack as a stack until it can no longer be treated as a stack, in which case it was never really a stack at all.
<griddle> so treat it as a "heap" after a certain point and find a new one?
<verisimilitude> Treat it as a heap if the stack discipline would be violated.
<griddle> very interesting
arescorpio has joined #lisp
_whitelogger has joined #lisp
graphene has quit [Remote host closed the connection]
graphene has joined #lisp
thijso has quit [Ping timeout: 246 seconds]
arescorpio has quit [Remote host closed the connection]
stux|RC has quit [Ping timeout: 246 seconds]
permagreen has joined #lisp
beach has quit [Read error: Connection reset by peer]
wglb has joined #lisp
wglb has quit [Remote host closed the connection]
wglb has joined #lisp
flazh has quit [Ping timeout: 268 seconds]
wooden has quit [Read error: No route to host]
elderK has quit [Quit: WeeChat 1.9]
notzmv has quit [Ping timeout: 244 seconds]
elderK has joined #lisp
flazh has joined #lisp
zmt01 has joined #lisp
gendl has quit [Ping timeout: 252 seconds]
zmt00 has quit [Ping timeout: 264 seconds]
rnmhdn has joined #lisp
rumbler31 has joined #lisp
atgreen has quit [Remote host closed the connection]
atgreen has joined #lisp
nchambers has quit [Quit: WeeChat 2.2]
dmiles has joined #lisp
notzmv has joined #lisp
PyroLagus has quit [Quit: ZNC / WeeChat]
arescorpio has joined #lisp
marusich has joined #lisp
PyroLagus has joined #lisp
pjb has quit [Ping timeout: 268 seconds]
jcowan has joined #lisp
<jcowan> Can someone give me a brief tutorial on restarts?
khisanth_ has quit [Ping timeout: 246 seconds]
<jcowan> I understand the idea of conditions with restarts, such that the condition-catcher can choose which restart to invoke, do a snippet of code set up by the signaler, and then either return or do a non-local exit
<jcowan> but I don't understand what the rest of the restart API is for.
robdog has quit [Ping timeout: 250 seconds]
dddddd has quit [Remote host closed the connection]
resttime has joined #lisp
khisanth_ has joined #lisp
<Bike> so, what, the :interactive and the :report and the :test?
equwal has joined #lisp
dmiles has quit [Ping timeout: 250 seconds]
logicmoo has joined #lisp
orivej has quit [Ping timeout: 244 seconds]
<jcowan> Bike: I was thinking of restart-bind and friends, the ones that push the restarts into the dynamic environment
<jcowan> as opposed to attaching them directly to a condition
<Bike> well, probably restart-case expands to restart-bind, actually. the condition association is a separate thing.
pjb has joined #lisp
<Bike> in sbcl at least, with-condition-restarts just puts the condition on a list stored within the restart.
<Bike> (so you can't get to the restart from the condition, e.g.)
<rnmhdn> any thoughts on a 1:30 h interesting talk related to functional programming?
<Bike> usually handler-bind and restart-bind just shove the thing on an internal dynamically bound list
<Bike> and then invoke-restart or whatever just calls a thunk
<Bike> a thunk that, for restart-case, will do a nonlocal exit
<Bike> you can probably suss it out from the macroexpansion, really.
<jcowan> Yes, I get that that's what it *does*, but what is it *for*?
<Bike> oh, sorry, i misunderstood.
<Bike> restart-bind i've never seen anyone use. restart-case though...
<jcowan> np, I'm not being very clear, I know.
<Bike> it's basically for when you want to recover from an error, i guess
<Bike> like...
<Bike> you can do an ASSERT, and when an assert fails it'll pop up the debugger and ask for new values to use
<Bike> so that you can then pass the restart
<Bike> that's kind of nice sometimes
<Bike> then it'll just resume after the assert.
<Bike> (with those new values)
<jcowan> But how does the caller know which restarts the signaler can handle at all?
<no-defun-allowed> if i'm destructuring in a loop for clause, can i give the destructured values of-types?
<jcowan> no-defun-allowed: Probably with the
<no-defun-allowed> true
elfmacs has quit [Ping timeout: 252 seconds]
milanj has quit [Quit: This computer has gone to sleep]
SaganMan has joined #lisp
akoana has quit [Quit: Leaving.]
arescorpio has quit [Quit: Leaving.]
gravicappa has joined #lisp
hectorhonn has joined #lisp
<hectorhonn> good morning
hectorhonn_ has joined #lisp
hectorhonn has quit [Ping timeout: 256 seconds]
hectorhonn_ has quit [Client Quit]
hectorhonn has joined #lisp
jkordani has quit [Read error: Connection reset by peer]
griddle has quit [Quit: The Lounge - https://thelounge.github.io]
griddle has joined #lisp
<Bike> jcowan: i don't understand the question
Arcaelyx has joined #lisp
akoana has joined #lisp
hectorhonn has quit [Ping timeout: 256 seconds]
wusticality has joined #lisp
nchambers has joined #lisp
lose has joined #lisp
pierpal has quit [Quit: Poof]
pierpal has joined #lisp
<jcowan> I push some restarts on the stack and invoke some code that eventually calls assert. The user, or I on the user's behalf, chooses a restart, its code is executed, and the assert returns. But presumably the caller of the assert was doing that because some assumption that it depends on is being violated, and what is the caller to do when unexpectedly he gets control again?
<jcowan> Bike ^^
Arcaelyx has quit [Read error: Connection reset by peer]
atgreen has quit [Ping timeout: 240 seconds]
<Bike> i think you're misunderstanding something. control won't return from the assert unless the restart was set up in the assert.
Arcaelyx has joined #lisp
<Bike> if you have like, (restart-case (... (assert ...) ...) (foo ...)) and select the foo restart when the assert fails, control returns from the restart-case. the assert is abandoned.
<Bike> i'm pretty sure. little tired admittedly
<jcowan> Hmm. Well, thanks, maybe I can figure it out from there.
atgreen has joined #lisp
<Bike> with restart-bind you can have a restart that doesn't transfer control like that or at all, but i honestly have no conception of why anyone would want that, so i can't help there.
snits has quit [Remote host closed the connection]
lose has quit [Ping timeout: 250 seconds]
<jcowan> I'll study the example at clhs restart-case
phax has joined #lisp
<verisimilitude> You may be interested in the similarities between Common Lisp condition handling and PL/I and Multics, jcowan, if that's something you already find interesting.
phax has left #lisp [#lisp]
graphene has quit [Remote host closed the connection]
lose has joined #lisp
makomo has quit [Ping timeout: 240 seconds]
graphene has joined #lisp
Selwyn has quit [Ping timeout: 250 seconds]
<jcowan> verisimilitude: I haven't looked at PL/I condition handling in years
<jcowan> It appears to be plain exception handling with resumption semantics, like Scheme or Mesa (both of which I have used). But restarts seem to be only in CL and Dylan.
rnmhdn has quit [Ping timeout: 250 seconds]
nchambers has quit [Quit: WeeChat 2.2]
akoana has left #lisp [#lisp]
notzmv has quit [Ping timeout: 272 seconds]
notzmv has joined #lisp
vlatkoB has joined #lisp
Bike has quit [Quit: Lost terminal]
slyrus1 has joined #lisp
Oddity has joined #lisp
marusich has quit [Ping timeout: 250 seconds]
Lycurgus has joined #lisp
lose has quit [Ping timeout: 264 seconds]
dale has joined #lisp
_whitelogger has joined #lisp
Lycurgus has quit [Ping timeout: 244 seconds]
akoana has joined #lisp
yvy has joined #lisp
<akoana> (quit
akoana has quit [Quit: Leaving]
hectorhonn has joined #lisp
dale has quit [Quit: dale]
<hectorhonn> the primary unit of abstraction in java is classes. what is the primary unit of abstraction in lisp?
<beach> It is called a "protocol" and it is not a first-class object.
<hectorhonn> protocol?
hectorhonn_ has joined #lisp
<hectorhonn_> argh unstable internet connection
<hectorhonn_> beach: ok, let me read that up. thanks!
hectorhonn has quit [Ping timeout: 256 seconds]
<splittist> good morning
hectorhonn_ has quit [Client Quit]
<beach> Hello splittist.
elfmacs has joined #lisp
hectorhonn has joined #lisp
<hectorhonn> beach: i see its a chapter 5. looks like a good book, how do i get to the other chapters?
<beach> You wait until I finish the book and then you buy it from Amazon.
<no-defun-allowed> beach: irrelevant, but how do you typeset those nice looking function/class/gf description lines in your documentation on closos and sicl? Is it something you made yourself or is it a common LaTeX library?
<hectorhonn> :O
<hectorhonn> you run the site?
<beach> hectorhonn: Yes, metamodular is my site.
<hectorhonn> beach: impressiveee
themsay has joined #lisp
<beach> no-defun-allowed: I think it is just \begin{verbatim}...\end{verbatim}
<beach> no-defun-allowed: Oh, wait.
<beach> no-defun-allowed: I am using a small library that came with the CLIM documentation called specmacros.tex
<beach> no-defun-allowed: You can find it in the SICL repository.
<beach> I suppose Scott McKay wrote it.
<hectorhonn> beach: erm, what's the take away from chapter 5? i'm don't feel like i've learnt anything
<beach> Sorry to hear that.
<beach> Let me see if I can find an example...
<beach> Look at appendix A in this document: http://metamodular.com/cluffer.pdf
<hectorhonn> beach: ok..
<hectorhonn> beach: so, the primary unit of abstraction in lisp is also classes? and generic functions?
<no-defun-allowed> beach: thanks, I'll go take a look at it.
<hectorhonn> beach: for example, suppose i want to write a parser manually, as an exercise. should i represent the AST as a defclass?
<beach> Yes.
<hectorhonn> beach: instead of a list?
<beach> You are talking representation.
<beach> The very idea of a protocol is to avoid talking about representation.
<beach> Abstraction means that you explicitly avoid talking about representation.
<hectorhonn> isn't abstraction the same as representation?
<beach> It's the very opposite.
<beach> In a protocol, you talk about abstract types and abstract operations.
<no-defun-allowed> hectorhonn: abstraction is how you hide representation.
<verisimilitude> I'd think symbols and lists and functions and that manner of thing are the primary abstraction of Common Lisp.
<beach> verisimilitude: You would think wrong.
<verisimilitude> You could boil any language down to "protocol" as the primary unit of abstraction, if you wanted to.
<verisimilitude> That's just boring.
<hectorhonn> sorry, coming from java oop shop, haha
<verisimilitude> Common Lisp has an OO system called CLOS, hectorhonn, but you can rather entirely ignore it if you feel like it.
<hectorhonn> let me think of a way to rephrase
<beach> verisimilitude: That's not very good advice.
<Inline> morning
<beach> Hello Inline.
<Inline> heya beach :)
<verisimilitude> Do you know what no good Common Lisp programmer can ignore, beach?
<verisimilitude> It's macros.
<verisimilitude> Macros are more important to Lisp than any OO system.
<verisimilitude> Macros rely on the homoiconic nature of Lisp, related to its lists. Lists are important for abstraction.
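To make the "protocol" idea in this exchange concrete, here is a minimal illustrative sketch (all names are invented for the example): the generic functions name abstract operations, and any representation at all can sit behind them.

    ;;; A tiny protocol: abstract operations, no commitment to representation.
    (defgeneric token-kind (token)
      (:documentation "Return a keyword naming the kind of TOKEN."))
    (defgeneric token-text (token)
      (:documentation "Return the source text of TOKEN as a string."))

    ;;; One possible representation behind the protocol -- it could just
    ;;; as well be a structure or a plain list.
    (defclass identifier-token ()
      ((%text :initarg :text :reader token-text)))

    (defmethod token-kind ((token identifier-token))
      :identifier)

Callers written against TOKEN-KIND and TOKEN-TEXT never see the slot layout, which is the distinction beach draws between abstraction and representation.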
rnmhdn has quit [Ping timeout: 268 seconds]
ggole has joined #lisp
<hectorhonn> i guess what i really wanted to ask was, for example in java, i would write a XyzToken class for each type of token parsed, with a common base class Token, maybe with some common and specific operations. Then another class to represent a list of Tokens. so i have somehow represented (or abstracted?) this idea of "token" in the program,
milanj has joined #lisp
<hectorhonn> In haskell there would be a record sum type, and several functions that can act on the record. then another record type to represent a list of tokens
<hectorhonn> What would be a good way to do it in CL?
notzmv has quit [Ping timeout: 246 seconds]
<beach> Those are representations, not abstractions.
yvy has quit [Read error: Connection reset by peer]
<jackdaniel> hectorhonn: Common Lisp doesn't give you a recipe how to represent your program
<jackdaniel> (i.e it is less opinionated how you should write your program than Java or Haskell)
<hectorhonn> ok, i guess i meant representation, haha
<beach> In Common Lisp, you would choose whatever is appropriate in terms of performance. It could be symbols, lists, hash tables, arrays, standard objects, whatever.
<hectorhonn> jackdaniel: i see, then what options does CL give me?
<jackdaniel> so while having "the one way" how to do things is comfortable it is also limiting
<verisimilitude> Just use a list of lists or structures or what have you, hectorhonn.
notzmv has joined #lisp
<jackdaniel> you may arrange your program around protocols (like in haskell), you may create a class hierarchy with functions operating on it (like in Java), or you may write your own dsl with macros (like verisimilitude suggests). I think that finding the programming style which suits you best is a hard but worthwhile endeavour
<verisimilitude> Your main options are making a class for tokens which you instantiate, making a structure representing tokens, or just using an informal representation, such as a list of whatever you need, hectorhonn.
<verisimilitude> Until you have a good idea of what you're doing, the list of whatever you need option is the easiest and most flexible.
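For comparison, the two lighter-weight options mentioned above might look like this (again, names invented for illustration):

    ;;; A structure: fixed slots, fast accessors, less runtime flexibility.
    (defstruct (token (:constructor make-token (kind text)))
      kind
      text)

    ;;; An informal plist representation: maximally flexible while the
    ;;; design is still in flux.
    (defun list-token (kind text)
      (list :kind kind :text text))

    (defun list-token-kind (token)
      (getf token :kind))

    ;; (make-token :identifier "foo") and (list-token :identifier "foo")
    ;; carry the same information; only the representation differs.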
angavrilov has joined #lisp
<jackdaniel> i.e there are libraries in Common Lisp which are referentially transparent (something often praised among programmers who fancy functional programing), but there are also many libraries depending heavily on OO
rumbler31 has quit [Remote host closed the connection]
<jackdaniel> minion: tell hectorhonn about paip
<minion> hectorhonn: paip: Paradigms of Artificial Intelligence Programming. More about Common Lisp than Artificial Intelligence. Now freely available at https://github.com/norvig/paip-lisp
<jackdaniel> this book presents a nice overview of techniques you could use with Common Lisp
<jackdaniel> minion: tell hectorhonn about pcl
<minion> hectorhonn: have a look at pcl: pcl-book: "Practical Common Lisp", an introduction to Common Lisp by Peter Seibel, available at http://www.gigamonkeys.com/book/ and in dead-tree form from Apress (as of 11 April 2005).
<Inline> where is AIM now ?
<Inline> the successor to paip
<jackdaniel> and this book is more concentrated about mainstream (these days) style of CL programming heavily depending on CLOS (but starts from basics)
<jackdaniel> I don't know what AIM is (never heard of it)
<verisimilitude> I assume he means AOL Instant Messager.
<verisimilitude> That's the only AIM I'm aware of.
<hectorhonn> haha... ever get that feeling when trying to start drawing something on a blank canvas? CL gives me that no restriction feeling
<verisimilitude> Lisp is best, I think, when you don't quite know what you're doing yet, hectorhonn.
<jackdaniel> hectorhonn: if you decide to listen to my advice, I'd recommend starting reading PAIP
<verisimilitude> There are plenty of better languages if you know exactly what you're going to be doing, but Lisp is good for exploration.
<jackdaniel> since it provides many case studies analyzing and evolving code bit by bit towards the sketched goal
ealfonso has quit [Disconnected by services]
ealfonso has joined #lisp
<verisimilitude> Also, hectorhonn, I noticed I've yet to recommend you any reading material, even in our last discussion, so I'll recommend something you already have.
<hectorhonn> ok, so combining advice from beach, jackdaniel, and verisimilitude, i should start out with lists, explore different ways to find out which style i like best, and remember that i can use *anything* in lisp, there is no idiomatic way like in haskell or java. that right?
<verisimilitude> Press C-h i in your Emacs and go to the "Emacs Lisp Intro".
<verisimilitude> Read that for a nice beginner's introduction to Lisp in general.
<verisimilitude> There's a saying that if you give twenty different Lisp programmers something to do, you'll get twenty different programs, hectorhonn.
<hectorhonn> jackdaniel: i'm halfway through pcl, yet to start on on paip
makomo has joined #lisp
<verisimilitude> The related saying is if you give twenty different Java programmers something to do, you'll get twenty copies of the same program.
<jackdaniel> hectorhonn: another way of studying (which proves to be more engaging for some people) is to contribute to open source projects
<jackdaniel> then you have some conventions already in place, so you are not left without any clues how to do things
<hectorhonn> verisimilitude: i don't see that option, only option for slime
<verisimilitude> Just type in M-x info , then.
<hectorhonn> verisimilitude: an absolute enterprise nightmare
<hectorhonn> verisimilitude: yeah same
<hectorhonn> (oh is your nickname long to type! :D)
<hectorhonn> jackdaniel: any beginner friendly ones in CL?
<verisimilitude> Use tab completion, hectorhonn.
<ealfonso> I have a non-lisp source file vars.conf which I need to read from lisp. using a path relative to (uiop:current-lisp-file-pathname) works on the repl, but fails with no-such-file when the lisp source file is in ~/.cache/common-lisp: /home/USER/.cache/common-lisp/sbcl-1.3.14.debian-linux-x64/home/USER/git/path-to-file/vars.conf. How can I either make the compiler include the vars.conf next to the compiled source files, or refer to the
<ealfonso> original source file pathname?
<jackdaniel> I don't see much value in belittling programmers of other languages (or other programmers in general) [re java programmers]
<verisimilitude> I didn't write the saying, jackdaniel.
<hectorhonn> "The related saying is if you give twenty different Java programmers something to do, you'll get twenty copies of the same program" Actually this is exactly what a language should strive for, no? clear semantics
<jackdaniel> repeating it is enough
<hectorhonn> verisimilitude: OMG!
<verisimilitude> Well, feel free to not see the value in it.
<hectorhonn> verisimilitude: i feel really stupid now. hahaha
<jackdaniel> ealfonso: try asdf:system-relative-pathname function
<verisimilitude> What, you just got it, hectorhonn?
<hectorhonn> verisimilitude: yeah, the tab thing
<jackdaniel> that was euphemism, but whatever
<hectorhonn> i've been typing nicknames since joining lisp
<verisimilitude> Well, now you know.
<hectorhonn> verisimilitude: thanks!
<verisimilitude> It's no issue.
<ealfonso> jackdaniel thanks
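For reference, jackdaniel's suggestion looks like this in use (the system name and file name here are placeholders):

    ;; Resolves relative to the source directory of the named ASDF system,
    ;; not relative to the compiled-file cache:
    (asdf:system-relative-pathname :my-system "vars.conf")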
<jackdaniel> hectorhonn: cffi maintainers are very nice people to work with. from other projects you could help with mcclim and ecl (I'm working on these); but I think that the best advice is to look for something you find useful and fun to work with
<jackdaniel> slime project is also something benefitting whole community, so even small improvements have big impact
<hectorhonn> jackdaniel: i see. anyone happen to know mmontone, the maintainer for djula?
<verisimilitude> If we're recommending Common Lisp projects now, I'd recommend you at least start with projects that are implemented in pure standard Common Lisp, hectorhonn.
<hectorhonn> there are impure standard Common Lisp? i thought common lisp is a dialect itself
dacoda has joined #lisp
<jackdaniel> hectorhonn: some projects depend on foreign function interface
<jackdaniel> (and shared libraries written in other languages)
<hectorhonn> jackdaniel: oh, i see
<verisimilitude> Put simply, there are plenty of Common Lisp programs or libraries that either only work on one implementation, usually SBCL, or use the CFFI and offload all of the work onto something written in a different language, both of which I believe should be avoided.
<jackdaniel> laters \o
<hectorhonn> verisimilitude: i must say i agree. CL seems capable enough for most tasks
makomo has quit [Ping timeout: 240 seconds]
<verisimilitude> See you later then, jackdaniel.
<verisimilitude> I was writing a program with an interactive terminal interface and did take a look at the available libraries.
<verisimilitude> Guess what I noticed, hectorhonn.
<hectorhonn> verisimilitude: what did you notice?
<verisimilitude> All of these were just bindings to other libraries, most often Ncurses.
heisig has joined #lisp
<verisimilitude> You have Common Lisp, this nice language, and yet these libraries risk memory leaks for what amounts to printing text.
<hectorhonn> verisimilitude: that's a pragmatic decision i guess, ncurses library already exists and can interface easily with the os
<verisimilitude> Not only that, but it's so much more bothersome to load a Common Lisp library that wants an entire C library with it.
<verisimilitude> So, I wrote my own library for this.
<hectorhonn> verisimilitude: what is the library?
<verisimilitude> You can see it here, if you're interested:
<hectorhonn> wow, you all have your own websites?
<verisimilitude> I do; I can't claim for the rest of them.
<hectorhonn> verisimilitude: no github? solo project?
<verisimilitude> I refuse to use Github and I'm the sole author, yes.
<verisimilitude> As a tangent, Github is awful, hectorhonn, and you should avoid it wherever possible.
<verisimilitude> I'd much rather a company need to bother my VPS provider than just email Github and demand something be taken down, which happens.
<hectorhonn> verisimilitude: has your stuff been taken down?
<verisimilitude> Mine hasn't, no.
<hectorhonn> verisimilitude: i see. that's good to hear. stuff taken down probably violates copyright
<verisimilitude> Here's an example:
<verisimilitude> It would be much harder to abuse copyright for taking down things you don't like, if so many people weren't all in a centralized location.
<hectorhonn> i see. the problem is dmca, not github
<hectorhonn> but yeah, it's good to have a backup site
makomo has joined #lisp
notzmv has quit [Quit: WeeChat 2.3]
random-nick has joined #lisp
dacoda has quit [Remote host closed the connection]
marusich has joined #lisp
dacoda has joined #lisp
milanj has quit [Quit: This computer has gone to sleep]
notzmv has joined #lisp
wigust has joined #lisp
wigust- has quit [Ping timeout: 250 seconds]
robdog has joined #lisp
makomo has quit [Ping timeout: 240 seconds]
elfmacs has quit [Ping timeout: 250 seconds]
_whitelogger has joined #lisp
rumbler31 has quit [Ping timeout: 250 seconds]
nckx has quit [Quit: Updating my GNU GuixSD server — gnu.org/s/guix]
nckx has joined #lisp
milanj has joined #lisp
nirved is now known as Guest64238
Guest64238 has quit [Killed (adams.freenode.net (Nickname regained by services))]
nirved has joined #lisp
<pjb> verisimilitude: lists are important for abstraction, only if you use them to represent everything. If you were ready to use CLOS classes for everything (like in Smalltalk, say), then classes would become an abstraction device. Until that, they're just a representation device. It can be used to represent external objects, and thus abstract _them_, but not as an internal abstraction, since you could also have structures, vectors,
<pjb> functions, or other representations.
<pjb> verisimilitude: also, there is a way to distinguish the representation from the abstraction. This week, I had to debug a bug in C (it would be the same in lisp) where I had a typedef struct { void* data; int size; } buffer; with a function buffer_new(int size) and a function buffer_free. The call to free failed on invalid pointer even though it was allocated in buffer_new.
<pjb> verisimilitude: the reason was that some client code used &buffer->data and incremented the pointer.
<pjb> verisimilitude: the solution was to stop using this representation (the structure) and replace it with an abstraction: typedef struct buffer; void* buffer_data(buffer* b); int buffer_size(buffer* b); Then the client code could not increment the buffer data pointer. That representation was abstracted away, code became safe and correct.
<ebrasca> What is "#++" in cl ?
<beach> clhs #+
<pjb> Tests whether (member :+ *features*) and if true, reads the following form
<Inline> the presence
<Inline> and absence is #-
<pjb> Otherwise, it skips over the following s/form/sexp/.
rnmhdn has joined #lisp
<ebrasca> OK
<Inline> for example #+clim
<ebrasca> it is literaly skiping + ?
<Inline> ?
<Inline> no no no
<Inline> oh man
<Inline> it is skipping the next form if the #+ fails
<ebrasca> "#+" and "+"
nirved is now known as Guest24972
nirved has joined #lisp
<ebrasca> mmm skip some code in all cl implementations?
<Inline> ebrasca: say you want to eval some form conditionally
<ebrasca> I have read macro with name "with-with" .
<Inline> ebrasca: say you want to eval (startup-my-editor) only when clim system was loaded
<Inline> ebrasca: then you can say #+clim (startup-my-editor)
<ebrasca> I have always read something like #+sbcl or #+genera but never #++.
<ebrasca> 2 +
<Inline> ebrasca: now if clim was not loaded and hence not pushed onto the *features* list then it will skip that form
Guest24972 has quit [Ping timeout: 252 seconds]
<Inline> ebrasca: depends, if it is pushing + onto *features* itself it can check for it too
<Inline> ebrasca: + can be anything
<ebrasca> So it always ignores?
<pjb> verisimilitude: there's gitlab.com instead. Or framagit.org
<Inline> ebrasca: no, it's a feature
<Inline> ebrasca: it checks for a feature keyword named +
<Inline> ebrasca: you get it ?
<ebrasca> No
<ebrasca> How do you give feature + to #++ ?
<pjb> (setf *features* (delete :+ *features*)) (read-from-string "(a #++ b c)") #| --> (a c) ; 11 |# (pushnew :+ *features*) (quote (a #++ b c)) #| --> (a b c) |#
<pjb> In #++ you already gave the feature :+ to #+ !
hectorhonn has quit [Quit: Page closed]
<pjb> To give the feature + (assuming standard readtable and packages, ie. cl:+) to #+ you would write #+cl:+
<Inline> so b was skipped because there was no + in the *features*
<pjb> Because there was no :+
<Inline> and in the second example b was not skipped because + was found in *features*
<pjb> #+ reads the feature expression in the keyword package.
<pjb> Not +, :+
<Inline> ok
<Inline> forgive my incompleteness sirrah!
<pjb> #+#.(cl:if (cl:= (cl-user::version) 2) '(:and) '(:or)) (new-function) calls new-function only if (version) returns 2…
<ebrasca> I think I undestand it now.
<pjb> More simply: #+sbcl x #+ccl y #-(or sbcl ccl) (error "not implemented for ~S" (lisp-implementation-type))
<ebrasca> wow I think I can make something like #+little-endian
resttime has quit [Quit: Leaving]
nopolitica has joined #lisp
<pjb> (find "LITTLE" *features* :test (lambda (x y) (search x (string y)))) #| --> :little-endian-target |# in ccl
<pjb> in sbcl and ecl we have :LITTLE-ENDIAN, and in abcl and clisp nothing. (those work on VMs that abstract away the endianness. eg. clisp always writes binary files in little endian, even on big-endian architectures, so clisp binary files can be compatible between clisp on different platforms)
<pjb> In any case, if you depend on this flag, your code is in error.
<pjb> (it's a big code-smell).
<pjb> bbl
<ebrasca> I have read it in my *features*
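A small sketch pulling pjb's examples together -- the #+/#- tests are resolved at read time against *FEATURES*, so on any given implementation only one branch is ever seen by the compiler:

    (defun implementation-name ()
      #+sbcl "SBCL"
      #+ccl "Clozure CL"
      #-(or sbcl ccl) (error "not implemented for ~S"
                             (lisp-implementation-type)))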
dacoda has quit [Ping timeout: 250 seconds]
Bike has joined #lisp
<sindan> where can I read up on what everything in *features* means?
<jackdaniel> sindan: you can't, because they are not standardized and they are pushed from different places
heisig has quit [Remote host closed the connection]
<jackdaniel> i.e you may find a description in the implementation manual chapter about weak references
<jackdaniel> or in the library documentation
heisig has joined #lisp
<jackdaniel> there is a library which aims at partial standardization called trivial-features
<sindan> I see. So, trivial-features and starting from implementation docs if they exist. I'm guessing there will be acceptable documentation for sbcl.
<ebrasca> jackdaniel: What about mezzano , GNU hurd and others?
<ebrasca> And in cpus why they don't add ppc64le or ppc64be?
<jackdaniel> ebrasca: endianess is a separate feature
<jackdaniel> it is listed first
<sindan> that library is pretty green
<jackdaniel> regarding mezzano, I think that such contribution would be accepted in the library, but I don't really know; ask luis
<ebrasca> Why linux is in OPERATING SYSTEM section?
<jackdaniel> and as of gnu hurd: is there lisp which actually works on gnu hurd as of today? and if yes, I suspect gnu hurd being a posix system, so this is explained that "sysname" information from uname is used
<ebrasca> Linux is one kernel.
<jackdaniel> I recommend reading paragraph before "Examples:"
elderK has quit [Ping timeout: 250 seconds]
<ebrasca> Linux is not one operating system .
elderK has joined #lisp
<jackdaniel> if you had read the paragraph as I have suggested, you would know, that on POSIX systems the "sysname" information from uname(3) should be used
buhman has quit [Ping timeout: 252 seconds]
<ebrasca> ja
<jackdaniel> and on linux this is, indeed, "linux"
buhman has joined #lisp
<ebrasca> I think correct is GNU/Linux
themsay has quit [Ping timeout: 246 seconds]
ealfonso has quit [Ping timeout: 250 seconds]
themsay has joined #lisp
scymtym has quit [Ping timeout: 240 seconds]
themsay has quit [Ping timeout: 268 seconds]
atgreen has joined #lisp
<jackdaniel> everyone is entitled to think what they desire to think, fact remains that this works exactly as specified, sysname value is "Linux" on systems powered by linux kernel
<jdz> Not all operating systems that use Linux kernels use GNU userland utilities.
<jackdaniel> (most notable example would be Android)
<ebrasca> GNU/Linux , Android are ok . Linux is not ok.
<jdz> It is OK, when talking about system interfaces.
rnmhdn has quit [Ping timeout: 250 seconds]
<jdz> There's nothing GNU in POSIX or Linux syscalls.
* jackdaniel resigns from trying to explain why it is irrelevant given how this *features* entry is specified in the document (with a disappointment)
<ebrasca> Then unix
<jdz> Also, Alpine Linux is a Linux distribution, and the GNU part is optional.
<jdz> No, AIX is also Unix.
themsay has joined #lisp
<jdz> ebrasca: Not sure what you are trying to accomplish here, but this is very off-topic, so I also resign from further discussion.
<ebrasca> jdz: OK
themsay has quit [Read error: Connection reset by peer]
<ebrasca> jdz: I am learning cl-vulkan. I am thinking if I can take care of cl-vulkan.
themsay has joined #lisp
Arcaelyx has quit [Ping timeout: 250 seconds]
<jdz> Is the project abandoned?
<ebrasca> I think |3b| have abandon it 3 years ago.
<ebrasca> Or 1-3
<ebrasca> years ago
makomo has joined #lisp
<ebrasca> jdz: If I remember correctly I have written with |3b| and he said he abandoned it.
<ebrasca> jdz: yea I have find it in my log. "<|3b|> ebrasca: no idea, haven't tried to run it since i stopped working on it :/"
vlatkoB_ has joined #lisp
<pjb> sindan: for example, you can see the doc of *my* features for *this* project at: https://framagit.org/patchwork/patchwork/blob/master/notes.txt#L326
<pjb> sindan: there's also https://cliki.net/features (please, update it!)
<elderK> Holy craaaaaap. A month since I last used C or C++!
<|3b|> cl-vulkan isn't "abandoned", but it also isn't currently being worked on, so if you want to use it, you will probably need to add to it
<elderK> |3b|: ;) you could just say it's "mature" :P :D
<elderK> "It's not dead. Just mature."
<elderK> Or "sleeping" :)
<|3b|> elderK: that would sort of require it to be able to do anything first :p
vlatkoB has quit [Ping timeout: 240 seconds]
<elderK> |3b|: It's reminiscent of GLAD :)
<|3b|> "sleeping" maybe, that doesn't imply any sort of completion :)
* |3b| intends to work on it more, it just isn't likely to make it to the top of my priority stack any time soon
<elderK> :P It is taking a nap :D :)
<elderK> Fair enough :)
<ebrasca> Last Commits on Apr 14, 2016 from |3b|
<pjb> You can make a CL implementation run directly on the Linux kernel. See an example with emacs: https://www.informatimago.com/linux/emacs-on-user-mode-linux.html for a CL implementation with (or new emacs with modules) FFI it would be easier, since we could do the mount directly.
<elderK> It's like my tinkering on binary-typesy stuff has taken a backseat - past couple weeks I've been spending a ton of time learning about lisp implementation.
<|3b|> (2 other unrelated projects ahead of it, then i probably want to work on spirv compiler some more before vulkan itself, and also get a better idea of how i want to use it)
<elderK> Lots of reading old research papers and things, too. It's been fascinating.
<elderK> |3b|: Sounds smart.
<|3b|> spirv is probably less important now than when i last worked on it though, since i think there are glsl extensions available that would be good enough for getting the rest working
<ebrasca> |3b|: Have you read https://vulkan-tutorial.com/ ?
<|3b|> knowing how i (or anyone else for that matter) would want to use it is important though, hard to design good abstractions without knowing use cases
<elderK> |3b|: Parsing the spec file looks kind of horrible
<elderK> Like, the spec file itself looks kind of horrible.
<elderK> so much C assumption there
<|3b|> ebrasca: i'm not sure that was available last time i looked at vulkan
orivej has joined #lisp
<|3b|> elderK: at least it is c :)
<ebrasca> |3b|: I think you can do something like this.
<|3b|> and a lot of the C stuff can be ignored
<elderK> |3b|: Sweet.
<ebrasca> |3b|: I don't know how to make instance of VkSurfaceKHR .
dddddd has joined #lisp
<ebrasca> |3b|: What 2 projects are you doing?
<|3b|> trying to assemble a thermal camera (mostly just having trouble getting a working configuration of a 64bit arm board with all the drivers i need), and writing some simple utilities for android
<ebrasca> |3b|: I am using linux not win32.
notzmv has quit [Ping timeout: 268 seconds]
<|3b|> https://github.com/3b/cl-vulkan/blob/master/vk/wrappers.lisp#L451 is the definition of the function/macro it uses, so you will need to write a linux version of that for whichever OS API you use
random-nick has quit [Ping timeout: 250 seconds]
<|3b|> but could be xcb or wayland or whatever depending on how you create the window
<|3b|> (but probably xlib)
<ebrasca> I am new to vulkan and glfw.
<|3b|> what glfw bindings do you use?
notzmv has joined #lisp
<ebrasca> |3b|: borodust sugested me https://github.com/borodust/bodge-glfw .
varjag has joined #lisp
<|3b|> ok, looks like glfw wants you to call its functions to create the surface, so you will have to look at bodge-glfw to figure that out
<ebrasca> |3b|: I have someting like http://ix.io/1w2X/lisp .
<|3b|> and probably write a similar with-*-surface macro to handle destroying it
<ebrasca> |3b|: Some years ago you helped me with cl-opengl. I'd like to be more like you and write good code and look like I know everything.
<|3b|> %glfw:create-window-surface + %vk:destroy-surface-khr should replace the vk:with-xlib-surface
<|3b|> i'm not sure you are calling %glfw:create-window-surface correctly though, the last argument is a foreign pointer to the surface
atgreen_ has joined #lisp
<|3b|> you will need to ask borodust how to call that, i'm not familiar with the wrapper generator it uses
themsay has quit [Ping timeout: 250 seconds]
atgreen has quit [Ping timeout: 250 seconds]
<jackdaniel> |3b|: new lesson learned! you look like you know everything , don't spoil it with "I'm not familiar…" talk ;-)
heisig has quit [Quit: Leaving]
<|3b|> ebrasca: #++ is my lazy way of commenting out forms. slightly shorter and easier to type than #+()
scymtym has joined #lisp
<jackdaniel> nb: both ways are incorrect, it should be even longer #+(or)
<|3b|> yeah, that
* |3b| forgets the 'correct' way since i don't use it :)
<ebrasca> |3b|: Your macro "with-with" is very interesting.
<jackdaniel> or if anyone bothers to define a reader macro, it *could be* #;something
<|3b|> emacs/slime understands #++
<jackdaniel> right, that is a good argument against #;
<|3b|> and :+ on *features* seems sufficiently unusual that i'm willing to just let someone send me a patch to fix it if they come up with a good reason they need it
<|3b|> (same as odd reader/printer settings, etc)
<|3b|> not like people can't push :sbcl :ccl :genera etc onto *features* at random too :)
<|3b|> (or remove them for that matter, though ideally most code would have pure CL fallbacks for non-bugfix cases)
<jackdaniel> I'm taking my time to correct all #+nil 's to #+(or) when I encounter them in code I work with (for sake of correctness by default)
<|3b|> i usually try to just remove them completely once i'm done actively working on something
SaganMan has joined #lisp
<|3b|> most of them are debugging and/or multiple attempts at something i haven't figured out yet
<pfdietz> Hmm. A use for an Eclector-based code rewriting tool.
robdog has quit [Remote host closed the connection]
<ebrasca> pfdietz: What do you mean with ^?
rnmhdn has joined #lisp
<pjb> I prefer the more direct #+(and) included or #-(and) skipped. #+(or) is kind of a triple negative…
<pfdietz> A tool that would use Eclector to produce a kind of parse tree that can be converted back to something that is character-for-character equivalent to the original file. Then, do rewrites on that representation to get rid of #+nil forms (the example above).
<pjb> |3b|: One reason why #++ is very bad, is this whole discussion above!
<pfdietz> It would be easier if lisp files were directly equivalent to the forms after reading, but that loses comments and other reader-handled stuff.
<|3b|> pjb: ideally it wouldn't show up much in 'released' code, unfortunately not much of my code makes it to that point :(
<ebrasca> |3b|: I going to send PR with fix for #++ .
<|3b|> ebrasca: which 'fix'?
<pjb> #+(and)
random-nick has joined #lisp
<ebrasca> |3b|: ^
<|3b|> #-(and) you mean?
<pjb> Right.
<|3b|> ok, i'd probably apply that if sent, though i'd probably keep using #++ if i worked on it some more :)
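To spell out the semantics behind the #+(or) idiom discussed above:

    ;; (OR) of no feature tests is false, so #+(or) always skips the next
    ;; form; (AND) of no tests is true, so #-(and) always skips it too.
    (read-from-string "(a #+(or) b c)")   ; first value => (A C)
    (read-from-string "(a #-(and) b c)")  ; first value => (A C)
    (read-from-string "(a #+(and) b c)")  ; first value => (A B C)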
<scymtym> pfdietz: i made a very quick and dirty demo for something like this a few weeks ago (in this case replacing IF without alternative leg with WHEN): https://techfak.de/~jmoringe/refactor.png
<pfdietz> The analogous tool in the C world is clang-tidy
atgreen_ has quit [Remote host closed the connection]
atgreen_ has joined #lisp
<pfdietz> Although that can be a bit more semantic.
<ebrasca> pfdietz: Is there some tool for detecting duplication in code?
<pfdietz> Clone detectors? Lots of work on that in general. Dont know about for Common Lisp specifically.
<ebrasca> mmm run some detector of duplication on all cl libraries and get better code.
<pfdietz> I'd be happy to start with getting rid of package name duplication. :)
<ebrasca> |3b|: Do you know how I can start making vulkan for mezzano OS?
<pjb> |3b|: use something like: https://pastebin.com/vYhbL47s
rnmhdn has quit [Ping timeout: 240 seconds]
<ebrasca> |3b|: Have in mind I have done bad fat32 implementation and part of ext for mezzano.
rnmhdn has joined #lisp
Zaab1t has joined #lisp
<pjb> I guess something like my electric-+-suppress https://pastebin.com/vYhbL47s could be achieved with abbrev too.
rnmhdn has quit [Ping timeout: 244 seconds]
rnmhdn has joined #lisp
yvy has joined #lisp
gravicappa has quit [Ping timeout: 250 seconds]
FreeBirdLjj has joined #lisp
FreeBirdLjj has quit [Ping timeout: 250 seconds]
cpt_nemo has joined #lisp
rnmhdn has quit [Ping timeout: 250 seconds]
hectorhonn has joined #lisp
<hectorhonn> how do i specify that a function returns a list of integers?
<beach> You can't.
<beach> ... unless, of course, the length of the list is constant.
hectorhonn has quit [Ping timeout: 256 seconds]
lose has joined #lisp
hectorhonn has joined #lisp
<hectorhonn> beach: oh dear. how about a function that returns an integer?
<beach> Sure.
<beach> Why do you care so much about specifying that? Just make sure that it does return an integer or a list of integers.
<hectorhonn> it would make it easy to read
<hectorhonn> instead of reading the entire function, just look at the signature
<beach> Put it in a comment then.
<hectorhonn> hmm, so its like python then
<hectorhonn> beach: thanks!
<beach> hectorhonn: Some authors claim that statically typed languages force the programmer to supply information early on in a project; information that is then very likely to change later.
<beach> More often than not, the information is about representation of objects, which is an implementation detail that can change later. If you want to supply type information, you should do it in terms of the abstract types of your protocol.
<hectorhonn> beach: true. on the other hand, changing the return type of a function would require a change at all call sites, so imho its better to have that kind of information supplied early during design and set in stone. then write an wrapper as an abstraction for the function if things do change
<hectorhonn> beach: yup, the abstract types of the protocol, that's what i meant
<beach> A list of integers is hardly an abstract type.
robdog has joined #lisp
<hectorhonn> well.. technically its not very abstract :D
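A footnote to this exchange: Common Lisp does allow *declaring* a return type, though it is a promise to the compiler rather than an enforced guarantee, and the element type of a list cannot be expressed -- a sketch:

    ;; The integer return type can be declared...
    (declaim (ftype (function (integer) integer) square))
    (defun square (n) (* n n))

    ;; ...but "list of integers" can only be declared as LIST.
    (declaim (ftype (function (integer) list) first-n-integers))
    (defun first-n-integers (n)
      (loop for i below n collect i))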
razzy has joined #lisp
varjag has quit [Ping timeout: 250 seconds]
mulk has quit [Ping timeout: 244 seconds]
hectorhonn has left #lisp [#lisp]
mulk has joined #lisp
lose has quit [Ping timeout: 260 seconds]
<scymtym> pfdietz: thank you for the idea. prototyping this revealed a few shortcomings in eclector: https://techfak.de/~jmoringe/bad-reader-conditionals.png
<pfdietz> 'todo' and 'later' were coming out as nil?
<scymtym> no, i explicitly added those to the rule
<pfdietz> Ah ok
<scymtym> the shortcomings were related to the fact that this reader client does not use host symbols or packages
lose has joined #lisp
notzmv has quit [Ping timeout: 250 seconds]
notzmv has joined #lisp
SaganMan is now known as Mysterion
rnmhdn has joined #lisp
gravicappa has joined #lisp
bars0 has joined #lisp
slyrus1 has joined #lisp
kajo has quit [Ping timeout: 250 seconds]
Lycurgus has joined #lisp
kajo has joined #lisp
rnmhdn has quit [Ping timeout: 268 seconds]
bars0 has quit [Ping timeout: 240 seconds]
asarch has joined #lisp
kajo has quit [Ping timeout: 250 seconds]
slyrus1 has quit [Ping timeout: 250 seconds]
kajo has joined #lisp
rnmhdn has joined #lisp
kajo has quit [Ping timeout: 250 seconds]
kajo has joined #lisp
rnmhdn has quit [Ping timeout: 250 seconds]
shifty has quit [Ping timeout: 250 seconds]
milanj has quit [Quit: This computer has gone to sleep]
hiroaki has joined #lisp
Lycurgus has quit [Quit: Exeunt]
igemnace has quit [Ping timeout: 246 seconds]
dvdmuckle has quit [Quit: Bouncer Surgery]
dvdmuckle has joined #lisp
<jcowan> beach: that definition doesn't seem to be particularly CL-centric; I think it could apply to almost any language with (dynamic) types.
rnmhdn has joined #lisp
<verisimilitude> That's effectively what I thought, as well.
* beach doesn't know what is being referred to.
MoziM has joined #lisp
<MoziM> is Univac 1100 Lisp still relevant to modern lisps? www.frobenius.com/source.htm
<beach> No
grumble is now known as \x01VERSION\x01
themsay has joined #lisp
themsay has quit [Ping timeout: 250 seconds]
<ggole> "Interesting" source, with the manual namespacing and mixed brackets
<pjb> MoziM: however, Univac were nice computers. It would be fun to implement (or find) an emulator, and run this lisp.
<pjb> Also, the compiler is written in lisp, so you could easily write a driver to run it in CL…
<pjb> (with some reader macros ;-))
ealfonso has joined #lisp
dddddd has quit [Remote host closed the connection]
<MoziM> is this an accurate way to summarize the "LISP" way of thinking? still trying to get my head around it https://i.imgur.com/m2BmyGa.png
ealfonso has quit [Read error: No route to host]
jack_rabbit has quit [Ping timeout: 268 seconds]
Mysterion is now known as blackadder
notzmv has quit [Ping timeout: 268 seconds]
notzmv has joined #lisp
meepdeew has quit [Remote host closed the connection]
voidlily_ has quit [Ping timeout: 268 seconds]
Kundry_Wag has joined #lisp
Kundry_Wag has quit [Client Quit]
jmercouris has joined #lisp
<jmercouris> (cl-string-match:match-re "\:" "https://www.google.com") --> NIL
<jmercouris> should it not be matching? I'm escaping ":" with "\"
<jmercouris> cl-string-match uses pcre, just for reference
kajo has quit [Ping timeout: 250 seconds]
kajo has joined #lisp
lose has quit [Ping timeout: 250 seconds]
voidlily_ has joined #lisp
Lord_of_Life has joined #lisp
cmjones has quit [Read error: Connection reset by peer]
themsay has joined #lisp
Arcaelyx has joined #lisp
mrcom has quit [Read error: Connection reset by peer]
lose has joined #lisp
Lord_of_Life has quit [Quit: Laa shay'a waqi'un moutlaq bale kouloun moumkine]
Lord_of_Life has joined #lisp
voidlily_ has quit [Ping timeout: 260 seconds]
jmercouris has quit [Ping timeout: 245 seconds]
jmercouris has joined #lisp
jmercouris has quit [Client Quit]
ggole has quit [Quit: ggole]
scymtym has quit [Ping timeout: 268 seconds]
<drmeister> Hey folks - bordeaux-threads *default-special-bindings* is confusing me - in what thread are the forms supposed to be evaluated?
<drmeister> It says in the documentation for *default-special-bindings*: "Forms are evaluated in the new thread or in the calling thread? Standard contents of this list: print/reader control, etc. Can borrow the Franz equivalent?"
<drmeister> What the heck kind of documentation is that?
voidlily_ has joined #lisp
<drmeister> It makes more sense to me to evaluate the forms in the parent thread because then bindings like ( ('*print-pretty* . *print-pretty*)) will get the value of the *print-pretty* binding of the parent. If you evaluate *print-pretty* in the child thread - then you will get the global value.
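One way to sidestep the ambiguity drmeister points out is to skip *DEFAULT-SPECIAL-BINDINGS* entirely and capture the parent thread's values explicitly -- a sketch against the basic bordeaux-threads API:

    (defun make-thread-with-print-settings (function)
      ;; Read the dynamic values in the spawning (parent) thread...
      (let ((pretty *print-pretty*)
            (base *print-base*))
        (bt:make-thread
         (lambda ()
           ;; ...and rebind them inside the new (child) thread.
           (let ((*print-pretty* pretty)
                 (*print-base* base))
             (funcall function))))))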
mrcom has joined #lisp
kajo has quit [Quit: From my rotting body, flowers shall grow and I am in them and that is eternity. -- E. M.]
random-nick has quit [Ping timeout: 268 seconds]
<beach> MoziM: No, that looks totally wrong. And we don't write it "LISP" anymore. We write it "Lisp".
<beach> MoziM: Common Lisp allows both assignments and sequences of forms (expressions to evaluate).
pierpal has quit [Read error: Connection reset by peer]
razzy has quit [Ping timeout: 250 seconds]
equwal has left #lisp ["ERC (IRC client for Emacs 27.0.50)"]
<beach> MoziM: And the "value of a symbol" thing is wrong. Common Lisp uses eager evaluation.
dddddd has joined #lisp
<beach> MoziM: And (A+B) isn't valid Common Lisp syntax.
<beach> MoziM: But if you do (SETQ X (+ A B)), then the addition is computed once and the result is stored as the value of X.
equwal has joined #lisp
equwal has left #lisp ["ERC (IRC client for Emacs 27.0.50)"]
equwal has joined #lisp
equwal has left #lisp ["ERC (IRC client for Emacs 27.0.50)"]
equwal has joined #lisp
milanj has joined #lisp
equwal has left #lisp ["ERC (IRC client for Emacs 27.0.50)"]
logicmoo is now known as dmiles
pierpal has joined #lisp
<jcowan> interestingly, Univac Lisp is a Lisp-1; that seems to be a common change for people reinventing Lisp from scratch
<pjb> So, for who searched for a non-emacs editor, here is one, in univac lisp: http://www.frobenius.com/source.htm
<pjb> So, for who searched for a non-emacs editor, here is one, in univac lisp: http://www.frobenius.com/editor.htm
<pjb> jcowan: it's a simplification for the implementer…
random-nick has joined #lisp
pierpal has quit [Ping timeout: 268 seconds]
lose has quit [Ping timeout: 250 seconds]
<jcowan> indeed it iw
<jcowan> is
<pjb> tarball at ftp://ftp.informatimago.com/pub/lisp/univac-1100-lisp.tar.bz2
<jcowan> interestingly, it has a non-closed function constructor spelled LAMBDA, and an otherwise identical closure constructor spelled LAMDA (which is how it is spelled in Modern Greek)
<jcowan> so the "classical vs. modern" lambda
equwal has joined #lisp
<Bike> ew.
rumbler31 has joined #lisp
sunset_NOVA has joined #lisp
jmercouris has joined #lisp
rumbler31 has quit [Ping timeout: 244 seconds]
pierpal has joined #lisp
jmercouris has quit [Remote host closed the connection]
graphene has quit [Remote host closed the connection]
<phoe> What is the function that does the inverse of REMOVE? I want to keep all the elements instead of removing them.
graphene has joined #lisp
zmt01 has quit [Quit: Leaving]
<phoe> I can do it via (remove-if-not (curry #'eql thing) list) but I wonder if there's a shorter way.
<Bike> :test-not #'eql, i think
<Bike> this is, of course, confusing
<phoe> Well, yes, that works.
<phoe> It's the first time I use :TEST-NOT.
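Both spellings side by side (the standard deprecates :TEST-NOT, though implementations keep supporting it):

    (remove 2 '(1 2 3 2 4) :test-not #'eql)              ; => (2 2)
    (remove-if-not (lambda (x) (eql x 2)) '(1 2 3 2 4))  ; => (2 2)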
angavrilov has quit [Remote host closed the connection]
Zaab1t has quit [Quit: bye bye friends]
rnmhdn has quit [Ping timeout: 246 seconds]
equwal has left #lisp ["ERC (IRC client for Emacs 27.0.50)"]
equwal has joined #lisp
equwal has left #lisp ["ERC (IRC client for Emacs 27.0.50)"]
gravicappa has quit [Ping timeout: 250 seconds]
rippa has quit [Quit: {#%${%&+'${`%&NO CARRIER]
_whitelogger has joined #lisp
blackadder has quit [Quit: WeeChat 1.6]
Mr-Potter has joined #lisp
orivej has quit [Ping timeout: 240 seconds]
razzy has joined #lisp
notzmv has quit [Ping timeout: 245 seconds]
gigetoo has quit [Ping timeout: 250 seconds]
vlatkoB_ has quit [Remote host closed the connection]
notzmv has joined #lisp
<phoe> I have a class object. I want to remove it from the Lisp system altogether. No live instances of that class remain. Is it enough to call REMOVE-DIRECT-SUBCLASS on all of the direct superclasses of that class and remove all methods that specialize on that class?
<LdBeth> good afternnon
<phoe> Hey LdBeth
orivej has joined #lisp
<LdBeth> phoe: for your question, I think the class is kept somewhere that FIND-CLASS can get it by its name
<phoe> I am already after (setf (find-class 'foo) nil).
<phoe> I want to ensure that the class object itself is inaccessible. This means removing everything that links to it from CLOS.
<phoe> Including other class objects and method objects.
equwal has joined #lisp
<LdBeth> phoe: It might be easier to delete an entire package, I guess
<flip214> phoe: I think I discovered a nice trick for that
<LdBeth> interesting, so seems (setf find-class) to nil is sufficient, all the other things are mop specific, https://groups.google.com/forum/#!topic/comp.lang.lisp/hzQ7RRTK4Lg
yvy has quit [Read error: Connection reset by peer]
<flip214> (defclass empty () ()) and (change-class (find-class 'X) 'empty)
scymtym has joined #lisp
<flip214> but then UNINTERNing 'X might be a good idea as well
kajo has joined #lisp
<flip214> phoe: alternatively just replace your class with one that has no slots - then all the accessors should become invalid at once
<phoe> flip214: does that cause the class to be garbage collected afterwards?
<flip214> I would hope so... of course you must not have any instances referenced anywhere
<phoe> I don't think so. X is then a direct subclass of EMPTY, so there still is a strong reference.
robotoad has quit [Ping timeout: 250 seconds]
<flip214> why should it become a subclass? the (CHANGE-CLASS (FIND-CLASS 'X) 'empty) replaces the _class_ object with (an incompatible) object
<phoe> (c2mop:class-direct-subclasses (find-class 'standard-object)) still has a reference to it.
<phoe> ;=> (#<EMPTY {100A3127B3}> ...)
<phoe> try it out yourself: (defclass empty () ()) (defclass foo () ()) (change-class (find-class 'foo) 'empty) (c2mop:class-direct-subclasses (find-class 'standard-object))
<phoe> then the first object of that list.
lnostdal has quit [Quit: https://www.Quanto.ga/]
<flip214> Hm, though the class FOO is destroyed, the GF are still here
<phoe> It isn't destroyed.
<flip214> and the whole class system becomes dead
<flip214> well, its definition is broken
<phoe> Yep, but the object itself is still live.
<phoe> I'm trying to figure out how to end its life.
lnostdal has joined #lisp
<flip214> my guess is the best we can do is some hack with CHANGE-CLASS - replace the class object with an empty one
<flip214> but (DEFCLASS foo () ()) still leaves the class available
<flip214> at least all the accessor methods are gone
pierpal has quit [Ping timeout: 268 seconds]
<phoe> Removing STANDARD-OBJECT as its direct superclass and SETF FIND-CLASS NIL are not enough.
<flip214> phoe: how about (setf (class-name (find-class 'X)) (gensym))
<phoe> I don't want to play with its name.
<phoe> I want the class object to get garbage-collected.
<phoe> I want to make it inaccessible.
<flip214> well, with that SETF you should make it inaccessible. and if there are no more references, it should (might?) be gone at some later time
<flip214> phoe:
<flip214> (setf
<flip214> (slot-value (find-class 'standard-object) 'sb-pcl::direct-subclasses)
<flip214> (remove 'X
<flip214> (slot-value (find-class 'standard-object) 'sb-pcl::direct-subclasses)
<flip214> :key #'class-name))
<phoe> flip214: it's not gone. I define a finalizer on it to check if it gets GCed.
<flip214> well, do you have a reference in *, **, or ***?
<flip214> phoe: how did you define the finalizer?
<phoe> (sb-ext:finalize ** (lambda () (print "a")))
<phoe> I have called the GC a few times in a row so there's no reference in the REPL vars.
<flip214> hmmm, hu.dwim.debug:path-to-root doesn't work for me
meepdeew has joined #lisp
<phoe> Neither it does for me.
<phoe> Where should I file bugs for it?
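Collecting the scrubbing steps discussed in this exchange into one sketch, using the closer-mop names mentioned above; whether the class object then actually becomes collectable is up to the implementation, which may hold further internal references:

    (defun scrub-class (name)
      "Try to unlink the class named NAME from CLOS; return the class object."
      (let ((class (find-class name)))
        ;; Remove all methods specializing on the class.
        (dolist (method (copy-list (c2mop:specializer-direct-methods class)))
          (remove-method (c2mop:method-generic-function method) method))
        ;; Unlink the class from its direct superclasses.
        (dolist (super (c2mop:class-direct-superclasses class))
          (c2mop:remove-direct-subclass super class))
        ;; Remove the name -> class mapping.
        (setf (find-class name) nil)
        class))

    ;; Verifying collection on SBCL, as phoe does above:
    ;; (sb-ext:finalize (scrub-class 'foo) (lambda () (print "collected")))
    ;; (sb-ext:gc :full t)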
yvy has joined #lisp
razzy has quit [Ping timeout: 246 seconds]
asdf_asdf_asdf has joined #lisp
debsan has joined #lisp
iovec has quit []
sunset_NOVA has quit [Quit: leaving]
iovec has joined #lisp
graphene has quit [Remote host closed the connection]
graphene has joined #lisp
asdf_asdf_asdf has quit [Quit: http://www.kiwiirc.com/ - A hand crafted IRC client]
arauca has joined #lisp
arauca is now known as akoana
graphene has quit [Remote host closed the connection]
graphene has joined #lisp
Josh_2 has joined #lisp
razzy has joined #lisp
notzmv has quit [Ping timeout: 268 seconds]
notzmv has joined #lisp
akoana has left #lisp [#lisp]
hiroaki has quit [Ping timeout: 252 seconds]
themsay has quit [Read error: Connection reset by peer]
themsay has joined #lisp
shachaf has quit [Ping timeout: 272 seconds]
Mr-Potter has quit [Quit: Leaving]
nchambers has joined #lisp
themsay has quit [Ping timeout: 244 seconds]
shifty has joined #lisp
lemoinem has quit [Read error: Connection reset by peer]
lemoinem has joined #lisp
nopolitica has quit [Quit: WeeChat 2.2]
random-nick has quit [Ping timeout: 240 seconds]
equwal has left #lisp ["ERC (IRC client for Emacs 27.0.50)"]
vms14 has joined #lisp
<vms14> there is someone alive here?
<phoe> yes
<phoe> what's up?
<vms14> I'm just falling in love with lisp
<phoe> Sure, it happens
<vms14> I saw a lot of posts of lisp fans saying with no problem that lisp is the best language
<vms14> and that when you learn lisp you'll miss those features in other languages
<vms14> I just wanted to see if it was only fanboy stuff, or it's really good
<vms14> atm I cannot appreciate that, but the few I see, I like it
<pjb> it's really good. But you're asking in the core of the fan zone, in #lisp!
<phoe> Lisp is just yet another programming language equivalent to almost all others by means of Turing completeness.
<vms14> I see the best feature of lisp is that is easy to change the language itself
<pjb> see: Turing Tar Pit.
<vms14> and that is good for prototypes
<phoe> But it's nonetheless a pretty fun and useful one, at least for me, due to its nature of being very bendy and malleable.
http://nrich.maths.org/8169/solution?nomenu=1 | ## 'That Number Square!' printed from http://nrich.maths.org/
When you arrive in the classroom on Monday morning you discover all the numbers have fallen off the class number square and they are in a heap on the floor. All that is left on the wall is a blank grid!
There's five minutes to go before the lesson starts and you need the number square.
Can you find a quick way of putting the numbers back in their right places on the grid?
Where will you start?
Before you start think - does your class number square start with $0$ or $1$? Or a different number?
Some of your friends may want to have a go too.
What different ideas are there about how to put the number tiles back as quickly as possible?
Click here to see how Jonah started.
Let us know what you think is a 'smart' way of putting the number square back together.
How quickly can you achieve this using your 'smart' strategy?
https://www.physicsforums.com/threads/two-ode-problems-not-sure-about.430735/ | # Two ODE problems not sure about
1. Sep 20, 2010
### clynne21
1. The problem statement, all variables and given/known data
consider a lake that is stocked with walleye pike and that the pike population is governed by P'=.1P(1-P/10) where time is measured in days and P is thousands of fish. Suppose that fishing is started in this lake and that 100 fish are removed daily. modify the logistic model to account for the fishing
2. Relevant equations
P'=.1P(1-P/10)
3. The attempt at a solution
I am thinking it's just P'=.1P(1-P/10)-.1 but that seems too easy LOL. any thoughts?
1. The problem statement, all variables and given/known data
Suppose a population is growing according to the logistic eqn
dP/dt=rP(1-P/K)
Prove that the rate at which the population is increasing is at its greatest when the population is at one-half of its carrying capacity. Hint: Consider the second derivative of P
2. Relevant equations
dP/dt=rP(1-P/K)
3. The attempt at a solution
2. Sep 20, 2010
### vela
Staff Emeritus
Looks good to me.
the rate at which the population is increasing = dP/dt
is at its greatest = is maximized
You should recall from calculus that when a function attains a local maximum, its derivative is equal to 0. In this problem, the function is dP/dt, and its derivative is therefore d2P/dt2. You want to set that equal to 0 and solve for P.
Next, you want to find the carrying capacity of the system in terms of r and K. Do you know how to find this?
3. Sep 21, 2010
### clynne21
Got through the first question fine and did take the second derivative of the equation, but once I do that all variables are codependent on each other so setting it equal to zero makes everything zero.
d2P/dt2= -2Pr/K
so I'm kind of at a loss of how to make that a maximum. I must be missing something. Thank you for the help! I do know how to find the carrying capacity.
4. Sep 21, 2010
### vela
Staff Emeritus
You calculated the second derivative incorrectly. You have to use the product rule, or you can just multiply dP/dt out first:
$$\frac{dP}{dt} = rP - \frac{r}{K} P^2$$
and then differentiate each term separately. Don't forget you're differentiating with respect to t, not P.
5. Sep 21, 2010
### clynne21
That's where I messed up! Was very tired last night LOL. Thanks for straightening me out!
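For reference, the differentiation the hint points to works out as:

$$\frac{d^2P}{dt^2} = r\frac{dP}{dt} - \frac{2r}{K}P\frac{dP}{dt} = r\left(1 - \frac{2P}{K}\right)\frac{dP}{dt}$$

Setting the parenthesized factor to zero (leaving aside the equilibria where dP/dt itself vanishes) gives P = K/2, so the growth rate is greatest at half the carrying capacity K.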
https://www.nextgurukul.in/wiki/concept/cbse/class-7/geography/our-changing-earth/earth-movements/3959601 | Notes On Earth Movements - CBSE Class 7 Geography
Alfred Wegener’s theory of continental drift proposed that millions of years ago, all the continents were joined in a super-continent, which he called Pangaea. Due to the movements of the lithospheric plates, the supercontinent broke into large landmasses that drifted away. The big landmasses became continents, and water collected between these landmasses to form the oceans.

The lithosphere consists of a number of plates called the lithospheric plates or the tectonic plates. The circular movement of the molten magma inside the earth causes the tectonic plates to move slowly. The earth’s movements can be categorised into two types based on the forces that initiate them: endogenic or exogenic. Endogenic movements are caused by forces acting in the interior of the earth, while forces on the surface of the earth result in exogenic movements.

Endogenic forces can be sudden or diastrophic. Sudden endogenic movements may result in natural disasters, like earthquakes, eruptions of volcanoes and landslides. Diastrophic forces refer to forces generated by the movement of the solid material of the earth’s crust. Exogenic forces can take the form of weathering, erosion and deposition. Weathering is the breaking of rocks on the earth’s surface by different agents like rivers, wind, sea waves and glaciers. Erosion is the carrying of broken rocks from one place to another by natural agents like wind, water and glaciers.
https://tetrisconcept.net/threads/texmaster.2/page-37 | # Texmaster
Thread in 'Discussion' started by Report, 29 Jan 2008.
1. ### Anonymous (Unregistered)
that worked thanks.
2. ### DrPete
I have had this same problem and fixed it by running Texmaster2009.ubuntu.10.04.amd64 and copying Texmaster2009.ubuntu.10.04.nv to Texmaster2009.nv and Texmaster2009.ubuntu.10.04.sav to Texmaster2009.sav.
I didn't look too hard at what the problem was but I assume the files it comes with are only loadable with the 32-bit binary.
3. ### XeaL (i swear!) (Unregistered)
Hey, does anyone have a re-host of texmaster?
I tried dl but link is broken?
4. ### XeaL (i swear!) (Unregistered)
Nvm. went back one page and found the links..
Thanks guys!
5. ### Rich Nagel
Sorry to dig up these old replies in this thread. I was wondering if anyone had ever discovered any info on this? I'd LOVE to add the original Tracker tunes to my Tracker collection... if that is indeed what the source of the music is
6. ### Righbach (Unregistered)
has anyone come across a glitch where the "left key" is held down constantly?
I have attempted to replace my .ini several times but somehow I am still getting this issue. When I set the left action to arrow left or even a separate key, the issue still continues.
7. ### Zaphod77 (Resident Misinformer)
I wonder if there is any chance the randomizer will be recoded for TI modes, since we now know it's not the same as TAP. (rolls are from a 35 piece bag, and pieces rolled and taken out of the bag are replaced with the least dealt piece, even if they are rerolled after. the new randomizer is even kinder than TAP's 6 rolls)
8. ### DiscoCokkroach
Is it known that this game crashes like crazy when you play Doubles? It's really fun when it works, but I'm not sure if finishing a game of Doubles is possible on the current version of Texmaster.
9. ### Qlex
It's been noted, I think some people have completed it, but when you clear it the timer doesn't stop iirc
DiscoCokkroach likes this.
11. ### Alexaus
Hey everybody
First post, woo!!!
With that out of the way, I was wondering if anybody would be kind enough to help me out.
Hopefully this is the correct place to post this, and I apologize if this has been answered before -- I really couldn't find a similar thread where it was answered, but then again, I'm not great at sifting through forums
Okay, so, I am trying to customize Texmaster to be more TGM visually and aurally. Texmaster is absolutely fantastic as is, especially for being purely fan made, and it's to my understanding that it's the most faithful (mechanically) to the TGM series. So far I've been able to customize Nullpomino to appear like TGM1 -- bgs, sound, and music -- which I'm loving, and now I'm attempting to do the same for Texmaster (note: not the official music or sounds, but those that I could find that seemed the closest). Nullpomino and Heboris are also great games that I'm enjoying a whole lot of, and I have no desire to alter Heboris because I actually quite like its initial aesthetic, it's just the fact that Texmaster is the best TGM clone that I want to take it upon myself to modify it as I've done with Nullpomino -- and, well, the default assets can be a bit distracting (that's really beside the point, since gameplay should matter more).
Before going any further, I suppose this the best spot to stop reading and just let me know if the answer is a simple 'no, it can't be done'. But if it's possible, then please continue.
The version I downloaded is the first link in the original post (2010/05/01 - Texmaster 2009 Version 3 (SDL-1.2.14). The backgrounds were a piece of cake to change, so no problems there. On the other hand, I've had no success with changing the music and sounds (only attempted music but I feel I'd get the same results with sound). In the Tetris Concept wiki for Texmaster, I was instructed to use .wav files with specific file names and to put them in the /data/bgm/ directory. In the /data/ directory, I also changed the corresponding values referenced in the "data.html", "pcm_list.txt", and "special_loop.txt" files to reflect the file name of the new song file I was replacing with the old song file. (in this case, changing tm2_1.adpcm [original file] to tm2_1.wav [new song]). When I went to test this method, there would be silence in-game during the mode I changed the music for. When I press F7 for sound test, the new file is there (tm2_1.wav), and it will play, but it is just silence. I've tried converting .wav and .mp3 files with a myriad of different PCM encodings and bit ratings, including the Microsoft ADPCM encoding, all of which resulted in .wav files. When I played them in my media player, they would play fine, and it gave me the correct encoder the music file was encoded with (for example, Microsoft ADPCM). Tried those .wav files, same process as before, and I'd either get the same result as before (silence in-game and in sound test, but the game was reading the file), or the file wouldn't appear at all in Texmaster (as in, tm2_1.wav can't be found in sound test). Am I missing a file where the sound file is referenced else and need to change it there as well (aside from data.html, pcm_list.txt, and special loop.txt)?
Sorry for the really long post. I figured it would be worse if I was one of 'those' people that post "halp. how change muzik to lil t's fire mixtape? thx :3"; I mean, at least I explained how I tried to get Lil' T's fire mixtape on Texmaster....
Thank you for taking the time to read, and I appreciate any and all help I can get.
Last edited: 14 Aug 2015
12. ### nahucirujano
Wow, indeed, that was long xD
Lastest version doesn't use .wav files, so don't edit pcm_list.txt
You need the adpcm encoder, which is included with Texmaster Version 2 Beta 3.
Just drag and drop your wav file into adpcm.exe and you'll have the .adpcm file which you now can use with lastest Texmaster version.
By the way, my customized Texmaster version is the best, it has all sound and music (even looped), but yes, I cannot share any links here.
So, if you want it, just send me a PM.
Alexaus likes this.
13. ### Alexaus
Okie dokie! I was going to be embarrassed if the answer was something obvious but I was overlooking it somehow because I tried really hard with various methods on my own with no luck, haha.
I'm glad I save myself from 'some' embarrassment and that I was pointed in the right direction. I'm going to try out your suggestion and edit this post with my results so I don't risk making a double-post.
Thank you very much for the swift reply and help
EDIT:
Alright, I have tried the method that nahucirujano suggested and it worked! Woo
I've only tried one song, but I think it is safe to say that it's going to work for whatever I throw at it. I'm happy that I know what to do now, but part of me is curious as to why this specific method of creating .adpcm files allows Texmaster to read the files properly as opposed to the methods of creating .adpcm files I tried in the past. Don't feel obligated to answer this question, I'm just thinking out loud, really, haha. As for changing the sound effects, I'm guessing it's the same process? Drag and drop the .wav into the adpcm.exe and swap the new file for the old one in the /data/sound/ directory? That's the method I'm going to try first. (edit note: yep, this method worked for sound)
Considering other users might run into this problem, wouldn't it be wise to update the Texmaster wiki entry with the correct method of changing music and sound? I don't know, I'm not sure if it's that big of a deal, but I'm thinking about the possibility of other users in the future that might run into the same problem I had. I guess my forum post here will be enough help to get them the right answer *shrug*.
In any case, I appreciate all the help! Now I'll be on my way to cooking up some fun mods for Texmaster, hohoho
Last edited: 16 Aug 2015
14. ### Kitaru
I think something about the ADPCM format or encoding in Texmaster is somewhat custom and doesn't conform exactly to existing formats (i.e. MS ADPCM, IMA ADPCM). So, you have to use the supplied executable to generate the expected format.
15. ### Burbruee
Have texmaster running fine on my archlinux, (latest version of texmaster) but it doesn't read the .ini at all. The first time I started it up the terminal showed "Can't open file. (1003)" but since then it doesn't print anything to the terminal any more.
Doesn't seem to be a permission thing, tried to change permission of the ini and the entire folder just to be sure but no.. it doesn't read the ini.
Does it look for it in another location? Does anyone know? Would really enjoy a larger window than 320x240 on a 1080p monitor.. and change the default controls.
16. ### EnchantressOfNumbers
Were you able to get it working from the AUR? I ended up having to make some changes, that I no longer remember to get it running.
It looks like ~/.texmaster/data is a symlink to /opt/texmaster/data and ~/.texmaster/Texmaster2009 is a symlink to /opt/texmaster/Texmaster2009.
My ~/.texmaster/Texmaster2009.ini has permissions "-rw-r--r--".
Also, my /usr/bin/texmaster looks like this:
Code:
#!/bin/sh
# Small Texmaster 2009 startup script by p2k #
if ! [ -d "$HOME/.texmaster" ];then mkdir -p "$HOME/.texmaster"
cd "$HOME/.texmaster" ln -s /opt/texmaster/Texmaster2009 ln -s /opt/texmaster/data cp /opt/texmaster/Texmaster2009.ini . fi cd "$HOME/.texmaster"
./Texmaster2009 $* I hope that helps 17. ### Burbruee Actually I just downloaded the zip from the first post here, extracted the 64-bit ubuntu binary and made it executable (expected it to not work) but actually the game starts and plays absolutely perfectly except for the fact that it doesn't read the config. Figured maybe it reads the name of the binary and adds ".ini" to it, but renaming the ini to Texmaster2009.ubuntu10.04.amd64.ini didn't work either. Interestingly enough is that if I rename the binary to just Texmaster2009 then I can see for a split second that it creates a 640x480 window (so appears to read the config?) but then instantly crashes with Code: [burbruee@arch Texmaster2009-3]$ ./Texmaster2009
terminate called after throwing an instance of 'std::bad_alloc'
EDIT: Ok, extracted the 32-bit binary instead and renamed it to Texmaster2009 and it works fine, reads the config and I'm happy.
18. ### Xaphiosis
That's because Texmaster2009.ubuntu10.04.amd64 reads Texmaster2009.ubuntu10.04.ini, and NOT Texmaster2009.ubuntu10.04.amd64.ini as you would expect.
19. ### dark_samus
hello all, been playing TGM for awhile, unfortunately I've only been able to play in MAME thus far, however I figured I'd try this out. It was a great idea, there seems to be much less input lag than MAME. I'm running ubuntu 15.04 and have both 32 and 64 bit running on my system, however I'm experiencing a minor, but annoying, issue; when I press up to sonic drop a piece sometimes it seems to drop my input, it's really disorienting and has caused me to make many mistakes I normally wouldn't have made. All other inputs seem to be fine except that one, it may have to do with my driver (I'm running an AMD Radeon HD R7770 with the open source radeon driver, I can't get fglrx to work properly with it for some reason) on another note there does seem to be a slight lag when the line clear animation happens, I'm not sure if that has to do with the speed of my system (I'd guess not, I can run much more intense applications and games without issue) I'll keep testing and see if I can get these issues resolved
EDIT: tried the game in wine, there were no input issues anymore but the game seems to be back to the MAME timing (basically there's now more input lag)
Last edited: 17 Jan 2016
20. ### Betelgeuse
Hey guys, trying to get the latest version of Texmaster that's meant to work on OS X to work on my machine.
It's not using the bundled SDL.dll of course, so I just plopped SDL 1.2 into the Frameworks directory and the game started up.
Here's the weird thing. Before I renamed the ini file to the proper name, the game loaded in a tiny window and seemed to render correctly. After renaming, and making sure the renamed ini was read (I can play the game, but it's a bit slow and clunky) - the game looks like this:
Tried looking at and changing a bunch of video settings in the ini, to no avail. What do?
EDIT: The in-game metrics show that I'm getting 15/60 fps roughly, and the "speed" is fluctuating between 100-115%. That's already weird.
EDIT2: I just saw someone else in the thread was having this same problem, and zero fucks were given. I guess there's not much hope here, huh?
EDIT323245: Tried with the tiny 320x240 that the game apparently defaults to, and it works great then. Bah.
Last edited: 26 Jan 2016
Qlex likes this. | 2020-02-21 16:22:09 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.45680946111679077, "perplexity": 1883.6146624509545}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-10/segments/1581875145533.1/warc/CC-MAIN-20200221142006-20200221172006-00121.warc.gz"} |
http://em.nr.no/eng/method/ | # Method
## Statistical model
Consider a game between team A and team B. In our model, the number of goals scored by team A are Poisson distributed with parameter (a number) L(A,B). This means that we expect that team A will score about L(A,B) goals in a match against team B. Here
$L(A,B)=\text{Normal number of goals}\times \frac{\text{Strength of team A}}{\text{Strength of team B}}$
“Normal number of goals” is a parameter (a number) which is interpreted as the average number of goals scored by one team in a match between to equally good teams. “Strength of team A” is a parameter (a number), which tells how good team A is, whereas “Strength of team B” tells how good team B is. The strength of Germany is fixed to 100, and the strengths of all other teams are given relative to this.
Further, the number of goals scored by team B are Poisson distributed with parameter $$L(B,A)$$, independent of the number of goals scored by team A.
This means that if team A has a higher strength than team B, we will expect that team A wins, because L(A,B) will be greater than L(B,A). However, it will still be a possibility for a draw, or that team B wins.
Of course, our simple model is not able to cover all important aspects of a football match. However, in the championship, few match results will be available for parameter estimation, so a simple model is needed to avoid overfitting. However, in a national league, with many games over a season, one may consider several extensions to our model. These include
• Offense strength and defense strength
• That the strengths of the teams vary over the season
• Number of goals of each team are correlated
In the later years, several articles about this theme have been published in the statistical literature. A good introduction is Lee, A. (1997), “Modeling Scores in the Premier League: Is Manchester United Really the Best?”, Chance, Vol 10, pp. 15-19.
## Parameter estimation
The parameters in the model are “Normal number of goals” and the strengths of each team. These parameters must be estimated before any probability calculations can be done. Before the start of the tournament, the estimation is based on evaluations from several Norwegian football experts. The experts have guessed the results of several hundred hypothetical games, and these results are tranferred to number values of the parameters.
When the tournament starts, the real games are taken into account as well. The information value of the hypotethical games (the expert guesses) are weighted versus the real games, such that the hypothetical games are equally important the real games when all teams have played two matches. When all teams have played three or more matches, the real games are the most important part in the estimation of the parameters.
Estimating the parameters means to find the parameter values that fit the data (the match results) as good as possible. In our case, the parameters are estimated by maximising a modified Poisson likelihood. The difference from an ordinary Poisson likelihood is that it is robustified by downweighting large victories and by adding a penalty term that shrinks the individual strength parameters towards a common mean.
## Estimated strength of all teams
The current estimate of “Normal number of goals” is 1.07.
The estimated strength values of each team is given in the table to the right (sorted), together with the FIFA ranking per May 10, 2021. The discrepancy between the strength and the FIFA ranking is due to the fact that the expert opinions differ somewhat from the FIFA ranking. In addition, as the tournament progresses, teams with good results will obtain high strength, even though they might have a low FIFA ranking.
Last update: Jun 22 2021 12:19 | 2021-06-22 20:48:37 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.40471503138542175, "perplexity": 844.6617584838559}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-25/segments/1623488519735.70/warc/CC-MAIN-20210622190124-20210622220124-00516.warc.gz"} |
http://physics.stackexchange.com/tags/commutator/info | # Tag Info
## About commutator
A mathematical construct used to study the effect of applying two operators in succession. | 2014-07-30 11:21:13 | {"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8850095272064209, "perplexity": 3763.9859175961637}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-23/segments/1406510270399.7/warc/CC-MAIN-20140728011750-00472-ip-10-146-231-18.ec2.internal.warc.gz"} |
https://newproxylists.com/at-algebraic-topology-pi_2n-1operatornameso2n-element-represents-the-tangent-bundle-ts2n-not-torsion-and-indivisible/ | # at.algebraic topology – \$pi_{2n-1}(operatorname{SO}(2n))\$ element represents the tangent bundle \$TS^{2n}\$, not torsion and indivisible?
Question: Is the element $$alpha$$ in $$pi_{2n-1}(operatorname{SO}(2n))$$ representing the tangent bundle $$TS^{2n}$$ of the sphere $$S^{2n}$$ indivisible and not torsion?
My understanding so far —
An $$operatorname{SO}(2n)$$ bundle over $$S^{2n}$$ corresponds to an element in $$pi_{2n}operatorname{BSO}(2n) =pi_{2n-1}operatorname{SO}(2n)$$.
Not torsion: There does not exist any integer $$m > 0$$ such that $$malpha$$ is a trivial element.
Indivisible: There does not exist any integer $$k > 1$$ and any element $$beta$$ in $$pi_{2n-1}operatorname{SO}(2n)$$ such that $$alpha=kbeta$$.
Ref: Mimura, Toda: Topology of Lie groups. Chapter IV Corollary 6.14. | 2021-07-25 15:17:05 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 13, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5938048958778381, "perplexity": 134.34136943482847}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-31/segments/1627046151699.95/warc/CC-MAIN-20210725143345-20210725173345-00278.warc.gz"} |
https://stacks.math.columbia.edu/tag/0B5J | Lemma 27.11.8. Let $S$ be a graded ring. Let $d \geq 1$. Set $S' = S^{(d)}$ with notation as in Algebra, Section 10.56. Set $X = \text{Proj}(S)$ and $X' = \text{Proj}(S')$. There is a canonical isomorphism $i : X \to X'$ of schemes such that
1. for any graded $S$-module $M$ setting $M' = M^{(d)}$, we have a canonical isomorphism $\widetilde{M} \to i^*\widetilde{M'}$,
2. we have canonical isomorphisms $\mathcal{O}_{X}(nd) \to i^*\mathcal{O}_{X'}(n)$
and these isomorphisms are compatible with the multiplication maps of Lemma 27.9.1 and hence with the maps (27.10.1.1), (27.10.1.2), (27.10.1.3), (27.10.1.4), (27.10.1.5), and (27.10.1.6) (see proof for precise statements.
Proof. The injective ring map $S' \to S$ (which is not a homomorphism of graded rings due to our conventions), induces a map $j : \mathop{\mathrm{Spec}}(S) \to \mathop{\mathrm{Spec}}(S')$. Given a graded prime ideal $\mathfrak p \subset S$ we see that $\mathfrak p' = j(\mathfrak p) = S' \cap \mathfrak p$ is a graded prime ideal of $S'$. Moreover, if $f \in S_+$ is homogeneous and $f \not\in \mathfrak p$, then $f^ d \in S'_+$ and $f^ d \not\in \mathfrak p'$. Conversely, if $\mathfrak p' \subset S'$ is a graded prime ideal not containing some homogeneous element $f \in S'_+$, then $\mathfrak p = \{ g \in S \mid g^ d \in \mathfrak p'\}$ is a graded prime ideal of $S$ not containing $f$ whose image under $j$ is $\mathfrak p'$. To see that $\mathfrak p$ is an ideal, note that if $g, h \in \mathfrak p$, then $(g + h)^{2d} \in \mathfrak p'$ by the binomial formula and hence $g + h \in \mathfrak p'$ as $\mathfrak p'$ is a prime. In this way we see that $j$ induces a homeomorphism $i : X \to X'$. Moreover, given $f \in S_+$ homogeneous, then we have $S_{(f)} \cong S'_{(f^ d)}$. Since these isomorphisms are compatible with the restrictions mappings of Lemma 27.8.1, we see that there exists an isomorphism $i^\sharp : i^{-1}\mathcal{O}_{X'} \to \mathcal{O}_ X$ of structure sheaves on $X$ and $X'$, hence $i$ is an isomorphism of schemes.
Let $M$ be a graded $S$-module. Given $f \in S_+$ homogeneous, we have $M_{(f)} \cong M'_{(f^ d)}$, hence in exactly the same manner as above we obtain the isomorphism in (1). The isomorphisms in (2) are a special case of (1) for $M = S(nd)$ which gives $M' = S'(n)$. Let $M$ and $N$ be graded $S$-modules. Then we have
$M' \otimes _{S'} N' = (M \otimes _ S N)^{(d)} = (M \otimes _ S N)'$
as can be verified directly from the definitions. Having said this the compatibility with the multiplication maps of Lemma 27.9.1 is the commutativity of the diagram
$\xymatrix{ \widetilde M \otimes _{\mathcal{O}_ X} \widetilde N \ar[d]_{(1) \otimes (1)} \ar[r] & \widetilde{M \otimes _ S N} \ar[d]^{(1)} \\ i^*(\widetilde{M'} \otimes _{\mathcal{O}_{X'}} \widetilde{N'}) \ar[r] & i^*(\widetilde{M' \otimes _{S'} N'}) }$
This can be seen by looking at the construction of the maps over the open $D_+(f) = D_+(f^ d)$ where the top horizontal arrow is given by the map $M_{(f)} \times N_{(f)} \to (M \otimes _ S N)_{(f)}$ and the lower horizontal arrow by the map $M'_{(f^ d)} \times N'_{(f^ d)} \to (M' \otimes _{S'} N')_{(f^ d)}$. Since these maps agree via the identifications $M_{(f)} = M'_{(f^ d)}$, etc, we get the desired compatibility. We omit the proof of the other compatibilities. $\square$
There are also:
• 2 comment(s) on Section 27.11: Functoriality of Proj
In your comment you can use Markdown and LaTeX style mathematics (enclose it like $\pi$). A preview option is available if you wish to see how it works out (just click on the eye in the toolbar). | 2022-05-21 13:40:13 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 2, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 2, "x-ck12": 0, "texerror": 0, "math_score": 0.9934384226799011, "perplexity": 113.56080788086193}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-21/segments/1652662539101.40/warc/CC-MAIN-20220521112022-20220521142022-00366.warc.gz"} |
https://www.nature.com/articles/s41598-021-00307-5?utm_source=feedburner&utm_medium=feed&utm_campaign=Feed%3A+srep%2Frss%2Fcurrent+%28Scientific+Reports%29&error=cookies_not_supported&code=77a0d401-b976-416e-8566-692865fec35c | Thank you for visiting nature.com. You are using a browser version with limited support for CSS. To obtain the best experience, we recommend you use a more up to date browser (or turn off compatibility mode in Internet Explorer). In the meantime, to ensure continued support, we are displaying the site without styles and JavaScript.
# Hybrid strategy of graphene/carbon nanotube hierarchical networks for highly sensitive, flexible wearable strain sensors
## Abstract
One-dimensional and two-dimensional materials are widely used to compose the conductive network atop soft substrate to form flexible strain sensors for several wearable electronic applications. However, limited contact area and layer misplacement hinder the rapid development of flexible strain sensors based on 1D or 2D materials. To overcome these drawbacks above, we proposed a hybrid strategy by combining 1D carbon nanotubes (CNTs) and 2D graphene nanoplatelets (GNPs), and the developed strain sensor based on CNT-GNP hierarchical networks showed remarkable sensitivity and tenability. The strain sensor can be stretched in excess of 50% of its original length, showing high sensitivity (gauge factor 197 at 10% strain) and tenability (recoverable after 50% strain) due to the enhanced resistive behavior upon stretching. Moreover, the GNP-CNT hybrid thin film shows highly reproducible response for more than 1000 loading cycles, exhibiting long-term durability, which could be attributed to the GNPs conductive networks significantly strengthened by the hybridization with CNTs. Human activities such as finger bending and throat swallowing were monitored by the GNP-CNT thin film strain sensor, indicating that the stretchable sensor could lead to promising applications in wearable devices for human motion monitoring.
## Introduction
Flexible electronics has become a hot area of research attracting extensive attention in recent years1,2,3,4. Flexible tactile sensor, as a significant subpart of wearable electronics, is usually mounted on wearable equipment to detect human body movement5,6,7,8,9. Specially, strain sensors with high sensitivity, stretchability, durability, and rapid response/recovery are essential in monitoring human motion10. However, traditional strain sensors based on metal foils or semiconductor films possess unsatisfying flexibility and poor stretchability (ε < 5%) due to the brittleness of sensing materials11,12,13. Therefore, these sensors are not suitable for occasions where both high sensitivity and large stretchable range are required. In order to fabricate high-performance strain sensors, researchers have made tremendous efforts by utilizing various kinds of conductive materials such as silver nanowires7, carbon nanotubes (CNTs)14,15,16, CVD graphene17, and reduced graphene oxide18,19, as the sensing elements.
Networks of overlapping graphene nanoplatelets (GNPs) have been reportedly used in piezoresistive strain sensors, owing to superior stretchability, chemical stability, and economic/scalable synthesis10,19. Each GNP is few-layer graphene with thickness of few nanometers. When these platelets connect with each other to form a thin film, the resistance could be influenced by the contact resistances between GNPs. Upon mechanical loading, the change in film resistance originates from the disconnection, cracking, and tunneling effect between GNPs in the film plane10. To date, a large variety of flexible strain sensors with large workable strain range and/or high sensitivity have been reported. For example, Wang et al. reported a strain sensor based on buckled graphene film deposited on polydimethylsiloxane (PDMS). The sensor was able to be stretched up to ~ 30%, but showing a small GF of 220. Graphene-based strain sensor has been reported with a high gauge factor (GF) of 1000 in the strain range of 2–6%17. However, the graphene conductive network could be irreversibly broken, when the applied strain is larger than 7%, leading to the destruction of sensors under deformation. For better practical applications, it’s a significant task to obtain both large workable strain range and high sensitivity. In addition, the fabrication processes of nanomaterials and the strain sensors are generally complicated and costly, possibly generating by-products hazardous to environment. Cost-effective, scalable, and biocompatible approaches are highly demanded for fabricating strain sensors with high sensitivity and large workable strain range.
In this work, we present a strain sensor based on hybridized film of GNPs and CNT, prepared by spray-coating method. CNTs joined GNPs by the van der Waals force to form entangled networks. For comparison, strain sensors based on sole GNPs and sole CNTs thin films were also prepared by similar method. The characteristics of thin films and the sensing performance of proposed strain sensors were investigated. Besides, the application of GNP-CNT hybrid thin film strain sensor as a wearable device was demonstrated by mounting it on human finger and front neck to monitor body-motion. It is revealed that CNTs hybridization greatly improves the sensitivity of GNPs and the proposed strain sensor has a promising perspective in applications of human body monitoring.
## Results and discussion
### Fabrication process and device structure
The strain sensors were prepared via a simple spray-coating process. Figure 1 shows schematically the solution-based fabrication of the GNP-CNT hybrid thin film strain sensor. To prepare the hybrid thin films, the GNP-CNT mixed dispersion was made, following the procedure detailed in “Experimental methods” section. Using the sonicated GNP-CNT mixed solution, a spray coating technique was carried out to fabricate the GNP-CNT hybrid thin films. In brief, the GNP-CNT mixed solution was first deposited onto a PDMS substrate at an elevated temperature (controlled at 90 °C) to facilitate solution evaporation. After the spray-coating process, the GNP-CNT/PDMS film was peeled off from the glass slide substrate. Then, silver paste was applied on the two ends of the film to form the electrodes. Finally, the device was covered by a layer of PDMS for encapsulation. Similar procedure was carried out to fabricate strain sensors based on sole GNPs and sole CNTs thin films.
Figure 2 shows the working mechanism of three types of strain sensors. Neat CNTs networks are easily destroyed under repeated mechanical load. CNTs bend and buckle upon the release of loads due to the flexibility of 1D nanostructure, which results in wavy structures in the network (Fig. 2a). The generation of nanotube buckles prevents the restoration of conductive paths in the network after releasing of the strain, leading to permanent loss in network conductivity21,22. GNPs are loosely connected with each other forming some microcracks and micropores in the 2D nanostructure (Fig. 2b). Under plane strain, GNPs have chances to arrange themselves to maintain connection by reducing overlapped area. Therefore, the connection resistance of GNPs does not change obviously under strain and the GF value is relatively small10. Compared with the pristine CNTs network and pristine GNPs network, a GNP-CNT hybrid network is mechanically stronger and more flexible to respond plane strain (Fig. 2c). GNPs are stacking in the dispersion network formed by entangled CNTs, the hybrid network demonstrating properties of both 1D and 2D nanostructure. Due to the weak interactions at the nanotube joints, the networks respond to tensile loads through interfacial sliding between the neighboring nanotubes23. Under the same tensile strain, the CNTs show much smaller deformation (slide instead of bending and buckling) than the pristine CNTs network, demonstrating an enhanced strength and better stability. Besides, the flexibility improves because the resistance change of strain sensor based on the GNP-CNT hybrid thin film is not only due to the fracture or crack propagation of GNPs. Figure 2d shows the measurement scheme of a strain sensor based on GNP-CNT hybrid. As demonstrated in Fig. 2e, from relaxed state (Fig. 2eI) to tensile state (Fig. 2eII), GNPs and CNTs slide by the deformation of the PDMS matrix so that the number of disconnection gradually increases by higher tensile strain, causing the resistance of the strain sensor to increase11.
### Characterization
The morphology and structure of CNT, few-layer graphene, and the GNP-CNT hybrid material were measured using AFM, TEM, and Raman spectroscopy to demonstrate the percolation theory in thin film. The GNP-CNT hybrid material was fabricated with 5 ml GNPs dispersion and 2 ml CNTs dispersion. Figure 3a shows the AFM image of the GNP-CNT hybrid material acquired in a liquid cell. The whiter boxes in Fig. 3a mark the locations of large GNPs. With AFM imaging, we estimated the lateral dimension (50–2500 nm) and the thickness (~ 4.5 nm) of GNPs. The average bundle length and diameter of the CNTs in GNP-CNT hybrid thin films were determined to be 2000 nm and 5 nm, respectively. It can be seen that the CNTs are intertwined with each other in a mixed structure, while GNPs stacking in the hybrid dispersion network. Raman spectrum of few-layer graphene, CNT, and GNP-CNT hybrid materials are presented in Fig. 3b. Figure 3bI presents a strong G band at 1578 cm−1 but rather weak D band at 1349 cm−1, suggesting that the few-layer graphene is mainly constituted of sp2 hybridized carbon24. The 2D band (2714 cm−1 in Fig. 3bI) of few-layer graphene has been regarded as a sensitive indication of the number of layers25,26,27,28,29. The radial breathing mode (RBM) as well as the G and D band on the spectrum (Fig. 3bII) is markers for determining the diameters of the CNT30,31. Focusing on the RBM of the spectrum, the Raman shift at 68.6 cm−1, 157.7 cm−1, 191.2 cm−1 and 270.5 cm−1 respectively correspond to CNTs approximately 3.98 nm, 1.54 nm, 1.25 nm and 0.87 nm in diameter32. Raman spectroscopy unambiguously corroborates the presence of GNP-CNT hybrid material as shown in Fig. 3bIII. The connections between GNPs and CNTs are important for achieving hybrid materials with good performance. Microstructures and morphology of the GNP-CNT hybrid material imaged at different magnifications were investigated by TEM. TEM and HRTEM images of GNP-CNT hybrid material are shown in Fig. 3c,d, respectively. From the TEM image, the junctions between GNPs and CNTs are clearly observed. The HRTEM image (Fig. 3d) confirms that small lattice fringes with an interlayer thickness of 0.34 nm, which belongs to sp2 bonded graphite carbon. The diameter of the carbon nanotubes is about 5 nm, which is consistent with the AFM result. Both images clearly reveal that GNPs and CNTs are uniformly dispersed to form the mixed structures of CNTs entangled network stacking by GPNs in the hybrid dispersion.
The SEM images of surface morphology and the cross-section of the GNP-CNT hybrid thin film were shown in Fig. 4a,b, respectively. CNTs are uniformly dispersed within the network of percolating GNPs as seen in Fig. 4a. The thickness of the hybrid thin film is estimated to be ~ 3.25 μm from Fig. 4b. From the SEM results, it suggests that the CNTs rope entangled network structures stacking mosaic morphology of GNPs in the as-prepared thin film. Figure 4c,d are photographs of a strain sensor made of GNP-CNT hybrid thin film at its original length and with over 50% strain, respectively, showing the stretchability and bendability of the sensor. Figure 4e shows the photographs of the sensors under bending stress with ultra-softness. The sensors can be directly mounted on human skin or attached to complex surfaces with perfect contact and negligible slippage.
### Piezoresistive characteristics analysis
The GNP-CNT hybrid thin film sensor was clamped to a motorized moving controller (WNMC 400 Motion Controller) to characterize its electromechanical behavior. Multiple strain/release cycles with different strain levels were applied to the sensor, while the resistance changes were measured. The gauge factor (GF) was evaluated according to the coupled electrical-cyclic tensile testing results:
$$GF=\frac{\Delta R/{R}_{0}}{\Delta L/L}$$
(1)
$$\Delta R$$ is the relative change in resistance. $${R}_{0}$$ is initial resistance without applying strain. $$\Delta L/L$$ is the displacement variation of the sensing film. The I–V curves (Fig. 5a) at different strain levels indicate that the response of the sensor based on GNP-CNT hybrid thin film to strain is steady, and the resistance (slope of I–V curves) under each applied strain is constant. The slope of the curve under strain conditions is much smaller than that of the curve when no strain is present, indicating a significant increase in the resistance.
When the strain sensor was held at various levels of strain, the resistance changes were recorded to evaluate the stability of output signal. During each period, the signals remained stable without distinct drifts, as shown in Fig. 5b, the Gauge Factors of strain sensor are 116 under 7.5% strain and 197 under 10% strain, respectively. The GF values, calculated to be 10–197, were superior to those of previously reported stretchable sensors (GF: 0.5–69)7,33,34 and some other graphene/CNTs based strain sensors (GF 0.54 at 90% strain, GF 15 at 206% strain)35,36.
Our sensor also showed long-time stability, little hysteresis, and high durability. Figure 5c shows the response of the sensor to 1000 times cyclic loading of 10% strain, indicating its remarkable stability. For comparison, strain sensors based on sole GNPs and sole CNTs thin films were prepared and measured by similar method. Figure 5d compares the GF values of sole CNTs thin film sensors (sprayed with 4 mL or 5 mL CNTs dispersion, sensing layer thickness ~ 2.0 μm or ~ 2.1 μm) with those of sole GNPs thin film sensors (sprayed with 8 mL or 11 mL GNPs dispersion, sensing layer thickness ~ 2.9 μm or ~ 3.1 μm). Figure 5e compares the GF values of GNP-CNT hybrid thin films fabricated with altered GNP/CNTs ratio under different strain levels. The sensitivity of strain sensors fabricated with optimal GNP-CNT volume ratio 5:2 is much better than that of sole CNTs and sole GNPs thin films sensors. Besides, consistent with the mechanism described in Fig. 2, after the experimental strain/release cycles, the resistance of strain sensor made of sole CNTs failed reverting to initial value and that of strain sensor made of CNT/GNP or sole GNPs was able to resume to initial value.
The skin deformation combines stretching and bending37, which could lead to the structural changes of these microcracks and micropores38, thus the film could be used as a skin-mountable strain sensor to detect human motion. In comparison to the traditional strain sensors based on metal foil or silicon, the unique feature of GNP-CNT hybrid thin film strain sensors is their remarkable extensibility and durability20,39. In combination with the high sensitivity and ease of fabrication, these properties make the strain sensor highly promising for applications in human–machine interactions and body motion monitoring. To demonstrate the potential and applicability, the GNP-CNT hybrid thin film strain sensor was mounted on the finger and the front neck to monitor body motion. We fixed the strain sensor on a middle finger to see its resistance response to the bending of the finger (Fig. 6a). When the middle finger slowly bended toward the palm to a certain angle (30°, 45°, and 90°) and subsequently released repeatedly, the finger motion was faithfully registered by the continuous increase and decrease of the resistance. The greater the bending angle is, the more the resistance increases. The resistance almost doubles when the middle finger bends to an angle of 90°. By mounting the strain sensor on human finger, the GNP-CNT hybrid thin film strain sensors could potentially be useful for a broad range of applications in human–machine interactions, such as making phone calls, clicking the mouse, typing on the keyboard, playing piano and remotely controlling the operation of vehicles. Moreover, the strain sensors based on GNP-CNT hybrid thin film can be used for body monitoring. Figure 6b shows a GNP-CNT hybrid thin film strain sensor attached onto skin of the neck to noninvasively monitor the muscle movement during pronouncing ‘a’ and swallowing. The strains induced by the motion of surficial skin along a human neck were clearly sensible with reproducible signals in resistance. The GNP-CNT hybrid thin film strain sensor exhibited high sensitivity and distinct current curves while pronouncing ‘a’ or swallowing. This further demonstrated the excellent performance of the strain sensor in voice recognition.
## Conclusion
In summary, we demonstrate a simple, cost-effective, and scalable approach for the implementation of highly sensitive and stretchable strain sensors based on GNP-CNT hybrid thin films. Strain sensors with high sensitivity (GF ~ 197 under 10% strain), high stretchability (ε ≥ 50%) and a reproducible response over 1000 loading cycles (including stretching and bending) can be achieved by incorporating CNTs into GNPs networks to form 1D-2D hybrid microstructure. The devices can be used for strain and vibration sensing in a variety of applications ranging from human physiological activity monitoring to soft robotics.
## Experimental methods
### Ethical approval
The study was approved by the Ethics Committee of University of Electronic Science and Technology of China. All procedures performed in this study involving human participants were carried out in compliance with the Declaration of Helsinki. The informed consent was obtained from all the human participants in the study.
### Materials and chemicals
Graphite powder (≥ 325 mesh, 99.95% metals basis) was purchased from Aladdin Biochemical Technology Co. Ltd. TNWDIS (90 wt%) and single-walled carbon nanotubes with an average length of 5–30 μm and average diameter of 1–2 nm were purchased from Chengdu Organic Chemicals Co. Ltd, Chinese Academy of Sciences. N-Methylpyrrolidone (NMP, 99.13 (MW), analytical pure) was purchased from Kelong Chemical Reagents Factory. The PDMS monomer and curing agent were provided by Sylgrad-184, Dow Corning. All the materials and chemicals were used as received without further purification.
### Preparation of GNPs dispersion and CNTs dispersion
Graphene dispersion was obtained from exfoliation of natural graphite powder based on Chiang’s liquid exfoliation method40,41. Graphite dispersions were prepared by dispersing natural graphite in N-Methylpyrrolidone (with the mass fraction 0.2 of deionized water) at an initial concentration of 5 mg/ml. These dispersions were sonicated in a pulse operation mode (on 2 s, off 3 s) with the power set at 60 W for 15 h, followed by standing for 24 h to allow the formation of any unstable aggregates in the bottom. The dispersion was then centrifuged at a speed of 3000 rpm for 15 min. The supernatant obtained was the graphene dispersion. The CNTs dispersion (0.3 mg/ml) was prepared by sonicating a mixture of 30 mg CNTs and 105 mg TNWDIS in 100 g deionized water. The sonication conditions, except the sonication duration (2.5 h), were the same as those of preparing GNPs dispersion. The dispersion was then centrifuged at a speed of 2000 rpm for 15 min. The supernatant obtained was the CNTs dispersion.
### Fabrication of GNP-CNT hybrid nanomaterials
The GNP-CNT hybrid nanomaterials were prepared by introducing CNTs solution into GNPs dispersion. To make the hybrid mix uniformly, the mixture was sonicated for 2.5 h under stirring with the speed of 1000 rpm/min to form a homogeneous suspension.
### Fabrication of strain sensor
The strain sensor was fabricated by spraying 10 ml diluted GNP-CNT hybrid mixed dispersion (7 ml dispersion with 3 ml DI water) onto an elastomeric polydimethylsiloxane (PDMS) substrate. Spray coating was carried out with a commercial airbrush (ACG model HD-130, Taiwan). In the spray process, the PDMS substrate was heated by a hot plate set at 90 °C to accelerate the solvent evaporation and facilitate thin film formation. The GNP-CNT hybrid coating film was gradually formed by controlling the spraying speed to balance the spraying and solvent evaporation time. For all the thin film sensors, silver paste was applied on the two ends of the film to form a two-electrode configuration for the afterward sensing performance evaluation.
### Characterization
Scanning electron microscopy (SEM) was performed with Hitachi S-4800 at 5.0 kV for examining the morphologies and thicknesses of different sensors. The tapping mode AFM (Veeco Digital Instruments by Bruker Dimension D3100) was used to acquire images of GNP-CNT hybrid nanomaterials deposited on silicon wafer for thickness measurement. Structural and morphological characterization of the material was performed on a FEI Tecnai G2 F20 TEM operated at 200 kV of accelerating voltage. The two ends of the sensor were mounted on a customized micrometer moving stage, and the sensing film can be bent and stretched by moving the stage closer. Electrical properties of the sensor were collected by a source measurement unit (SMU) instrument (Keithley 2400).
## References
1. Rogers, J. et al. Materials and mechanics for stretchable electronics. Science 327, 1603–1607. https://doi.org/10.1126/science.1182383 (2010).
2. Gupta, S. et al. Ultra-thin chips for high-performance flexible electronics. npj Flexible Electron. 2, 8. https://doi.org/10.1038/s41528-018-0021-5 (2018).
3. Khan, S. et al. Recent advances of conductive nanocomposites in printed and flexible electronics. Smart Mater. Struct. 26, 8. https://doi.org/10.1088/1361-665X/aa7373 (2017).
4. Xue, J. et al. Nanowire-based transparent conductors for flexible electronics and optoelectronics. Sci. Bull. 62(2), 143–156. https://doi.org/10.1016/j.scib.2016.11.009 (2017).
5. Khan, H. et al. Sensitive and flexible polymeric strain sensor for accurate human motion monitoring. Sensors 18, 418. https://doi.org/10.3390/s18020418 (2018).
6. Zhou, H. et al. Wearable, flexible, disposable plasma-reduced graphene oxide stress sensors for monitoring activities in austere environments. ACS Appl. Mater. Interfaces. https://doi.org/10.1021/acsami.8b22673 (2019).
7. Roh, E. et al. Stretchable, transparent, ultrasensitive, and patchable strain sensor for human-machine interfaces comprising a nanohybrid of carbon nanotubes and conductive elastomers. ACS Nano 9, 6252–6261. https://doi.org/10.1021/acsnano.5b01613 (2015).
8. Li, X. et al. Highly sensitive flexible tactile sensors based on microstructured multiwall carbon nanotube arrays. Scripta Mater. 129, 61–64. https://doi.org/10.1016/j.scriptamat.2016.10.037 (2017).
9. Quan, Y. et al. Highly sensitive and stable flexible pressure sensors with micro-structured electrodes. J. Alloy. Compd. 699, 824–831. https://doi.org/10.1016/j.jallcom.2016.12.414 (2017).
10. Shi, G. et al. Highly sensitive, wearable, durable strain sensors and stretchable conductors using graphene/silicon rubber composites. Adv. Funct. Mater. 26, 7614–7625. https://doi.org/10.1002/adfm.201602619 (2016).
11. Amjadi, M. et al. Highly stretchable and sensitive strain sensor based on silver nanowire-elastomer nanocomposite. ACS Nano 8, 5154–5163. https://doi.org/10.1021/nn501204t (2014).
12. Yan, C. et al. Highly stretchable piezoresistive graphene-nanocellulose nanopaper for strain sensors. Adv. Mater. 26, 2022–2027. https://doi.org/10.1002/adma.201304742 (2014).
13. Zhou, J. et al. Flexible piezotronic strain sensor. Nano Lett. 8, 3035–3040. https://doi.org/10.1021/nl802367t (2008).
14. Shi, J. et al. Graphene reinforced carbon nanotube networks for wearable strain sensors. Adv. Funct. Mater. 26, 2078–2084. https://doi.org/10.1002/adfm.201504804 (2016).
15. Park, S. et al. Highly flexible wrinkled carbon nanotube thin film strain sensor to monitor human movement. Adv. Mater. Technol. https://doi.org/10.1002/admt.201600053 (2016).
16. Michelis, F. et al. Highly reproducible, hysteresis-free, flexible strain sensors by inkjet printing of carbon nanotubes. Carbon 95, 1020–1026. https://doi.org/10.1016/j.carbon.2015.08.103 (2015).
17. Wang, Y. et al. Wearable and highly sensitive graphene strain sensors for human motion monitoring. Adv. Funct. Mater. 24, 4666–4670. https://doi.org/10.1002/adfm.201400379 (2014).
18. Liu, Q. et al. High-quality graphene ribbons prepared from graphene oxide hydrogels and their application for strain sensors. ACS Nano 9, 12320–12326. https://doi.org/10.1021/acsnano.5b05609 (2015).
19. Gong, T. et al. Highly responsive flexible strain sensor using polystyrene nanoparticle doped reduced graphene oxide for human health monitoring. Carbon 140, 286–295. https://doi.org/10.1016/j.carbon.2018.09.007 (2018).
20. Wang, Y. et al. Super-elastic graphene ripples for flexible strain sensors. ACS Nano 5, 3645–3650. https://doi.org/10.1021/nn103523t (2011).
21. Cai, L. et al. Highly transparent and conductive stretchable conductors based on hierarchical reticulate single-walled carbon nanotube architecture. Adv. Funct. Mater. 22, 5238–5244. https://doi.org/10.1002/adfm.201201013 (2012).
22. Zhang, Y. Y. et al. Polymer-embedded carbon nanotube ribbons for stretchable conductors. Adv. Mater. 22, 3027–3031. https://doi.org/10.1002/adma.200904426 (2010).
23. Kanoun, O. et al. Flexible carbon nanotube films for high performance strain sensors. Sensors 14, 10042–10071. https://doi.org/10.3390/s140610042 (2014).
24. Chen, J. et al. A binary solvent system for improved liquid phase exfoliation of pristine graphene materials. Carbon 94, 405–411. https://doi.org/10.1016/j.carbon.2015.07.006 (2015).
25. Hernandez, Y. et al. High-yield production of graphene by liquid-phase exfoliation of graphite. Nat. Nanotechnol. 3, 563–568. https://doi.org/10.1038/nnano.2008.215 (2008).
26. Kim, K. S. et al. Large-scale pattern growth of graphene films for stretchable transparent electrodes. Nature 457, 706–710. https://doi.org/10.1038/nature07719 (2009).
27. Ferrari, A. C. et al. Raman spectroscopy as a versatile tool for studying the properties of graphene. Nat. Nanotechnol. 8, 235. https://doi.org/10.1038/nnano.2013.46 (2013).
28. Tung, V. C. et al. High-throughput solution processing of large-scale graphene. Nat. Nanotechnol. 4, 25. https://doi.org/10.1038/nnano.2008.329 (2008).
29. Ferrari, A. C. et al. Raman spectrum of graphene and graphene layers. Phys. Rev. Lett. 97, 187401. https://doi.org/10.1103/PhysRevLett.97.187401 (2006).
30. Dresselhaus, M. S. et al. Raman spectroscopy of carbon nanotubes. Phys. Rep. 409, 47–99. https://doi.org/10.1016/j.physrep.2004.10.006 (2005).
31. Saito, R. et al. Raman spectroscopy of graphene and carbon nanotubes. Adv. Phys. 60, 413–550. https://doi.org/10.1080/00018732.2011.582251 (2011).
32. Bachilo, S. M. et al. Structure-assigned optical spectra of single-walled carbon nanotubes. Science 298, 2361–2366. https://doi.org/10.1126/science.1078727 (2002).
33. Trung, T. Q. & Lee, N. E. Flexible and stretchable physical sensor integrated platforms for wearable human-activity monitoringand personal healthcare. Adv. Mater. 28, 4338–4372. https://doi.org/10.1002/adma.201504244 (2016).
34. Lipomi, D. J. et al. Skin-like pressure and strain sensors based on transparent elastic films of carbon nanotubes. Nat. Nanotechnol. 6, 788–792. https://doi.org/10.1038/nnano.2011.184 (2011).
35. Zhao, X. et al. Highly conductive multifunctional rGO/CNT hybrid sponge for electromagnetic wave shielding and strain sensor. Adv. Mater. Technol. 4(9), 1900443. https://doi.org/10.1002/admt.201900443 (2019).
36. Pan, S. et al. A highly stretchable strain sensor based on CNT/graphene/fullerene-SEBS. RSC Adv. 10, 11225–11232. https://doi.org/10.1039/D0RA00327A (2020).
37. Jung, S. et al. Reverse-micelle-induced porous pressure-sensitive rubber for wearable human-machine interfaces. Adv. Mater. 26, 4825–4830. https://doi.org/10.1002/adma.201401364 (2014).
38. Zhang, Q. et al. Highly sensitive and stretchable strain sensor based on Ag@CNTs. Nanomaterials 7(12), 424. https://doi.org/10.3390/nano7120424 (2017).
39. Luo, S. & Liu, T. Structure–property–processing relationships of single-wall carbon nanotube thin film piezoresistive sensors. Carbon 59, 315–324. https://doi.org/10.1016/j.carbon.2013.03.024 (2013).
40. Manna, K. et al. Graphene and graphene-analogue nanosheets produced by efficient water-assisted liquid exfoliation of layered materials. Carbon 105, 551–555. https://doi.org/10.1016/j.carbon.2016.04.065 (2016).
41. Luo, S. & Liu, T. SWCNT/graphite nanoplatelet hybrid thin films for self-temperature-compensated, highly sensitive, and extensible piezoresistive sensors. Adv. Mater. 25, 5650–5657. https://doi.org/10.1002/adma.201301796 (2013).
## Acknowledgements
This work was supported by, National Natural Science Foundation of China (Nos. 61971108, 61804023, 61905035), Sichuan Provincial Department of Science and Technology (2020YJ0015, 2019YJ0198, 2019YFG0222), Natural Science Foundation of Tianjin (18JCYBJC41500) and LY acknowledged China National Funds for Distinguished Young Scientists (No. 61825102).
## Author information
Authors
### Contributions
Y. Li and Q.A. planned and supervised the study. Y. Li and L.M. fabricated the strain sensor. SEM image and AFM image were acquired by Q.A. and Y. Liu. J.G. and T.G. carried out the Raman spectroscopy measurements and analysis. W.H., Y. Lin and X.Z. provided materials, laboratory tools and facility. Y. Li and Q.A. carried out the electromechanical experiment, the experimental data analysis and image graphics. The manuscript writing and preparation was done by Y. Li and W.H. All authors read and commented on the manuscript.
### Corresponding authors
Correspondence to Wen Huang or Xiaosheng Zhang.
## Ethics declarations
### Competing interests
The authors declare no competing interests.
### Publisher's note
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
## Rights and permissions
Reprints and Permissions
Li, Y., Ai, Q., Mao, L. et al. Hybrid strategy of graphene/carbon nanotube hierarchical networks for highly sensitive, flexible wearable strain sensors. Sci Rep 11, 21006 (2021). https://doi.org/10.1038/s41598-021-00307-5
• Accepted:
• Published:
• DOI: https://doi.org/10.1038/s41598-021-00307-5 | 2022-07-01 17:34:40 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 2, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.42980608344078064, "perplexity": 8114.796637564584}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-27/segments/1656103943339.53/warc/CC-MAIN-20220701155803-20220701185803-00005.warc.gz"} |
## Friday, August 23, 2013
### AVR Code Size Reduction
Atmel's app note AVR035 (pdf) has some tips on code size reduction. Here's what I learned in the real world, reducing my flash-constrained A2D code from 4.2k down to 1.4k, which matters because the onboard ATtiny44As only have 4K of flash.
Side note: My A2D boards convert Sharp Rangers or other analog signals to digital -- I2C, and soon SPI and Serial, with oversampling, decimation, and filtering to reduce noise and increase resolution.
The biggest saving by far? Well, sorry, you have to read all the way to the end to find out. The suspense must be killing you... ;)
Meanwhile, here's a subset of the AVR035 tips that helped me save some code space, from a gcc, real-world perspective, reproduced here for educational purposes. I use avr-gcc rather than the IAR toolchain that AVR035 (pdf) was written for, and avr-gcc provides a few automatic optimizations.
1. Compile with full size optimization. In gcc, use -Os
2. Use local variables whenever possible. This saved a few bytes here and there.
3. Use the smallest applicable data type. Use unsigned if applicable. This definitely helped shave some space. With an 8-bit micro it takes more instructions to deal with > 8-bit data.
4. Use for(;;) { } for eternal loops. No effect. Apparently gcc knows how to optimize while(1){}
5. Use do { } while(expression) if applicable. Didn't seem to matter in gcc the one time I tried this.
6. Use descending loop counters and pre-decrement if applicable. Didn't seem to matter in gcc the one time I tried this.
7. Declare main as C_task if not called from anywhere in the program. This is an IAR-ism. The avr-gcc compiler automatically ensures that main doesn't return a value.
8. Use macros instead of functions for tasks that generates less than 2-3 lines assembly code. They're right. I tried converting some macros to functions in the TWI slave library I was using. It added size.
9. Code reuse is intra-modular. Collect several functions in one module (i.e., in one file) to increase code reuse factor. I don't know for sure if this helped but I program this way normally for maintainability.
I started out at 4.2k and doing these things reduced code size to around 3.1k.
I encourage you to read the document as some of the tips I didn't mention may also help you.
## Biggest Savings?
The biggest savings by far was to avoid floating point operations at all costs. This saved me in a prior project as well. Floating point on a Tiny is just murder on flash memory.
Take a look at your map file (avr-gcc -Wl,-Map,MyProject.map in your makefile) and find any unnecessary or unexpected library calls. In my case I saw stuff like this:
...
A2Di2c.o (usiTwiSlaveInit)
/usr/local/lib/gcc/avr/4.7.2/../../../../avr/lib/avr25/libm.a(floatsisf.o)
/usr/local/lib/gcc/avr/4.7.2/../../../../avr/lib/avr25/libm.a(pow.o)
/usr/local/lib/gcc/avr/4.7.2/../../../../avr/lib/avr25/libm.a(exp.o)
/usr/local/lib/gcc/avr/4.7.2/../../../../avr/lib/avr25/libm.a(pow.o) (exp)
/usr/local/lib/gcc/avr/4.7.2/../../../../avr/lib/avr25/libm.a(fp_inf.o)
/usr/local/lib/gcc/avr/4.7.2/../../../../avr/lib/avr25/libm.a(exp.o) (__fp_inf)
/usr/local/lib/gcc/avr/4.7.2/../../../../avr/lib/avr25/libm.a(fp_nan.o)
/usr/local/lib/gcc/avr/4.7.2/../../../../avr/lib/avr25/libm.a(pow.o) (__fp_nan)
/usr/local/lib/gcc/avr/4.7.2/../../../../avr/lib/avr25/libm.a(fp_powser.o)
/usr/local/lib/gcc/avr/4.7.2/../../../../avr/lib/avr25/libm.a(exp.o) (__fp_powser)
...
Lots of libm (math library) calls. I had carelessly made a call to pow() which converts to floating point and thus adds a ton of space. Instead I converted to a simple multiplication loop using integer math. Size dropped from 3.1k down to 1.4k. Wow.
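For reference, the replacement was essentially this kind of loop. A minimal sketch (the function name and types here are illustrative, not my exact code):

```c
#include <stdint.h>

/* Integer power by repeated multiplication: no libm, no floats.
 * Assumes base^exp fits in 32 bits. */
static uint32_t ipow(uint16_t base, uint8_t exp)
{
    uint32_t result = 1;
    while (exp--)
        result *= base;
    return result;
}
```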
1. I have some more ideas described here:
http://nerdralph.blogspot.ca/2013/12/trimming-fat-from-avr-gcc-code.html
2. p.s. it would be nice if you turned off captcha for comments from non-anonymous posters.
3. On my project nearing 16K on an ATMega16, I had a line stating:
OCR1A=duty*2.55; | 2018-10-17 07:00:48 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.22606657445430756, "perplexity": 4822.86326801421}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-43/segments/1539583511063.10/warc/CC-MAIN-20181017065354-20181017090854-00436.warc.gz"} |
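That single line is enough to pull in the soft-float multiply routines. An integer-only rewrite, as a sketch (assuming duty holds a percentage from 0 to 100):

```c
#include <avr/io.h>
#include <stdint.h>

void set_duty(uint8_t duty)  /* duty in percent, 0..100 */
{
    /* 2.55 == 255/100; do it in integer math, +50 rounds to nearest. */
    OCR1A = ((uint16_t)duty * 255 + 50) / 100;
}
```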
# Revision history
The maximum depth of the trees is limited by the available memory, as can be seen from the following code snippet:
CvDTreeNode** CvGBTrees::GetLeaves( const CvDTree* dtree, int& len )
{
    len = 0;
    // One pointer per potential leaf: 2^max_depth entries are allocated up front.
    CvDTreeNode** leaves = new pCvDTreeNode[(size_t)1 << params.max_depth];
    leaves_get(leaves, len, const_cast<pCvDTreeNode>(dtree->get_root()));
    return leaves;
}
Therefore, the maximum depth of the trees should be kept small. According to Hastie et al. (The Elements of Statistical Learning), a depth of 2 (decision stumps) will be sufficient in many applications, and it is unlikely that the required depth exceeds 10.
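To put the memory cost in numbers: with 8-byte pointers, max_depth = 20 already reserves 2^20 × 8 B = 8 MB just for this leaf array, and depths in the 30s would exhaust memory outright. A usage sketch for the old C API (parameter values here are illustrative, not prescribed):

```cpp
#include <opencv2/ml/ml.hpp>

// Keep max_depth small (e.g., 2 for decision stumps) so the
// 2^max_depth leaf buffer in GetLeaves() stays tiny.
CvGBTreesParams params(CvGBTrees::SQUARED_LOSS,
                       /*weak_count=*/200,
                       /*shrinkage=*/0.1f,
                       /*subsample_portion=*/0.8f,
                       /*max_depth=*/2,
                       /*use_surrogates=*/false);
```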
## Cryptology ePrint Archive: Report 2013/676
Automatic Security Evaluation and (Related-key) Differential Characteristic Search: Application to SIMON, PRESENT, LBlock, DES(L) and Other Bit-oriented Block Ciphers
Siwei Sun, Lei Hu, Peng Wang, Kexin Qiao, Xiaoshuang Ma, Ling Song
Abstract: We propose two systematic methods to describe the differential property of an S-box with linear inequalities based on logical condition modelling and computational geometry respectively. In one method, inequalities are generated according to some conditional differential properties of the S-box; in the other method, inequalities are extracted from the H-representation of the convex hull of all possible differential patterns of the S-box. For the second method, we develop a greedy algorithm for selecting a given number of inequalities from the convex hull. Using these inequalities combined with Mixed-integer Linear Programming (MILP) technique, we propose an automatic method for evaluating the security of bit-oriented block ciphers against the (related-key) differential attack, and several techniques for obtaining tighter security bounds. We successfully prove that the 24-round PRESENT-80 is secure enough to resist against standard related-key differential attacks based on differential characteristic, and the probability of the best related-key differential characteristic of the full LBlock is upper bounded by $2^{-60}$. These are the tightest security bounds with respect to the related-key differential attack published so far for PRESENT-80 and LBlock.
Moreover, we present a new tool for finding (related-key) differential characteristics automatically for bit-oriented block ciphers. Using this tool, we obtain new single-key or related-key differential characteristics for SIMON48, LBlock, DESL and PRESENT-128, which cover larger numbers of rounds or have larger probability than all previously known results. The methodology presented in this paper is generic, automatic and applicable to many bit-oriented block ciphers.
Category / Keywords: Automatic cryptanalysis, Related-key differential attack, Mixed-integer Linear Programming, Convex hull
Original Publication (with major differences): IACR-ASIACRYPT-2014
Date: received 22 Oct 2013, last revised 12 Sep 2014
Contact author: sunsiwei at iie ac cn
Available format(s): PDF | BibTeX Citation
Note: Add the example source code.
Short URL: ia.cr/2013/676
Complexity of trajectories in rectangular billiards. (English) Zbl 0839.11006
The Sturmian sequences are the binary sequences that are a coding of a billiard trajectory in a $$(2D)$$ square, where the vertical sides are coded by 1 and the horizontal sides by 0. In particular, the (block) complexity of a Sturmian sequence is given by $$\rho (n) = n + 1$$, where $$\rho (n)$$ is the number of factors (subblocks) of the sequence with length $$n$$.
What happens if one plays billiard in a cube or hypercube? A conjecture of Rauzy stated that the complexity of the trajectories for the cubic billiards is given by $$\rho (n) = n^2 + n + 1$$. This conjecture has been proved by P. Arnoux, C. Mauduit, I. Shiokawa and J.-I. Tamura who published two papers [Bull. Soc. Math. Fr. 122, No. 1, 1-12 (1994; Zbl 0791.58034) and Tokyo J. Math. 17, No. 1, 211-218 (1994; Zbl 0814.11014)]. These four authors also conjectured a general formula for the hypercube, the formula presenting a mysterious symmetry in $$n$$ (the length of blocks) and $$d-1$$ (where $$d$$ is the dimension).
The author of the paper under review solves the question completely, stating in particular that, for reasonable starting angles, one has in dimension $$d$$ $\rho_d (n) = \sum^{\min (d - 1,\, n)}_{k = 0} k! \binom{d - 1}{k} \binom{n}{k}.$
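As a quick consistency check (for $$n \ge 2$$), substituting $$d = 3$$ into this formula recovers the cubic billiard result conjectured by Rauzy: $\rho_3 (n) = 1 + 2n + 2\binom{n}{2} = 1 + 2n + n(n-1) = n^2 + n + 1.$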
##### MSC:
11B83 Special sequences and polynomials
68R15 Combinatorics on words
37E99 Low-dimensional dynamical systems
##### References:
[1] Arnoux, P., Mauduit, C., Shiokawa, I., Tamura, J.-I.: Complexity of sequences defined by billiard in the cube. Bull. Soc. Math. France 122, 1–12 (1994)
[2] Bruckstein, A. M.: Self-similarity properties of digitized straight lines. In: Vision Geometry, Proc. AMS Spec. Sess., 851st Meet., Hoboken/NJ (USA) 1989, Contemp. Math. 119, 1–20 (1991)
[3] Ford, L. R., Fulkerson, D. R.: Flows in Networks. Princeton, NJ: Princeton Univ. Press (1962)
[4] Lunnon, W. F., Pleasants, P. A. B.: Characterization of two-distance sequences. J. Aust. Math. Soc., Ser. A 53, No. 2, 198–218 (1992)
[5] Morse, M., Hedlund, G. A.: Symbolic dynamics II. Sturmian trajectories. Am. J. Math. 62, 1–42 (1940)
[6] Rauzy, G.: Mots infinis et arithmétique. In: Automata on Infinite Words, Lect. Notes in Comp. Science 192, Berlin, Heidelberg, New York: Springer (1985), pp. 165–171
[7] Stolarsky, K. B.: Beatty sequences, continuous fractions and certain shift operators. Can. Math. Bull. 19, 473–482 (1976)
[8] Tabachnikov, S.: Billiards. Preprint (1994)
## Abstract
Biologics are significant drivers of globally escalating healthcare costs. Biosimilars have the potential to offer cost savings with efficacy and safety comparable to innovator products, and to increase access to treatment for more patients. This study aimed to improve understanding and perception of the biosimilar concept. It also describes the pharmacoeconomic impact of biosimilars in oncology and formulary considerations for substituting oncology biosimilars for their originators in major oncology centers in Saudi Arabia. A biosimilar is a biological product that is similar to a reference biopharmaceutical product. As the manufacturing process hinders the ability to identically replicate the structure of the original product, a biosimilar cannot be described as an absolute equivalent of the original medication. Regulatory agencies such as the United States Food and Drug Administration, the European Medicines Agency, and the Saudi Food and Drug Authority have approved several biosimilars of oncology biologics. The experience of biosimilar use in Europe and the USA provides valuable insights into the use of biosimilars. The widespread use of biosimilars has the potential to reduce healthcare expenditure and improve access without compromising patient outcomes. There is a need for increasing awareness about biosimilars to improve acceptance rates. The use of biosimilar filgrastim in the Ministry of National Guard Health Affairs, Saudi Arabia, has resulted in a significant annual cost saving. It was proposed that further substitution and switching to biosimilars in oncology would lead to major savings in resources.
## Introduction
Biosimilars are biological medicines that contain a highly similar version of the active substance of an approved biologic reference product. The availability of biosimilars might provide an opportunity to lower healthcare expenditures as a result of the inherent price competition with their reference product. Understanding how biosimilar cancer drugs are regulated, approved, and paid for, as well as their impact in a value-based care environment is essential for physicians and other stakeholders in oncology.[1]
A biosimilar is a biologic product that is similar to a reference biopharmaceutical product. The manufacturing process of a biosimilar hinders the ability to identically replicate the structure of the original product, and therefore, it cannot be described as an absolute equivalent of the original medication. The currently available technology does not allow an accurate copy of complex molecules, but it does allow the replication of similar molecules with the same activity.[2]
New agents for the treatment and supportive care of cancer have markedly improved therapeutic options and outcomes for many malignancies. Biologics include monoclonal antibodies targeted to critical pathways involved in cancer pathogenesis, and growth factors to reduce or ameliorate treatment-related hematological toxicity. Unfortunately, access to potentially lifesaving biologics is limited in many areas of the world.[3–5] As the patent expiry of several drugs approaches, there has been intense interest in developing biosimilar agents to introduce cost savings for healthcare systems and to widen global access to key biological therapies.[3,4,6] A biosimilar drug is a biological product that is highly similar, but not identical, to a licensed biological product (the reference or originator product).[7–9] Unlike small-molecule generic drugs that are typically chemically synthesized and easy to replicate, it is impossible to make exact copies of reference products because biosimilars (as biologics) are large and highly complex molecules produced in living cells. Structural differences from the reference product may arise because of variations in post-translational modification (such as glycosylation patterns), which could have an impact on drug efficacy or safety.[7–9] The development of biosimilars, therefore, involves extensive evaluation and a detailed, comprehensive manufacturing process to ensure that there are no clinically meaningful differences in purity, safety, or potency.[7–9] As is the case for any new therapeutic agent, the evaluation process and approval requirements for a proposed biosimilar may differ between regulatory agencies, leading to differential access based on geographic location.
Extrapolation is the approval of a biosimilar for use in an indication held by the originator biologic not directly studied in a comparative clinical trial with the biosimilar. Extrapolation is a scientific rationale that bridges all the data collected (i.e., totality of the evidence) from one indication for the biosimilar product to all the indications originally approved for the originator.[10] United States (US) Food and Drug Administration (FDA) and the current World Health Organization (WHO) guidance allow the use of clinical efficacy and safety data for one indication to be extrapolated to other indications for the reference biologic. In general, guidelines suggest that extrapolation of data may be allowed for biosimilars as long as sufficient justification can be provided for the new indication (e.g., similar anticipated mechanism of action for the biosimilar) and a rationale for similar pharmacokinetics, efficacy, safety, and immunogenicity can be provided for the new indication target population. This is similar to the existing WHO guidance on extrapolation of clinical data. Examples from the European experience have shown that data for one indication of an innovator may be reasonably extrapolated to another.[11]
Different regulatory agencies such as the US FDA, the European Medicines Agency (EMA), and the Saudi Food and Drug Authority (SFDA) have approved several biosimilars of oncology biologics, and more are expected to be approved in the near future. So far, the EMA has approved 31 biosimilars, the US FDA has approved 13 biosimilars, and the SFDA has approved 4 biosimilars of oncology biologicals. SFDA-listed prices of biosimilars are significantly lower than the prices of their originators. It is anticipated that the SFDA will approve biosimilars of trastuzumab, rituximab, and bevacizumab in the near future. The lists of oncology biosimilars approved by the EMA [Table 1], US FDA [Table 2], and SFDA [Table 3] are given below.[12–14]
Table 1: List of EMA-approved oncology biosimilars[12]
Table 2: List of US FDA-approved oncology biosimilars, Purple Book[13]
Table 3: List of SFDA-approved oncology biosimilar agents[14]
This study describes the pharmacoeconomic impact of biosimilars in oncology and formulary considerations for substituting oncology biosimilars for their originators in major oncology centers in the Kingdom of Saudi Arabia (KSA). It also delineates the challenges posed to biosimilars by the approval of second-generation biologicals and how to address these challenges. This study also emphasizes the need for rigorous pharmacovigilance efforts and naming strategies. The Ministry of National Guard Health Affairs (MNGHA) has recommended specific naming strategies for biosimilars to enable effective pharmacovigilance monitoring of biologics and biosimilars.
## Pharmacovigilance of Biosimilars in Oncology
There are still some concerns regarding the long-term evaluation of biosimilars, particularly the limited experience with these products in terms of efficacy, safety, and immunogenicity at the time of approval. For this reason, pharmacovigilance should be rigorous, as it is an important public health concern. Ultimately, only clinical trials and effective post-marketing pharmacovigilance will provide definitive evidence that a biosimilar is comparable with the reference product in terms of efficacy and safety. The aim of clinical trials with trastuzumab biosimilars was to show equivalence, not patient benefit, as the latter was already shown with brand trastuzumab.[15] Now that biosimilars of trastuzumab, rituximab, and bevacizumab have been approved, several challenging issues need to be addressed, such as maintaining appropriate pharmacovigilance, extrapolating across indications, and automatic substitution and switching. No consensus has yet been reached in any of these areas.
Clinical testing preapproval may not identify all possible adverse events (AEs) with most biologics, including biosimilars. An evaluation of clinical safety therefore is continued in the post-marketing setting. WHO guidance provides recommendations for post-marketing safety reports for product tolerability, and such reports include a scientific evaluation of frequency/causality of AEs. WHO also recommends that following approval, the manufacturers have a system in place to detect and assess, understand, and prevent any potentially drug-related AEs. This system, referred to as pharmacovigilance, also provides for notification regarding the occurrence of such AEs in whatever countries the product may be marketed. The goal of a post approval pharmacovigilance plan is to identify and understand, as fully as possible, the frequency and nature of AEs associated with a specific product, including potential risk factors for such AEs.[11]
To address safety considerations, the EMA mandates post approval monitoring, as well as pharmacovigilance plans for biologic drugs, including biosimilars.[16] In addition, WHO and EMA recommend that if, based on clinical experience, any additional specific safety monitoring or pharmacovigilance plan has been required for the reference biologic, or its specific product class (e.g., erythropoietin stimulating agents), the same plan should be applied to the biosimilar. Likewise, if additional concerns (e.g., increased immunogenicity of the biosimilar) have arisen during the evaluation of the biosimilar product, these also may be evaluated through appropriate safety monitoring.[12]
US FDA guidance on Good Pharmacovigilance Practice considers routine spontaneous AE reporting to be sufficient post-marketing surveillance for products for which no safety risks have been identified pre- or postapproval, and which are used in adequately studied populations. The US FDA considers a specific pharmacovigilance plan appropriate, however, in the event that the at-risk population needs additional study, or if safety risks have been identified either pre- or postapproval. As defined by existing US FDA guidance, such a pharmacovigilance plan could include additional measures beyond routine reporting, such as expedited reporting of serious AEs, active surveillance for specific AEs, creation of product registries, pharmacoepidemiologic studies, or additional clinical trials.[20]
### Nomenclature and product labeling considerations

Naming is an important consideration when developing regulatory policies for biosimilars because of its potential impact on physician prescribing or patient bias, interchangeability, as well as pharmacovigilance. It is important that biosimilars have names that make them readily distinguishable from the innovator biologic (as well as from other biosimilar products).[17,18] This is necessary to make certain that AEs that occur in the post-market setting can be readily and correctly matched to a specific product.[17,19] The US FDA published its Nonproprietary Naming of Biological Products guidance for industry in January 2017. With the introduction of more biological products, the US FDA believes it is important to encourage routine use of designated suffixes in ordering, prescribing, dispensing, recordkeeping, and pharmacovigilance practices for biological products, irrespective of their licensure pathway and date of licensure. The designated suffix will provide a consistent, readily available, and recognizable mechanism for patients and healthcare professionals, including providers and pharmacists, to correctly identify these products. The US FDA believes it is likely that US FDA-designated suffixes will be used routinely when identifying, describing, and recording use of biological products if such suffixes are present in the proper names of all biological products licensed under the Public Health Service (PHS) Act.[20]
Some position statements suggest the International Nonproprietary Name (INN) system should not be used to prescribe biologic drugs.[21] One of the reasons for this is that INN nomenclature with biosimilars can lead to problems, for example, if some countries allow pharmacists to auto-substitute a less-expensive drug having the same INN as its reference product.[22] Instead, naming according to product brand has been recommended to enable better pharmacovigilance of biosimilars, so specific events can be associated with the correct product and manufacturer.[19,21]
Corporate Pharmacy and Therapeutic committee at MNGHA has approved a naming strategies policy for biosimilars in MNGHA formulary. It has recommended to use brand names to be included in computerized prescribing order entry in Health Information System in addition to the generic name (INN) of the drug to allow tracking for pharmacovigilance monitoring. The biosimilar product is identified as a “biosimilar” in the order entry screen by adding the term (Biosimilar) to the product's name. Biosimilars have a different formulary codes than the reference product. Other hospitals in the country can also use the same naming strategy for effective pharmacovigilance monitoring.
Healthcare practitioners in KSA should be encouraged to engage effectively in pharmacovigilance efforts and monitor and report AEs, efficacy concerns, immunogenicity concerns, and medication errors associated with biosimilars to SFDA. Healthcare practitioners should document correct attribution of safety event, for example, what was ordered vs. what did the patient receive in maintenance of electronic medical record, bar code administration, and medication reconciliation (during transition of care).
### Challenges faced with biosimilars due to approval of second-generation biologicals
The emergence of second-generation biologics (biologics that improve on existing biologics through pegylation, alternative formulations, or other means) may affect the value of not only first-generation reference biologics, but also their biosimilars. As research and development of biologics in the oncology setting continues, newer, second-generation biologic drugs may offer different clinical properties compared with currently approved reference biologics.[23,29] They may include new formulations, different efficacy profiles and/or dosing regimens, or reduced immunogenicity. A second-generation biologic may have an improved efficacy and/or safety profile, but if the efficacy and safety of a given second-generation drug are comparable to those of the first-generation drug or its biosimilar, a cost-minimization analysis (CMA) could be performed to identify the most economical solution for patients and payers. In contrast, cost-effectiveness comparison analyses could be performed with novel biologics that have different efficacy and/or safety profiles relative to first-generation products or their biosimilars. The results of pharmacoeconomic analyses that incorporate second-generation biologic drugs may affect the value that biosimilars of first-generation reference biologic drugs offer patients with cancer and healthcare providers. This may include the extent of financial and opportunity costs offset by the emergence of these therapies. For example, if a second-generation biologic has improved efficacy, the opportunity for the patient to have a better outcome would possibly negate its higher cost. In addition, the emergence of second-generation biologic drugs may affect the drug acquisition prices for first-generation reference biologic drugs and their biosimilars.[23] Manufacturers of second-generation biologics have extended their patent protection for another 15–20 years.
## Summary and Conclusion
Biosimilar is a biologic product that is highly similar to the reference product, notwithstanding minor differences in clinically inactive components. There are no clinically meaningful differences between the biosimilar and the reference product in terms of safety, purity, and potency.
Corporate Pharmacy and Therapeutic committee at MNGHA has approved a naming strategies policy for biosimilars in MNGHA formulary. It has recommended to use brand names to be included in computerized prescribing order entry in Health Information System in addition to the generic name (INN) of the drug to allow tracking for pharmacovigilance monitoring. Other hospitals in the country can also use the same naming strategy for effective pharmacovigilance monitoring. Healthcare practitioners in KSA should be encouraged to engage effectively in pharmacovigilance efforts when using biologicals including the biosimilars and monitor and report AEs, efficacy concerns, immunogenicity concerns, and medication errors to SFDA.
Biologics are significant drivers of globally escalating healthcare costs. Biosimilars have the potential to offer cost savings with comparable efficacy and safety to innovator products and to increase access to treatment for more patients. Substitution of filgrastim (Neupogen brand) with biosimilar filgrastim has resulted in significant cost savings when biosimilar filgrastim was used for prophylaxis and management of febrile neutropenia (FN) as well as for mobilization of stem cells in MNGHA. Moreover, substitution of three commonly used monoclonal antibodies, rituximab, trastuzumab, and bevacizumab, with their approved biosimilars will yield significant cost savings in different oncology hospitals of the KSA.
Second-generation biologics may have improved, efficacy, tolerability, and convenient administration pattern saving infusion center time. In our opinion, it seems difficult to substitute SC rituximab with biosimilar rituximab because of convenient administration pattern saving infusion center time. However, IV and SC trastuzumab can be substituted with IV biosimilar trastuzumab because the difference in the administration time between IV and SC trastuzumab dosage form is not significant and this substitution will have huge impact on cost saving. Similarly, IV bevacizumab can be substituted with its biosimilar bevacizumab resulting in significant cost saving.
## References

1. Nabhan C, Parsad S, Mato AR, Feinberg BA. Biosimilars in oncology in the United States: A review. JAMA Oncol 2018;4:241–7.
2. Fernandes GS, Sternberg C, Lopes G, et al. The use of biosimilar medicines in oncology – position statement of the Brazilian Society of Clinical Oncology (SBOC). Braz J Med Biol Res 2018;51:e7214.
3. Baer WH II, Maini A, Jacobs I. Barriers to the access and use of rituximab in patients with non-Hodgkin's lymphoma and chronic lymphocytic leukemia: A physician survey. Pharmaceuticals (Basel) 2014;7:530–44.
4. Lammers P, Criscitiello C, Curigliano G, Jacobs I. Barriers to the use of trastuzumab for HER2+ breast cancer and the potential impact of biosimilars: A physician survey in the United States and emerging markets. Pharmaceuticals (Basel) 2014;7:943–53.
5. McCamish M, Woollett G. Worldwide experience with biosimilar development. MAbs 2011;3:209–17.
6. Cornes P. The economic pressures for biosimilar drug use in cancer medicine. Target Oncol 2012;7:S57–67.
7. European Medicines Agency. Guideline on similar biological medicinal products. London, UK: European Medicines Agency; 2014.
8. US Food and Drug Administration. Scientific considerations in demonstrating biosimilarity to a reference product. Guidance for industry. Silver Spring (MD): U.S. Department of Health and Human Services, Center for Drug Evaluation and Research (CDER), Center for Biologics Evaluation and Research (CBER); 2015.
9. World Health Organization. Expert Committee on Biological Standardization. Guidelines on evaluation of similar biotherapeutic products (SBPs). Geneva, Switzerland: World Health Organization; 2009.
10. Tesser JR, Furst DE, Jacobs I. Biosimilars and the extrapolation of indications for inflammatory conditions. Biologics 2017;11:5–11.
11. Rak Tkaczuk KH, Jacobs IA. Biosimilars in oncology: From development to clinical practice. Semin Oncol 2014;41:S3–S12.
12. European Medicines Agency (EMA). Biosimilar medicines: Overview. Available from: https://www.ema.europa.eu/human-regulatory/overview/biosimilar-medicines. [Last accessed on 2019 Jul 14].
13. US Food and Drug Administration. Biosimilar product information. Available from: https://www.fda.gov/drugs/biosimilars/biosimilar-product-information. [Last accessed on 2019 Jul 14].
14. Saudi Food and Drug Authority (SFDA).
15. Nixon NA, Hannouf MB, Verma S. The evolution of biosimilars in oncology, with a focus on trastuzumab. Curr Oncol 2018;25:171–9.
16. Blackstone EA, Fuhr JP Jr. Innovation and competition: Will biosimilars succeed? Biotechnol Healthc 2012;9:24–7.
17. Zelenetz AD, Ahmed I, Braud EL, et al. NCCN biosimilars white paper: Regulatory, scientific, and patient safety perspectives. J Natl Compr Canc Netw 2011;9:S1–22.
18. DeMartino JK. Biosimilars: Approval and acceptance? J Natl Compr Canc Netw 2011;9:S6–9.
19. Gascón P, Tesch H, Verpoort K, et al. Clinical experience with Zarzio in Europe: What have we learned? Support Care Cancer 2013;21:2925–32.
20. US Food and Drug Administration. Nonproprietary naming of biological products. Guidance for industry; January 2017.
21. Barosi G, Bosi A, Abbracchio MP, et al. Key concepts and critical issues on epoetin and filgrastim biosimilars. A position paper from the Italian Society of Hematology, Italian Society of Experimental Hematology, and Italian Group for Bone Marrow Transplantation. Haematologica 2011;96:937–42.
22. Covic A, Cannata-Andia J, Cancarini G, et al. Biosimilars and biopharmaceuticals: What the nephrologists need to know – a position paper by the ERA-EDTA Council. Nephrol Dial Transplant 2008;23:3731–7.
23. Henry D, Taylor C. Pharmacoeconomics of cancer therapies: Considerations with the introduction of biosimilars. Semin Oncol 2014;41:S13–20.
24. Campen CJ. Integrating biosimilars into oncology practice: Implications for the advanced practitioner. J Adv Pract Oncol 2017;8:688–99.
25. Simoens S. Biosimilar medicines and cost-effectiveness. Clinicoecon Outcomes Res 2011;3:29–36.
26. Kingdom of Saudi Arabia, Saudi Health Council, National Health Information Center, Saudi Cancer Registry. Cancer Incidence Report Saudi Arabia 2015. Available from: https://nhic.gov.sa/eServices/Documents/E%20SCR%20final%206%20NOV.pdf. [Last accessed on 2019 Sep 23].
27. von Minckwitz G, Procter M, de Azambuja E, et al. Adjuvant pertuzumab and trastuzumab in early HER2-positive breast cancer. N Engl J Med 2017;377:122–31.
28. García-Carbonero R, Mayordomo JI, Tornamira MV, et al. Granulocyte colony-stimulating factor in the treatment of high-risk febrile neutropenia: A multicenter randomized trial. J Natl Cancer Inst 2001;93:31–8.
29. Weise M, Bielsky MC, De Smet K, et al. Biosimilars – why terminology matters. Nat Biotechnol 2011;29:690–3.
# SOLA: Continual Learning with Second-Order Loss Approximation
Neural networks have achieved remarkable success in many cognitive tasks. However, when they are trained sequentially on multiple tasks without access to old data, their performance on old tasks tends to drop significantly after the model is trained on new tasks. Continual learning aims to tackle this problem, often referred to as catastrophic forgetting, and to ensure sequential learning capability. We study continual learning from the perspective of loss landscapes and propose to construct a second-order Taylor approximation of the loss functions in previous tasks. Our proposed method does not require any memorization of raw data or their gradients, and therefore, offers better privacy protection. We theoretically analyze our algorithm from an optimization viewpoint and provide a sufficient and worst-case necessary condition for the gradient updates on the approximate loss function to be descent directions for the true loss function. Experiments on multiple continual learning benchmarks suggest that our method is effective in avoiding catastrophic forgetting and, in many scenarios, outperforms several baseline algorithms that do not explicitly store the data samples.
## 1 Introduction
Neural networks are achieving human-level performance on many cognitive tasks including image classification krizhevsky2012imagenet and speech recognition hinton2006fast . However, as opposed to humans, their acquired knowledge is comparably volatile and can be easily dismissed. Especially, the catastrophic forgetting phenomenon refers to the case when a neural network forgets the past tasks if it is not allowed to retrain or reiterate on them again goodfellow2013empirical ; mccloskey1989catastrophic .
Continual learning is a research direction that aims to solve the catastrophic forgetting problem. Recent works have tried to tackle this issue from a variety of perspectives. Regularization methods (e.g., kirkpatrick2017overcoming ; zenke2017continual ) aim to consolidate the weights that are important to previous tasks, while expansion based methods (e.g., rusu2016progressive ; yoon2018lifelong ) typically increase the model capacity to cope with the new tasks. Repetition based methods (e.g., lopez2017gradient ; chaudhry2018efficient ) usually do not require additional complex modules; however, they have to maintain a small memory of previous data and use it to preserve knowledge. Unfortunately, the performance boost of repetition based methods comes at the cost of storing previous data, which may be undesirable whenever privacy is important. To address this issue, the authors of farajtabar2019orthogonal proposed a method that works with the gradients of the previous data to constrain the weight updates; however, this may still be subject to privacy issues, as the gradient associated with each individual data point may disclose information about the raw data.
In this paper, we study the continual learning problem from the perspective of loss landscapes. We explicitly target minimizing an average over all tasks’ loss functions. The proposed method stores neither the data samples nor the individual gradients on the previous tasks. Instead, we propose to construct an approximation to the loss surface of previous tasks. More specifically, we approximate the loss function by estimating its second-order Taylor expansion. The approximation is used as a surrogate added to the loss function of the current task. Our method only stores information based on the statistics of the entire training dataset, such as full gradient and full Hessian matrix (or its low rank approximation), and thus better protects privacy. In addition, since we do not expand the model capacity, the neural network structure is less complex than that of expansion based methods.
We study our algorithm from an optimization perspective, and make the following theoretical contributions:
• We prove a sufficient and worst-case necessary condition under which, by conducting gradient descent on the approximate loss function, we can still minimize the actual loss function.
• We further provide convergence analysis of our algorithm for both non-convex and convex loss functions. Our results imply that early stopping can be helpful in continual learning.
• We make connections between our method and elastic weight consolidation (EWC) kirkpatrick2017overcoming .
In addition, we make the following experimental contributions:
• We conduct a comprehensive comparison among our algorithm and several baseline algorithms kirkpatrick2017overcoming ; chaudhry2018efficient ; farajtabar2019orthogonal on a variety of combinations of datasets and models. We observe that in many scenarios, especially when the learner is not allowed to store the raw data samples, our proposed algorithm outperforms them. We also discuss the conditions under which the proposed method or any of the alternatives are effective.
• We provide experimental evidence validating the importance of accurate approximation of the Hessian matrix and discuss scenarios in which early stopping is helpful for our algorithm.
## 2 Related work
Avoiding catastrophic forgetting in continual learning Parisi2018ContinualLL ; beaulieu2020learning is an important milestone towards achieving artificial general intelligence (AGI), which entails developing measurements toneva2018empirical ; kemker2018measuring , evaluation protocols farquhar2018towards ; de2019continual , and theoretical understanding nguyen2019toward ; farquhar2019unifying of the phenomenon. Generally speaking, three classes of algorithms exist to overcome catastrophic forgetting farajtabar2019orthogonal .
The expansion based methods allocate new neurons, layers, or modules to accommodate new tasks while utilizing the shared representation learned from previous ones rusu2016progressive ; xiao2014error ; yoon2018lifelong ; li2019learn ; Jerfel2018ReconcilingMA . Although this is a very natural approach, the mechanism of dynamic expansion can be quite complex and can add considerable overhead to the training process.
The repetition and memory based methods store previous data or, alternatively, train a generative model of the data and replay samples from it interleaved with samples drawn from the current task shin2017continual ; kamra2017deep ; zhang2019prototype ; rios2018closed ; luders2016continual ; lopez2017gradient ; farajtabar2019orthogonal . They achieve promising performance, however, at the cost of a higher risk to users' privacy, since they store the data or learn a generative model of it.
The regularization based approaches impose limiting constraints on the weight updates of the neural network according to some relevance score for previous knowledge kirkpatrick2017overcoming ; nguyen2017variational ; titsias2019functional ; ritter2018online ; mirzadeh2020dropout ; zenke2017continual ; park2019continual . These methods provide a better privacy guarantee as they do not explicitly store the data samples. In general, SOLA also belongs to this category, as we use the second-order Taylor expansion as the regularization term in new tasks. Many of the regularization methods are derived from a Bayesian perspective of estimating the posterior distribution of the model parameters given the data from a sequence of tasks kirkpatrick2017overcoming ; nguyen2017variational ; titsias2019functional ; ritter2018online ; some of these methods use other heuristics to either estimate the importance of the weights of the neural network zenke2017continual ; park2019continual or implicitly limit the capacity of the network mirzadeh2020dropout . Similar to our approach, several regularization based methods use quadratic functions as the regularization term, and many of them use the diagonal form of quadratic functions kirkpatrick2017overcoming ; zenke2017continual ; park2019continual . In Section 5.3, we demonstrate that in some cases, the EWC algorithm kirkpatrick2017overcoming can be considered as a diagonal approximation of our approach. Here, we note that the diagonal form of quadratic regularization has the drawback that it does not take the interaction between the weights into account.
Among the regularization based methods, the online Laplace approximation algorithm ritter2018online is the most similar one to our proposed method. Despite the similarity in the implementations, the two algorithms are derived from very different perspectives: the online Laplace approximation algorithm uses a Bayesian approach that approximates the posterior distribution of the weights with a Gaussian distribution, whereas our algorithm is derived from an optimization viewpoint using Taylor approximation of loss functions. More importantly, the Gaussian approximation in ritter2018online is proposed as a heuristic, whereas in this paper, we provide rigorous theoretical analysis of how the approximation error affects the optimization procedure. We believe that our analysis provides deeper insights into the loss landscape of continual learning problems, and explains some important implementation details such as early stopping.
We also note that continual learning is broader than just solving catastrophic forgetting and is connected to many other areas such as meta learning riemer2018learning , few-shot learning wen2018few ; gidaris2018dynamic , and learning without explicit task identifiers rao2019continual ; aljundi2019online , to name a few.
## 3 Problem formulation
We consider a sequence of supervised learning tasks $T_k$, $k \in [K]$.¹ For task $T_k$, there is an unknown distribution $\mathcal{D}_k$ over the space of feature-label pairs $\mathcal{X} \times \mathcal{Y}$. Let $\mathcal{W}$ be a model parameter space,² and for the $k$-th task, let $\ell_k(w; x, y)$ be the loss function of $w \in \mathcal{W}$ associated with data point $(x, y)$. The population loss function of task $T_k$ is defined as $L_k(w) := \mathbb{E}_{(x, y) \sim \mathcal{D}_k}[\ell_k(w; x, y)]$. Our general objective is to learn a parametric model with minimized population loss over all the $K$ tasks.

¹ For any positive integer $n$, we define $[n] := \{1, 2, \ldots, n\}$.
² In most cases, we consider $\mathcal{W} = \mathbb{R}^d$.
To measure the effectiveness of a continual learning algorithm, we use a simple criterion: after each task, we hope the average population loss over all the tasks that have been trained on to be small, i.e., for every $k$, after training on $T_k$, we hope to solve $\min_w \frac{1}{k}\sum_{k'=1}^{k} L_{k'}(w)$. Since minimizing the loss function is the key to training a good model, we propose a straightforward method for continual learning: storing the second-order Taylor expansion of the empirical loss function, and using it as a surrogate of the loss function for an old task when training on new tasks. We start with a simple setting. Suppose that there are two tasks, and at the end of $T_1$, we obtain a model $\hat{w}_1$. Then we compute the gradient and Hessian matrix of the empirical loss $\hat{L}_1$ at $\hat{w}_1$, and construct the second-order Taylor expansion of $\hat{L}_1$ at $\hat{w}_1$:

$$\tilde{L}_1(w) = \hat{L}_1(\hat{w}_1) + (w - \hat{w}_1)^\top \nabla \hat{L}_1(\hat{w}_1) + \frac{1}{2}(w - \hat{w}_1)^\top \nabla^2 \hat{L}_1(\hat{w}_1)(w - \hat{w}_1).$$

When training on $T_2$, we try to minimize $\frac{1}{2}\big(\hat{L}_2(w) + \tilde{L}_1(w)\big)$. The basic idea of this design is that we hope that, in a neighborhood around $\hat{w}_1$, the quadratic function $\tilde{L}_1$ stays a good approximation of $\hat{L}_1$, and thus approximately we still minimize the average of the empirical loss functions $\hat{L}_1$ and $\hat{L}_2$, which in the limit generalizes to the average population loss $\frac{1}{2}\big(L_1(w) + L_2(w)\big)$.
We rely on the assumption that the second-order Taylor approximation of the loss functions can capture their local geometry well. For a general nonlinear function and arbitrary displacement, this approximation can be over-simplistic; however, we refer to the abundance of observations that the loss surfaces of modern neural networks are well-behaved, with flat and wide minima choromanska2015loss ; goodfellow2014qualitatively . Moreover, the assumption of well-behaved loss around tasks' local minima also forms the basis of a few other continual learning algorithms such as EWC kirkpatrick2017overcoming and OGD farajtabar2019orthogonal .
Formally, let $\hat{w}_k$ be the model that we obtain at the end of the $k$-th task. We define the approximation of the sum of the first $k$ empirical loss functions as

$$\tilde{L}_k(w) = \sum_{k'=1}^{k} \Big[ \hat{L}_{k'}(\hat{w}_{k'}) + (w - \hat{w}_{k'})^\top \nabla \hat{L}_{k'}(\hat{w}_{k'}) + \frac{1}{2}(w - \hat{w}_{k'})^\top H_{k'} (w - \hat{w}_{k'}) \Big] = w^\top A_k w + w^\top b_k + c_k, \qquad (1)$$

where $H_{k'}$ denotes the Hessian matrix $\nabla^2 \hat{L}_{k'}(\hat{w}_{k'})$ or its low rank approximation, $A_k := \frac{1}{2}\sum_{k'=1}^{k} H_{k'}$, $b_k := \sum_{k'=1}^{k} \big( \nabla \hat{L}_{k'}(\hat{w}_{k'}) - H_{k'} \hat{w}_{k'} \big)$, and $c_k$ is a constant that does not depend on $w$. We construct $\tilde{L}_k$ at the end of task $T_k$, and when training on task $T_{k+1}$, we minimize $\frac{1}{k+1}\big( \hat{L}_{k+1}(w) + \tilde{L}_k(w) \big)$. In the following, we name our algorithm SOLA, an acronym for Second-Order Loss Approximation.
As we can see, in the SOLA algorithm, after each task, if we choose to use the exact Hessian matrix, i.e., $H_k = \nabla^2 \hat{L}_k(\hat{w}_k)$ for every $k$, it suffices to update $A_k$ and $b_k$ in memory, and thus the memory cost of the algorithm is $O(d^2)$, which does not grow with the number of tasks. However, in practice, especially for overparameterized neural networks, the dimension $d$ of the model is usually large, and thus the storage cost of memorizing the Hessian matrix can be high. Recent studies have shown that the Hessian matrices of the loss functions of deep neural networks are usually approximately low rank ghorbani2019investigation . If we choose $H_k$ as a rank-$r$ approximation of $\nabla^2 \hat{L}_k(\hat{w}_k)$, we need to keep accumulating the low rank approximations of the Hessian matrices in order to construct $\tilde{L}_k$, and at the end of task $T_k$, the memory cost is $O(krd)$, which in practice can be much smaller than that of using the exact Hessian matrices. We formally demonstrate our approach in Algorithm 1, and the methods that use exact Hessian matrices and low rank approximations of them are presented as options I and II, respectively. Moreover, we can use a recursive implementation for the low rank approximation, and the memory cost can be further reduced to $O(rd)$, which does not grow with $k$. We present the details of the recursive implementation in Section 6.
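As a concrete illustration of option I (exact Hessians), the following minimal numpy sketch accumulates $A_k$ and $b_k$ on toy least-squares tasks. The class and function names are ours, purely for illustration; this is not the paper's implementation:

```python
import numpy as np

class TaylorMemory:
    """Accumulates A_k, b_k of Eq. (1) so that
    L~_k(w) = w^T A_k w + w^T b_k + c_k approximates the sum of past losses."""
    def __init__(self, dim):
        self.A = np.zeros((dim, dim))
        self.b = np.zeros(dim)
        self.k = 0  # number of tasks stored

    def add_task(self, w_hat, grad, hess):
        # Second-order Taylor expansion of the current loss around w_hat.
        self.A += 0.5 * hess
        self.b += grad - hess @ w_hat
        self.k += 1

    def grad(self, w):
        # Gradient of the accumulated quadratic surrogate.
        return (self.A + self.A.T) @ w + self.b

def train_task(w, grad_fn, memory, lr=0.05, steps=200):
    """Gradient descent on (current empirical loss + quadratic memory) / (k+1)."""
    for _ in range(steps):
        g = (grad_fn(w) + memory.grad(w)) / (memory.k + 1)
        w = w - lr * g
    return w

# Toy example: two least-squares tasks L_k(w) = ||X_k w - y_k||^2 / n.
rng = np.random.default_rng(0)
d = 5
tasks = []
for _ in range(2):
    X = rng.normal(size=(50, d))
    y = X @ rng.normal(size=d) + 0.1 * rng.normal(size=50)
    tasks.append((X, y))

def make_grad(X, y):
    return lambda w: 2 * X.T @ (X @ w - y) / len(y)

def make_hess(X, y):
    return lambda w: 2 * X.T @ X / len(y)

memory = TaylorMemory(d)
w = np.zeros(d)
for X, y in tasks:
    w = train_task(w, make_grad(X, y), memory)
    memory.add_task(w, make_grad(X, y)(w), make_hess(X, y)(w))
```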
## 5 Theoretical analysis
In this section, we provide theoretical analysis of our algorithm. As we can see, the key idea in our algorithm is to approximate the loss functions of previous tasks using quadratic functions. This leads to the following theoretical question: by running the gradient descent algorithm on an approximate loss function, can we still minimize the actual loss that we are interested in?
For the purpose of theoretical analysis, we make a few simplifications to our setup. Without loss of generality, we study the training process of the last task $T_K$, and still use $\hat{w}_k$ to denote the model parameters that we obtain at the end of the $k$-th task. We use the loss function approximation in (1), but for simplicity we ignore the finite-sample effect and replace the empirical loss functions with the population loss functions, i.e., we define

$$\tilde{L}_{K-1}(w) = \sum_{k=1}^{K-1} \Big[ L_k(\hat{w}_k) + (w - \hat{w}_k)^\top \nabla L_k(\hat{w}_k) + \frac{1}{2}(w - \hat{w}_k)^\top H_k (w - \hat{w}_k) \Big], \qquad (2)$$

where $H_k$ represents $\nabla^2 L_k(\hat{w}_k)$ or its low rank approximation. The reason for this simplification is that our focus is the optimization aspect of the problem, while the generalization aspect can be tackled by tools such as uniform convergence mohri2018foundations . As discussed, during the training of the last task, we have access to the approximate loss function $\tilde{F}(w) := \frac{1}{K}\big( L_K(w) + \tilde{L}_{K-1}(w) \big)$, whereas the actual loss function that we care about is $F(w) := \frac{1}{K}\sum_{k=1}^{K} L_k(w)$. We also focus on gradient descent instead of its stochastic counterpart. In particular, let $w_0$ be the initial model parameter for the last task. We run the following update for $t = 1, 2, \ldots$:

$$w_t = w_{t-1} - \eta \nabla \tilde{F}(w_{t-1}). \qquad (3)$$
We use the following standard notions for a differentiable function $g$.
###### Definition 1.
$g$ is $\mu$-smooth if $\|\nabla g(w_1) - \nabla g(w_2)\|_2 \le \mu \|w_1 - w_2\|_2$ for all $w_1, w_2$.
###### Definition 2.
$g$ is $\rho$-Hessian Lipschitz if $\|\nabla^2 g(w_1) - \nabla^2 g(w_2)\|_2 \le \rho \|w_1 - w_2\|_2$ for all $w_1, w_2$.
We make the assumptions that the loss functions are smooth and Hessian Lipschitz. We note that the Hessian Lipschitz assumption is standard in analysis of non-convex optimization nesterov2006cubic ; jin2017escape .
###### Assumption 1.
We assume that $L_k$ is $\mu$-smooth and $\rho$-Hessian Lipschitz for every $k \in [K]$.
We also assume that the error between the matrices $H_k$ and $\nabla^2 L_k(\hat{w}_k)$ is bounded.
###### Assumption 2.
We assume that for every $k \in [K-1]$, $\|H_k\|_2 \le \mu$, where $\mu$ is defined in Assumption 1, and that $\|H_k - \nabla^2 L_k(\hat{w}_k)\|_2 \le \delta$ for some $\delta \ge 0$.
### 5.1 Sufficient and worst-case necessary condition for one-step descent
We begin with analyzing a single step during training. Our goal is to understand whether, by running a single step of gradient descent on $\tilde{F}$, we can still minimize the actual loss function $F$. More specifically, we have the following result.
###### Theorem 1.
Under Assumptions 1 and 2, suppose that in the $t$-th iteration, we observe

$$\|\nabla \tilde{F}(w_{t-1})\|_2 \ge \frac{c}{K} \sum_{k=1}^{K-1} \Big( \delta \|w_{t-1} - \hat{w}_k\|_2 + \rho \|w_{t-1} - \hat{w}_k\|_2^2 \Big), \quad \text{for some } c > 1, \qquad (4)$$

and that the learning rate satisfies $\eta < \frac{2}{\mu}\big(1 - \frac{1}{c}\big)$. Then we have

$$F(w_t) \le F(w_{t-1}) - \eta \Big( 1 - \frac{1}{c} - \frac{\mu \eta}{2} \Big) \|\nabla \tilde{F}(w_{t-1})\|_2^2.$$
We prove Theorem 1 in Appendix A. Here, we emphasize that this result does not assume any convexity of the loss functions. The theorem provides a sufficient condition (4), under which by running gradient descent on $\tilde{F}$, we can still minimize the true loss function $F$. Intuitively, this condition requires the gradient of $\tilde{F}$ to be large enough, such that the magnitude of the gradient is larger than the error caused by the inexactness of the loss function. In Proposition 1 below, we will see that this condition is also necessary in the worst-case scenario, at least for the case where $K = 2$. More specifically, we can construct cases in which (4) is violated and the gradients of $\tilde{F}$ and $F$ have opposite directions.
###### Proposition 1.
Suppose that , , . Then, there exists , , , and such that if , then .
We prove Proposition 1 in Appendix B. In addition, we note that Theorem 1 also implies that as training going on and decreasing, it is beneficial to decrease the learning rate , since when decreases, the upper bound on that guarantees the decay of (i.e., ) also decreases. We notice that the importance of learning rate decay for continual learning has been observed in some empirical study recently mirzadeh2020dropout .
### 5.2 Convergence analysis
Although the condition in (4) provides us with insights on the dynamics of the training algorithm, it is usually hard to check this condition in every step, since we may not have good estimates of and . A practical implementation is to choose a constant learning rate along with an appropriate number of training steps. In this section, we provide bounds on the convergence behavior of our algorithm with a constant learning rate and iterations, both for non-convex and convex loss functions. These results imply that early stopping can be helpful, and provide a theoretical treatment of the very intuitive fact that the more iterations one optimizes for the current task the more forgetting can happen for the previous ones. We begin with a convergence analysis for non-convex loss functions in Theorem 2, in which we use the common choice of learning rate for gradient descent on smooth functions bubeck2014convex .
###### Theorem 2 (non-convex).
Let , , , and . Then, under Assumptions 1 and 2, after running iterations of the gradient descent update (3) with learning rate , we have
1TT∑t=1∥∇F(wt−1)∥2≤α√T+β+γ1√T+γ2T,
where , , , and .
We prove Theorem 2 in Appendix C. Unlike standard optimization analysis, the average norm of the gradients does not always decrease as increases, when or . Intuitively, as we move far from the points where we conduct Taylor expansion, the gradient of becomes more and more inaccurate, and thus we need to stop early. In Section 7, we provide experimental evidence.
When the loss functions are convex, we can prove a better guarantee which does not have the and terms as in Theorem 2. More specifically, we have the following assumption and theorem.
###### Assumption 3.
is convex and , .
###### Theorem 3 (convex).
Suppose that Assumptions 12, and 3 hold, and define , , , and . After running iterations of the gradient descent update (3) with learning rate , we have
F(wT)−F∗≤αT+β,
where , and .
We prove Theorem 3 in Appendix D. As we can see, if or , we still cannot guarantee the convergence to the true minimum of , due to the inexactness of . On the other hand, if the loss functions are quadratic and we save the full Hessian matrices, i.e., , as we have full information about previous loss functions, we can recover the standard convergence rate for gradient descent on convex and smooth functions.
### 5.3 Connection to EWC
The elastic weight consolidation (EWC) algorithm kirkpatrick2017overcoming for continual learning is proposed based on the Bayesian idea of estimating the posterior distribution of the model parameters. Interestingly, we notice that our algorithm has a connection with EWC, although their basic ideas are quite different. More specifically, we show that in some cases, the regularization technique that the EWC algorithm uses can be considered as a diagonal approximation of the Hessian matrix of the loss function. Suppose that in the -th task, the data points are samples from a probabilistic model with the likelihood function being , and we use negative log-likelihood as the loss function, i.e., . Suppose that at the end of this task, we obtain the ground truth model parameter . Then we know that , and that the Fisher information of the -th coordinate of is . The EWC algorithm constructs a regularization term as a proxy of the loss function of the -th task, and uses it in the following tasks. As we can see, in this case, the quadratic regularization in EWC is a diagonal approximation of the quadratic term in our loss function approximation approach.
## 6 A recursive implementation
As we have seen, one drawback of the SOLA algorithm with low rank approximation in Section 4 is that the memory cost grows with the number of tasks. In this section, we present a more practical and memory efficient implementation of SOLA with low rank approximation. Recall that is the empirical loss function for the -th task, . We then define the loss function approximation in a recursive way. We begin with , and for every , we define
˜Lk−12(w) =˜Lk−1(w)+ˆLk(w) (5) ˜Lk(w) =˜Lk−12(ˆwk)+(w−ˆwk)⊤∇˜Lk−12(ˆwk)+12(w−ˆwk)⊤Qk−12(w−ˆwk), (6)
where is a rank- approximation of the Hessian matrix . This means that at the end of task , we compute the second-order Taylor expansion of the approximate loss function at , with the Hessian matrix being replaced by the low rank approximation . Thus, we only need to store and , and the memory cost is , which does not grow with . We formally present this approach in Algorithm 2. In our experiments in Section 7, we use the recursive implementation for SOLA with low rank approximation.
## 7 Experiments
We implement the experiments with TensorFlow
. When computing the exact or the low rank approximation of the Hessian matrix, we treat each tensor in the model independently; in other words, we compute the block diagonal approximation of the Hessian matrix. This technique has the benefit that the Hessian computation is independent of the model architecture and has been used in recent studies on second-order optimization
gupta2018shampoo . We use the recursive implementation for SOLA with low rank approximation. In the following, we denote SOLA with exact Hessian matrix and low rank approximation by SOLA-exact and SOLA-prox
, respectively. As for the calculation of the low rank matrix, we make use of Hessian-vector product and provide details in Appendix
E.
Datasets. We use multiple standard continual learning benchmarks created based on MNIST lecun1998gradient and CIFAR-10 krizhevsky2009learning datasets, i.e., Permuted MNIST goodfellow2013empirical , Rotated MNIST lopez2017gradient , Split MNIST zenke2017continual , and Split CIFAR (similar to a dataset in chaudhry2018efficient ). In Permuted MNIST, for each task, we choose a random permutation of the pixels of MNIST images, and reorder all the images according to the permutation. We use -task Permuted MNIST in the experiments. In Rotated MNIST, for each task, we rotate the MNIST images by a particular angle. In our experiments, we choose a -task Rotated MNIST, with the rotation angles being , , , , and degrees. For Split MNIST, we Split the labels of the MNIST dataset to disjoint subsets, and for each task, we use the MNIST data whose labels belong to a particular subset. In this paper, we use a -task Split MNIST, and the subsets of labels are , , , , and . Split CIFAR is defined similar to Split MNIST, and we use a -task Split CIFAR with the label subsets being and .
Architecture.
We use both multilayer perceptron (MLP) and convolutional neural network (CNN). In most cases, we use MLP with two hidden layers, sometimes denoted by MLP
, with and being the number of hidden units. We may use CNN- to denote a CNN model with convolutional layers, and provide details of the model in Appendix F. For Split MNIST and Split CIFAR, we use MLP and CNN models with a multi-head structure similar to what has been used in chaudhry2018efficient ; farajtabar2019orthogonal . In the multi-head model, instead of having logits in the output layer, we use separate heads for different tasks, and each head corresponds to the classes of the associated task. During training, for each task, we only optimize the cross-entropy loss over the logits and labels of the corresponding output head.
independent runs, as well as the standard deviation (as the shaded areas in the figures).
Results. We provide a comprehensive comparison among SOLA and the baseline algorithms with a variety of combinations of datasets and models. Tables 1 and 2 present the results for MNIST-based datasets and Split CIFAR, respectively. We make a few notes before discussing the results. First, the multi-task algorithm uses all the data of previous tasks, which serves as an upper bound for the performance of continual learning algorithms. Second, since the A-GEM algorithm stores a subset of data samples from previous tasks, it is not completely fair to compare A-GEM with algorithms that do not store raw data. However, here we still report the results for A-GEM for reference, and in A-GEM we store
data points for each task. Third, since the performance of the algorithms depends on the number of epochs that we train for each task, we treat this quantity as a tuning parameter, and for each algorithm, we report the result corresponding to the best epoch choice for its performance. In particular, for MNIST-based datasets, we choose epoch from
, and for Split CIFAR, we choose from . Due to memory constraints, we only implement SOLA-exact on small models such as MLP and CNN-2. We conclude from the results as follows:
• If it is allowed to store raw data, repetition based algorithm such as A-GEM should be the choice. This remarks the importance of the information contained in the raw data samples. In some cases we observe that SOLA outperforms A-GEM, e.g., on MLP. However, we expect that the performance of A-GEM can be improved if more data are stored in memory.
• If it is not allowed to store raw data due to privacy concerns, then in many scenarios, SOLA outperforms the baseline algorithms. In particular, on MNIST-based datasets, SOLA-exact or SOLA-prox achieves the best performance in out of settings.
• On Split CIFAR, we observe mixed results. When the model is relatively small (CNN-2) and we can store the exact Hessian matrix, SOLA-exact achieves the best performance. On a relatively large CNN model (CNN-6), we observe that none of the continual learning algorithms (EWC, OGD, SOLA) significantly outperforms the vanilla algorithm. On a large MLP, we observe that OGD performs the best and the result for SOLA-prox becomes worse. We believe the reason is that since in this experiment we only use eigenvectors to approximate a Hessian matrix with very high dimensions, the approximation error is so large that SOLA-prox cannot find a descent direction that is close to the true gradient. This remarks the importance of future study of SOLA on models with more complicated structure or higher dimensions.
Performance vs approximation. We study how the approximation of Hessian matrices affects the performance of SOLA-prox. In particular, we choose different values of the rank in SOLA-prox and investigate its correlation with the final average test accuracy. Our theory implies that when the approximation of Hessian matrices is better, i.e., smaller , the final performance is better. Our experiments validate this point. Figure 0(a) and Figure 0(b) show that, as we increase , i.e., using more eigenvectors to approximate the Hessian matrix, the average test accuracy over all tasks improves.
## 8 Conclusions
We propose the SOLA algorithm based on the idea of loss function approximation. We establish theoretical guarantees, make connections to the EWC algorithm, and present experimental results showing that in many scenarios, our algorithm outperforms several baseline algorithms, especially among the ones that do not explicitly store the raw data samples. Future directions include studying SOLA on broader classes of neural network architectures and parameter spaces with higher dimensions.
## Acknowledgements
We would like to thank Dilan Gorur, Alex Mott, Clara Huiyi Hu, Nevena Lazic, Nir Levine, and Michalis Titsias for helpful discussions.
## Appendix A Proof of Theorem 1
We first provide a bound for the difference between the gradients of and .
###### Lemma 1.
Let . Then we have
∥Δ(w)∥2≤1KK−1∑k=1δ∥w−ˆwk∥2+ρ∥w−ˆwk∥22.
We prove Lemma 1 in Appendix A.1. Since the loss functions for all the tasks are -smooth, we know that is also -smooth. Then we have
F(wt) ≤F(wt−1)+⟨∇F(wt−1),wt−wt−1⟩+μ2∥wt−wt−1∥22 =F(wt−1)+⟨∇˜F(wt−1)−Δ(wt−1),−η∇˜F(wt−1)⟩+μη22∥∇˜F(wt−1)∥22 ≤F(wt−1)−η(1−μη2)∥∇˜F(wt−1)∥22+η∥∇˜F(wt−1)∥2∥Δ(wt−1)∥2.
Therefore, as long as for some , we have
F(wt)≤F(wt−1)−η(1−1c−μη2)∥∇˜F(wt−1)∥22. (7)
Then we can complete the proof by combining (7) with Lemma 1.
### a.1 Proof of Lemma 1
By the definition of , for some , , we have
Δ(w) =1KK−1∑k=1∇Lk(ˆwk)+Hk(w−ˆwk)−∇Lk(w) =1KK−1∑k=1(Hk−∇2Lk(ˆwk+ξk(w−ˆwk)))(w−ˆwk),
where the second equality is due to Lagrange’s mean value theorem. Then, according to Assumptions 1 and 2, we have
∥∥Hk−∇2Lk(ˆwk+ξk(w−ˆwk))∥∥2 ≤δ+∥∥∇2Lk(ˆwk)−∇2Lk(ˆwk+ξk(w−ˆwk))∥∥2 ≤δ+ρ∥w−ˆwk∥2. (8)
Then, according to triangle inequality, we obtain
∥Δ(w)∥2≤1KK−1∑k=1δ∥w−ˆwk∥2+ρ∥w−ˆwk∥22.
## Appendix B Proof of Proposition 1
We first note that it suffices to construct and , as one can always choose and then the construction of and is equivalent to that of and . Let ,
F(w) =(w−1)2+ρ6w3,w∈[0,1], ˜F(w) =(w−1)2−δ4w2,w∈[0,1].
One can easily check that , , and . In addition, since the second derivative of is always bounded in , we know that is smooth. Since , we know that is -Hessian Lipschitz. Therefore, and satisfy all of our assumptions.
Since , we know that , , and then
|˜F′(w)|<δ2w+ρ2w2
is equivalent to , which implies that .
## Appendix C Proof of Theorem 2
Similar to Appendix A, we define . According to Assumptions 1 and 2, we know that both and are -smooth. By the smoothness of and using the fact that , we get
F(wt) ≤F(wt−1)+⟨∇F(wt−1),wt−wt−1⟩+μ2∥wt−wt−1∥22 =F(wt−1)−⟨∇F(wt−1),η(∇F(wt−1)+Δ(wt−1))⟩+μη22∥∇F(wt−1)+Δ(wt−1)∥22 =F(wt−1)−12μ∥∇F(wt−1)∥22+12μ∥Δ(wt−1)∥22,
which implies
∥∇F(wt−1)∥22≤2μ(F(wt−1)−F(wt))+∥Δ(wt−1)∥22. (9)
By averaging (9) over , we get
1TT∑t=1∥∇F(wt−1)∥22≤2μ(F0−F∗)T+1TT∑t=1∥Δ(wt−1)∥22.
By taking square root on both sizes, and using Cauchy-Schwarz inequality as well as the fact that , we get
1TT∑t=1∥∇F(wt−1)∥2≤√2μ(F0−F∗)√T+ ⎷1TT∑t=1∥Δ(wt−1)∥22. (10)
We then proceed to bound . According to Lemma 1, we have
∥Δ(wt−1)∥2 ≤1KK−1∑k=1δ(∥wt−1−w0∥2+∥w0−ˆwk∥2)+2ρ(∥wt−1−w0∥22+∥w0−ˆwk∥22) :=C+δ∥wt−1−w0∥2+2ρ∥wt−1−w0∥22,
where | 2022-05-19 16:25:06 | {"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.824540376663208, "perplexity": 602.3526813160527}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-21/segments/1652662529538.2/warc/CC-MAIN-20220519141152-20220519171152-00453.warc.gz"} |
https://zbmath.org/?q=an:1264.58029 | # zbMATH — the first resource for mathematics
On a new normalization for tractor covariant derivatives. (English) Zbl 1264.58029
A parabolic geometry is a quotient $$G/P$$ of a real semisimple Lie group by the action of a parabolic subgroup $$P$$. More generally, a parabolic geometry of type $$(G,P)$$ on a manifold $$M$$ is a pair consisting of a principal $$P$$-bundle $$\mathcal{G} \to M$$ and a Cartan connection $$\omega \in \Omega^1 ( \mathcal{G} , \mathfrak{g})$$. Examples of parabolic geometries on a manifold include CR, conformal or projective structures.
If $$(\mathcal{G} , \omega)$$ is a (regular and normal) parabolic geometry on a manifold $$M$$, and $$\mathbb{V}$$ is a $$G$$-module, the vector bundle $$V \to M$$ associated to $$\mathcal{G}$$ and the representation $$\mathbb V$$ is called the tractor bundle. The normal Cartan connection $$\omega$$ on $$\mathcal{G}$$ induces a covariant derivative $$\nabla^\omega$$ on $$V$$.
As it is carefully explained in the detailed introduction of the paper, there exists a curved version of the Berstein-Gelfand-Gelfand (BGG) resolution of an irreducible $$G$$-module $$\mathbb V$$. This resolution is made up of invariant differential operators $$D_i$$. The first of these operators, $$D_0$$, is overdetermined and, in the flat case $$M = G/P$$, the tractor covariant derivative $$\nabla^\omega$$ is already known to yield the prolongation of $$D_0$$.
The main result of this paper consists of the curved analogue for such a prolongation. To prove this result, the authors develop a new normalization of $$\nabla^\omega$$, which is invariantly defined and generalizes some other constructions existing in the literature.
In Section 4, the authors extend this construction to the other operators $$D_i$$ in the BGG sequence. These normalizations require more complicated modifications (with differential terms) of the exterior covariant derivative $$d^{\nabla^\omega}$$. As the authors mention, their approach “is based on standard BGG techniques”.
Finally, the paper includes some examples, taken from projective, conformal and Grassmannian geometry, that illustrate the results commented above.
This paper is very clearly written, in an elegant and coordinate independent manner.
##### MSC:
58J70 Invariance and symmetry properties for PDEs on manifolds 53A30 Conformal differential geometry (MSC2010) 53A20 Projective differential geometry 58A32 Natural bundles 53A55 Differential invariants (local theory), geometric objects
Full Text:
##### References:
[1] Bailey, T. N., Eastwood, M. G., Gover, A. R.: Thomas’s structure bundle for confor- mal, projective and related structures. Rocky Mountain J. Math. 24, 1191-1217 (1994) · Zbl 0828.53012 [2] Baum, H., Friedrich, T., Grunewald, R., Kath, I.: Twistor and Killing Spinors on Riemannian Manifolds. Seminarberichte 108, Humboldt Universität, Sektion Mathematik, Berlin (1990) · Zbl 0705.53004 [3] Branson, T.: Conformal structure and spin geometry. In: Dirac Operators: Yesterday and Yoday, Int. Press, Sommerville, MA, 163-191 (2005) · Zbl 1109.53051 [4] Branson, T., \check Cap, A., Eastwood, M. G., Gover, A. R.: Prolongations of geometric overdeter- mined systems. Int. J. Math. 17, 641-664 (2006) · Zbl 1101.35060 [5] Calderbank, D., Diemer, T.: Differential invariants and curved Bernstein-Gelfand-Gelfand sequences. J. Reine Angew. Math. 537, 67-103 (2001) · Zbl 0985.58002 [6] \check Cap, A.: Infinitesimal automorphisms and deformations of parabolic geometries. J. Eur. Math. Soc. 10, 415-437 (2008) · Zbl 1161.32020 [7] \check Cap, A., Slovák, J.: Weyl structures for parabolic geometries. Math. Scand. 93, 53-90 (2003) · Zbl 1076.53029 [8] \check Cap, A., Slovák, J.: Parabolic Geometries I: Background and General Theory. Math. Surveys Monogr. 154, Amer. Math. Soc. (2009) · Zbl 1183.53002 [9] \check Cap, A., Slovák, J., Sou\check cek, V.: Bernstein-Gelfand-Gelfand sequences. Ann. of Math. 154, 97-113 (2001) · Zbl 1159.58309 [10] Cartan, É.: Les espaces ‘a connexion conforme. Ann. Soc. Polon. Math. 2, 171-221 (1923) · JFM 50.0493.01 [11] Cartan, É.: Sur les variétés ‘a connexion projective. Bull. Soc. Math. France 52, 205-241 (1924) · JFM 50.0500.02 [12] Chern, S. S., Moser, J. K.: Real hypersurfaces in complex manifolds. Acta Math. 133, 219- 271 (1974) · Zbl 0302.32015 [13] Dunajski, M., Tod, P.: Four dimensional metrics conformal to Kähler. Math. Proc. Cambridge Philos. Soc. 148, 485-503 (2010) · Zbl 1188.53078 [14] Eastwood, M. G.: Notes on conformal differential geometry. Suppl. Rend. Circ. Mat. Palermo 43, 57-76 (1996) · Zbl 0911.53020 [15] Eastwood, M. G.: Notes on projective differential geometry. In: Symmetries and Overdeter- mined Systems of Partial Differential Equations, IMA Vol. Math. Appl. 144, Springer, New York, 41-60 (2008) · Zbl 1186.53020 [16] Eastwood, M. G., Gover, A. R.: Prolongation on contact manifolds. arXiv:0910.5519 · Zbl 1251.58007 [17] Eastwood, M. G., Matveev, V.: Metric connections in projective differential geometry. In: Symmetries and Overdetermined Systems of Partial Differential Equations, IMA Vol. Math. Appl. 144, Springer, New York, 339-351 (2008) · Zbl 1144.53027 [18] Gover, A. R.: Almost Einstein and Poincaré-Einstein manifolds in Riemannian signature. J. Geom. Phys. 60, 182-204 (2010) · Zbl 1194.53038 [19] Gover, A. R., \check Silhan, J.: The conformal Killing equation on forms-prolongations and ap- plications, Differential Geom. Appl. 26, 244-266 (2008) · Zbl 1144.53036 [20] Gover, A. R., Slovák, J.: Invariant local twistor calculus for quaternionic structures and re- lated geometries. J. Geom. Phys. 32, 14-56 (1999) · Zbl 0981.53031 [21] Gover, A. R., Somberg, P., Sou\check cek, V.: Yang-Mills detour complexes and conformal geom- etry. Comm. Math. Phys. 278, 307-327 (2008) · Zbl 1141.58013 [22] Hammerl, M.: Invariant prolongation of BGG-operators in conformal geometry. Arch. Math. (Brno) 44, 367-384 (2008) · Zbl 1212.53014 [23] Hammerl, M.: Natural prolongations of BGG operators. Thesis, Univ. 
of Vienna (2009) [24] Hammerl, M., Somberg, P., Sou\check cek, V., \check Silhan, J.: Invariant prolongation of overdetermined PDEs in projective, conformal, and Grassmannian geometry. Ann. Global Anal. Geom. 42, 121-145 (2012) · Zbl 1270.53024 [25] Kostant, B.: Lie algebra cohomology and the generalized Borel-Weil theorem. Ann. of Math. (2) 74, 329-387 (1961) · Zbl 0134.03501 [26] Leitner, F.: Conformal Killing forms with normalisation condition. Suppl. Rend. Circ. Mat. Palermo (2) 75, 279-292 (2005) · Zbl 1101.53040 [27] Morimoto, T.: Lie algebras, geometric structures and differential equations on filtered mani- folds. In: Lie Groups, Geometric Structures and Differential Equations-One Hundred Years after Sophus Lie (Kyoto/Nara, 1999), Adv. Stud. Pure Math. 37, Math. Soc. Japan, Tokyo, 205-252 (2002) · Zbl 1048.58015 [28] Neusser, K.: Prolongation on regular infinitesimal flag manifolds. Int. J. Math. 23, no. 4, art. ID 1250007, 41 pp. (2012) · Zbl 1256.35029 [29] Penrose, R., Rindler, W.: Spinors and Space-Time Vols. 1, 2, Cambridge Univ. Press (1984, 1986) · Zbl 0538.53024 [30] Semmelmann, U.: Conformal Killing forms on Riemannian manifolds. Math. Z. 245, 503- 527 (2003) · Zbl 1061.53033 [31] Sharpe, R. W.: Differential Geometry: Cartan’s Generalization of Klein’s Erlangen Program. Grad. Texts in Math. 166, Springer (1997) · Zbl 0876.53001 [32] Spencer, D. C.: Overdetermined systems of linear partial differential equations. Bull. Amer. Math. Soc. 75, 179-239 (1969) · Zbl 0185.33801
This reference list is based on information provided by the publisher or from digital mathematics libraries. Its items are heuristically matched to zbMATH identifiers and may contain data conversion errors. It attempts to reflect the references listed in the original paper as accurately as possible without claiming the completeness or perfect precision of the matching. | 2021-10-16 18:51:36 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7998510599136353, "perplexity": 3118.1322886104344}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-43/segments/1634323584913.24/warc/CC-MAIN-20211016170013-20211016200013-00528.warc.gz"} |
https://www.esaral.com/q/prove-the-following-trigonometric-identities-56756 | # Prove the following trigonometric identities.
Question:
Prove the following trigonometric identities.
If $\operatorname{cosec} \theta+\cot \theta=m$ and $\operatorname{cosec} \theta-\cot \theta=n$, prove that $m n=1$
Solution:
Given:
$\operatorname{cosec} \theta+\cot \theta=m$
$\operatorname{cosec} \theta-\cot \theta=n$
We have to prove $m n=1$
We know that, $\sin ^{2} \theta+\cos ^{2} \theta=1$
Multiplying the two equations, we have
$(\operatorname{cosec} \theta+\cot \theta)(\operatorname{cosec} \theta-\cot \theta)=m n$
$\Rightarrow\left(\frac{1}{\sin \theta}+\frac{\cos \theta}{\sin \theta}\right)\left(\frac{1}{\sin \theta}-\frac{\cos \theta}{\sin \theta}\right)=m n$
$\Rightarrow \quad\left(\frac{1+\cos \theta}{\sin \theta}\right)\left(\frac{1-\cos \theta}{\sin \theta}\right)=m n$
$\Rightarrow \quad \frac{(1+\cos \theta)(1-\cos \theta)}{\sin ^{2} \theta}=m n$
$\Rightarrow \quad \frac{1-\cos ^{2} \theta}{\sin ^{2} \theta}=m n$
$\Rightarrow \quad \frac{\sin ^{2} \theta}{\sin ^{2} \theta}=m n$
$\Rightarrow \quad 1=m n$
$\Rightarrow \quad m n=1$
Hence proved. | 2023-03-25 14:07:55 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9727165102958679, "perplexity": 1899.4269442206657}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296945333.53/warc/CC-MAIN-20230325130029-20230325160029-00065.warc.gz"} |
http://www.verycomputer.com/18_af64f18735893daa_1.htm | ## inserting pictures
### inserting pictures
I am trying to create a document that has some pages that require more
than three pages per page. I am trying to use:
\begin{figure}
\begin{minipage}[h]{0.5\linewidth}
...
\end{minipage}
\end{figure}
When I do this*only allows me to only put 3-4 pages per page then
it begins to put the figures on the next page. Is there any way to
force*to put the pictures where I want.
Jason Kovach
### inserting pictures
Dear Jason and the all the rest
> I am trying to create a document that has some pages that require more
> than three pages per page. I am trying to use:
> \begin{figure}
> \begin{minipage}[h]{0.5\linewidth}
> ...
> \end{minipage}
> \end{figure}
> When I do this*only allows me to only put 3-4 pages per page then
> it begins to put the figures on the next page. Is there any way to
> force*to put the pictures where I want.
You might give it a try with the follwoing command in the preamble of your
document:
\renewcommand{\topfraction}{.9} % max. fraction of floats
on top of page (was .7)
setcounter{bottomnumber}{5} % max. nr. of floats on
top of page (was 1)
\renewcommand{\bottomfraction}{.9} % max. fraction of floats on
top of page (was .3)
\setcounter{totalnumber}{10} % max. nr. of floats
per page (was 3
\renewcommand{\textfraction}{.05} % min. fraction of text on
page (was .2)
\renewcommand{\floatpagefraction}{.05} % min. fraction of page that
has to be used for floats (was .5)
\renewcommand{\topfraction}{0.85}
\renewcommand{\bottomfraction}{0.65}
\renewcommand{\floatpagefraction}{.7}
You might get some pretty ugly effects with these settings (I found them
like this in a book), but you are free to play around with them. You might
only need 'totalnumber' one.
Greetings, Jacco
### inserting pictures
Dear Jason and the all the rest
> I am trying to create a document that has some pages that require more
> than three pages per page. I am trying to use:
> \begin{figure}
> \begin{minipage}[h]{0.5\linewidth}
> ...
> \end{minipage}
> \end{figure}
> When I do this*only allows me to only put 3-4 pages per page then
> it begins to put the figures on the next page. Is there any way to
> force*to put the pictures where I want.
You might give it a try with the follwoing command in the preamble of your
document:
\renewcommand{\topfraction}{.9} % max. fraction of floats
on top of page (was .7)
setcounter{bottomnumber}{5} % max. nr. of floats on
top of page (was 1)
\renewcommand{\bottomfraction}{.9} % max. fraction of floats on
top of page (was .3)
\setcounter{totalnumber}{10} % max. nr. of floats
per page (was 3
\renewcommand{\textfraction}{.05} % min. fraction of text on
page (was .2)
\renewcommand{\floatpagefraction}{.05} % min. fraction of page that
has to be used for floats (was .5)
\renewcommand{\topfraction}{0.85}
\renewcommand{\bottomfraction}{0.65}
\renewcommand{\floatpagefraction}{.7}
You might get some pretty ugly effects with these settings (I found them
like this in a book), but you are free to play around with them. You might
only need 'totalnumber' one.
Greetings, Jacco
### inserting pictures
>*only allows me to only put 3-4 pages per page
Pictures per page?
have a look at subfigure.sty and subfloat.sty. And read epslatex.ps.
Happy TeXing!
--
Axel Reichert -- http://www.veryComputer.com/
Hello everyone!
I want to insert jpegs in my latex-document, but \input does not work.
where have i to copy the file, so that latex will find it?
or am i just using the wrong order? | 2020-11-28 07:57:37 | {"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8849126696586609, "perplexity": 13146.84019769951}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-50/segments/1606141195198.31/warc/CC-MAIN-20201128070431-20201128100431-00299.warc.gz"} |
https://cds.cern.ch/collection/CMS%20Conference%20Reports?ln=ru | # CMS Conference Reports
Последние добавления:
2022-09-20
12:20
Dijet events with large rapidity separation in proton-proton collisions at $\sqrt{s} = 2.76$ TeV with CMS detector / Egorov, Anatolii (St. Petersburg, INP) /CMS Collaboration The new search for Balitsky-Fadin-Kuraev-Lipatov (BFKL) evolution effects is performed at the Large Hadron Collider by the Compact Muon Solenoid experiment. The cross sections for inclusive and Mueller-Navelet dijet production are measured as a function of the rapidity separation between the jets in proton-proton collisions at $\sqrt{s} = 2.76\unit{TeV}$ for jets with transverse momentum $p_{\perp} > 35\unit{GeV}$ and rapidity $\vert y\vert < 4.7$. [...] CMS-CR-2022-152.- Geneva : CERN, 2022 - 16 p. Fulltext: PDF; In : 72nd International Meeting on Nuclear Spectroscopy and Nuclear Structure : Fundamental problems and applications (Nucleus-2022), Moscow, Russia, 11 - 16 Jul 2022
2022-09-20
12:17
Upgraded CMS Fast Beam Condition Monitor for LHC Run 3 Online Luminosity and Beam Induced Background Measurements / Wanczyk, Joanna Malgorzata (CERN) /CMS Collaboration The fast Beam Condition Monitor (BCM1F) for the CMS experiment at the LHC was upgraded for precision luminosity measurement in the demanding conditions foreseen for LHC Run 3. BCM1F has been rebuilt with new silicon diodes, produced on the CMS Phase 2 Outer Tracker PS silicon wafers. [...] CMS-CR-2022-144.- Geneva : CERN, 2022 - p. Fulltext: PDF; In : 11th International Beam Instrumentation Conference (IBIC 2022), Cracow, Poland, 11 - 15 Sep 2022
2022-09-19
12:40
Measurement of nonprompt and prompt $D^0$ azimuthal anisotropy in Pb-Pb collisions at $\sqrt{s_{_{\mathrm{NN}}}} =$ 5.02 TeV / Stojanovic, Milan (Purdue U.) /CMS Collaboration Heavy quarks are primarily produced via initial hard scatterings, and thus carry information about the early stages of the Quark-Gluon Plasma (QGP). Measurements of the azimuthal anisotropy of the final-state heavy flavor hadrons provide information about the initial collision geometry, its fluctuation, and more importantly, the mass dependence of energy loss in QGP. [...] CMS-CR-2022-141.- Geneva : CERN, 2022 - 5 p. Fulltext: PDF; In : The 20th International Conference on Strangeness in Quark Matter, Busan, Kr, 13 - 17 Jun 2022
2022-09-19
12:40
Studies of heavy quark diffusion in QGP with nonprompt $D^0$ collectivity and jet-$D^0$ angular correlations in PbPb collisions / Stojanovic, Milan (Purdue U.) /CMS Collaboration Measurements of the correlations of the final-state heavy flavor hadrons are of great interest since they provide information about the initial collision geometry and its fluctuation. More importantly, those measurements could reveal the mass dependence of parton energy loss and quark diffusion in the Quark-Gluon Plasma (QGP). [...] CMS-CR-2022-140.- Geneva : CERN, 2022 - 7 p. Fulltext: PDF; In : XXIXth International Conference on Ultra-relativistic Nucleus-Nucleus Collisions, Krakow, Online, Pl, 4 - 10 Apr 2022
2022-09-19
12:40
Measurement of differential cross sections for the production of top quark pairs and of additional jets in pp collisions at $\sqrt{s} = 13$ TeV / Petersen, Henriette Aarup (DESY) /CMS Collaboration Differential cross sections for top quark pair ($\textrm{t}\bar{\textrm{t}}$) production are measured in proton-proton collisions at a centre-of-mass energy of 13 TeV using a sample of events containing two oppositely charged leptons. The data were recorded with the CMS detector at the LHC and correspond to an integrated luminosity of 138 $\textrm{fb}^{-1}$. [...] CMS-CR-2022-137.- Geneva : CERN, 2022 - 7 p. Fulltext: PDF; In : The Tenth Annual Large Hadron Collider Physics (LHCP2022), Online, Online, 16 - 20 May 2022
2022-09-19
12:34
Higgs rare decays at ATLAS and CMS / Dordevic, Milos (VINCA Inst. Nucl. Sci., Belgrade) /ATLAS and CMS Collaborations More than a decade has passed since the start of the operation of the Large Hadron Collider at CERN and the discovery of the Higgs boson by the ATLAS and CMS Collaborations. The so far observed Higgs boson decay modes cover around 90$\%$ of the total Higgs boson width. [...] CMS-CR-2022-121.- Geneva : CERN, 2022 - 6 p. Fulltext: PDF; In : The Tenth Annual Large Hadron Collider Physics (LHCP2022), Online, Online, 16 - 20 May 2022
2022-09-12
15:04
Measurement and QCD analysis of double-differential inclusive jet cross sections at 13 TeV / Makela, Toni (DESY) /CMS Collaboration A measurement of the inclusive jet production in proton-proton collisions at the LHC at $\sqrt{s}=13$~TeV is presented. The double-differential cross sections are measured as a function of the jet transverse momentum $p_\mathrm{T}$ and the absolute jet rapidity $\lvert y \rvert$. [...] CMS-CR-2022-132.- Geneva : CERN, 2022 - 6 p. Fulltext: PDF; In : The Tenth Annual Large Hadron Collider Physics (LHCP2022), Online, Online, 16 - 20 May 2022
2022-09-12
15:03
Measurement of $\text{t}\bar{\text{t}}$ and single top production cross sections in CMS / Muller, Denise (Vrije U., Brussels) /CMS Collaboration With a delivered luminosity of around 140 fb$^{-1}$ at a center-of-mass energy of 13 TeV in the CMS experiment during Run 2, almost 300 million top quarks and top antiquarks were produced. As top quarks can be produced through either strong or electroweak interaction, they are a suitable tool to probe the strong and electroweak sectors of the standard model. [...] CMS-CR-2022-126.- Geneva : CERN, 2022 - 7 p. Fulltext: PDF; In : 41st International Conference on High Energy Physics, Bologna, Italy, 6 - 13 Jul 2022
2022-09-12
15:00
First measurement of the forward rapidity gap distribution in pPb collisions at $\sqrt{s_{_{\mathrm{NN}}}} = 8.16$~TeV with the CMS experiment / Sosnov, Dmitry (St. Petersburg, INP) /CMS Collaboration We present the forward rapidity gap spectra from proton-lead collisions for both pomeron-lead and pomeron-proton topologies. The analysis is performed over 10.4 units of pseudorapidity at a nucleon-nucleon center-of-mass energy of $8.16$~TeV, i.e. [...] CMS-CR-2022-114.- Geneva : CERN, 2022 - 5 p. Fulltext: PDF; In : XXIXth International Conference on Ultra-relativistic Nucleus-Nucleus Collisions, Krakow, Online, Pl, 4 - 10 Apr 2022
2022-09-12
14:17
Upgrade of the CMS Barrel Electromagnetic Calorimeter for the High Luminosity LHC / Cooke, Charlotte Ann (Rutherford) /CMS Collaboration The high luminosity upgrade of the LHC (HL-LHC) at CERN will provide unprecedented instantaneous and integrated luminosities of up to $7.5\times10^{34}$~cm$^{-2}$s$^{-1}$ and 4500~fb$^{-1}$, respectively, from 2029 onwards. To cope with the extreme conditions of up to 200 collisions per bunch crossing, and increased data rates, the on- and off-detector electronics of the CMS electromagnetic calorimeter (ECAL) will be replaced. [...] CMS-CR-2022-097.- Geneva : CERN, 2022 - 11 p. Fulltext: PDF; In : 19th International Conference on Calorimetry in Particle Physics, Brighton, United Kingdom, 16 - 20 May 2022 | 2022-09-26 20:23:15 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9323705434799194, "perplexity": 5912.9450188574465}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030334915.59/warc/CC-MAIN-20220926175816-20220926205816-00700.warc.gz"} |
https://electronics.stackexchange.com/questions/237311/how-to-choose-adc-anti-aliasing-filter-cut-off-frequency-in-thermocouple-reading/237321 | # How to choose ADC Anti-aliasing filter cut-off frequency in thermocouple readings?
I am reading a thermocouple in an industrial environment. Thermocouple output is read twice a second. ADC performs 50/60 Hz filtering internally.
I had the intention of using a filter with a cutoff frequency between 10-100 hz (not Kilohertz) to completely filter out most frequencies, but Many circuits that I have seen have simple RC anti-aliasing filters with cutoff frequencies higher than 1khz. Am I missing something here or is this simply due to those circuits having a much higher sampling rate? In other words, what problems could arise if I use a filter with very low cutoff frequency?
Sample filter circuitry shown in red below:
It depends what you would like to do with your thermocouple readings.
The Nyquist–Shannon sampling theorem basically says that you need to ensure that the highest frequency of your signal is below half the sampling frequency (fn).
What does it means and do you need to comply with it?
If you comply with it, you ensure that you won't suffer from aliasing problems and it guaranties that your digitized signal contains enough information to perfectly recover the initial analog signal. Most digital signal processing require that the input signal is not aliased to compute meaningful results. If you would like to perform digital filtering or even a PID, it is strongly recommended to ensure compliance with the Nyquist–Shannon sampling theorem or you may encounter strange behaviors.
If you want to comply with the Nyquist–Shannon sampling theorem you need to ensure that the signal content at fn and above is zero. This is not possible because all real world filters "attenuates" and don't "discard"...
You can approximate this by having a filter that sufficiently rejects the frequencies above fn. "sufficiently" depends on the noise level and the application, but let's choose 40dB here.
If you use a simple RC filter (a first order filter) then you have a filter roll-off of -20dB/decade. Thus the cutoff frequency (fc) of your filter need to be 2 decades (100 times) smaller than 2Hz to ensure at least 40dB of attenuation after fn.
$$fc = 2Hz/100 = 0.02 Hz$$
Well, this is not practical !
You could use higher order filters with 40dB/decades. then:
$$fc = 2Hz/10 = 0.2 Hz$$
This is better but still not easily done. Here the best way would be to acquire your thermocouple readings much faster, let's say at 200kHz. A first order anti aliasing filter could be set at 1kHz to ensure -40dB at fn (100kHz).
It is much easier to build a 1kHz filter than a 0.1Hz one !
Then you apply a 100 times decimation digital filter and you get your 2Hz signal back.
Rarely, but sometimes you know your noise sufficiently well that you can match a rejection filter to it.
For instance, you acquire at 2Hz. The Nyquist–Shannon sampling theorem said that you have to ensure to have nothing above 1Hz. If you know by design that nothing can couple with your sensor and it won't be anything above 1Hz, then you don't need any filtering. If you know that only a 50/60Hz signal is likely to be present, then a 50/60Hz rejection filter is enough.
As always, the goal is to ensure that you have nothing above fs. It could be by design (shielding, slow thermocouple, noise free environment, ...) or by filtering.
• Your answer got me thinking. I am using an AD7124-8 chip that has a digital filter. It is my understanding that the actual sampling rate of the ADC is much higher (tens of Kilohertz) needed for the digital filter to function properly and the ADC then gives out some preset data rate like 2 SPS. Thus I am assuming that the ADC automatically performs the decimation when outputting 2 SPS. Is this correct? So it suffices for me to use the original ADC sampling rate of tens of Kilohertz to calculate cutoff frequency instead of the output data rate of 10 SPS? – hadez May 31 '16 at 8:02
• Maybe a stupid question, but: Since the thermocouple’s signal will be pretty slow (in the order of seconds) is all this hassle with filters really necessary? – Michael May 31 '16 at 10:53
• @hadez: About the AD7124. They are using digital filtering internally, thus yes, you can use the 2SPS output directly. But as written in the AD7124 datasheet, you have to filter the thermocouple signal at the input of the AD7124. The datasheet is huge, but complete. Sure all the details are explained somewhere, or in an Application Note available on the website. – Blup1980 Jun 1 '16 at 5:10
• @Michael: Yes, if the thermocouple signal is very slow and you are sure there is no outside noise coupled to it then it's ok to live without the filter. Filtering is just one way to remove the signal above fn. But if there is noting above fn by design, no need for filtering. – Blup1980 Jun 1 '16 at 5:13
• From the AD7124 datasheet page 72 The external antialias filter is omitted for clarity. However, such a filter is required to reject any interference at the modulator frequency and multiples of the modulator frequency. In addition, some filtering may be needed for EMI purposes. Both the analog inputs and the reference inputs can be buffered, which allows the user to connect any RC combination to the reference or analog input pins. – Blup1980 Jun 1 '16 at 5:21
For thermocouples, the big problem is noise pickup rather than intrinsic sensor noise. Furthermore, most thermocouples, particularly when coupled to a physical object, will have thermal time constants on the order of seconds or greater. So your "best" choice for an ADC with a 2 Hz sample rate is on the order of 1 Hz. if your shielding is good, you can get away with rather higher cutoff frequencies. | 2021-05-16 02:57:47 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5859861373901367, "perplexity": 1114.8823557789676}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-21/segments/1620243991659.54/warc/CC-MAIN-20210516013713-20210516043713-00250.warc.gz"} |
https://msp.org/apde/2014/7-7/p05.xhtml | Vol. 7, No. 7, 2014
Recent Issues
The Journal About the Journal Editorial Board Subscriptions Editors’ Interests Scientific Advantages Submission Guidelines Submission Form Editorial Login Ethics Statement ISSN: 1948-206X (e-only) ISSN: 2157-5045 (print) Author Index To Appear Other MSP Journals
Local and nonlocal boundary conditions for $\mu$-transmission and fractional elliptic pseudodifferential operators
Gerd Grubb
Vol. 7 (2014), No. 7, 1649–1682
Abstract
A classical pseudodifferential operator $P$ on ${ℝ}^{n}$ satisfies the $\mu$-transmission condition relative to a smooth open subset $\Omega$ when the symbol terms have a certain twisted parity on the normal to $\partial \Omega$. As shown recently by the author, this condition assures solvability of Dirichlet-type boundary problems for $P$ in full scales of Sobolev spaces with a singularity ${d}^{\mu -k}$, $d\left(x\right)=dist\left(x,\partial \Omega \right)$. Examples include fractional Laplacians ${\left(-\Delta \right)}^{a}$ and complex powers of strongly elliptic PDE.
We now introduce new boundary conditions, of Neumann type, or, more generally, nonlocal type. It is also shown how problems with data on ${ℝ}^{n}\setminus \Omega$ reduce to problems supported on $\overline{\Omega }$, and how the so-called “large” solutions arise. Moreover, the results are extended to general function spaces ${F}_{p,q}^{s}$ and ${B}_{p,q}^{s}$, including Hölder–Zygmund spaces ${B}_{\infty ,\infty }^{s}$. This leads to optimal Hölder estimates, e.g., for Dirichlet solutions of ${\left(-\Delta \right)}^{a}u=f\in {L}_{\infty }\left(\Omega \right)$, $u\in {d}^{a}{C}^{a}\left(\overline{\Omega }\right)$ when $0, $a\ne \frac{1}{2}$.
Keywords
fractional Laplacian, boundary regularity, Dirichlet and Neumann conditions, large solutions, Hölder–Zygmund spaces, Besov–Triebel–Lizorkin spaces, transmission properties, elliptic pseudodifferential operators, singular integral operators
Mathematical Subject Classification 2010
Primary: 35S15
Secondary: 45E99, 46E35, 58J40 | 2019-11-22 00:56:57 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 18, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7148311734199524, "perplexity": 2048.8893777279386}, "config": {"markdown_headings": false, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-47/segments/1573496671053.31/warc/CC-MAIN-20191121231600-20191122015600-00190.warc.gz"} |
https://www.qb365.in/materials/stateboard/11th-maths-two-dimensional-analytical-geometry-one-mark-question-and-answer-9114.html | #### Two Dimensional Analytical Geometry One Mark Question
11th Standard
Reg.No. :
•
•
•
•
•
•
Maths
Time : 00:30:00 Hrs
Total Marks : 10
10 x 1 = 10
1. The equation of the locus of the point whose distance from y-axis is half the distance from origin is
(a)
x2+3y=0
(b)
x2-3y2=0
(c)
3x2+y2=0
(d)
3x2-y2=0
2. Which of the following equation is the locus of (at2; 2at)
(a)
$\frac{x^2}{a^2}-\frac{y^2}{b^2}=1$
(b)
$\frac{x^2}{a^2}+\frac{y^2}{b^2}=1$
(c)
x2+y2=a2
(d)
y2=4ax
3. The slope of the line which makes an angle 45 with the line 3x- y = -5 are
(a)
1,-1
(b)
$\frac{1}{2},-2$
(c)
$1,\frac{1}{2}$
(d)
$2,-\frac{1}{2}$
4. Equation of the straight line that forms an isosceles triangle with coordinate axes in the I-quadrant with perimeter 4 + 2$\sqrt{2}$ is
(a)
x+y+2=0
(b)
x+y-2=0
(c)
$x+y-\sqrt{2}=0$
(d)
$x+y+\sqrt{2}=0$
5. The intercepts of the perpendicular bisector of the line segment joining (1, 2) and (3,4) with coordinate axes are
(a)
5,-5
(b)
5,5
(c)
5,3
(d)
5,-4
6. The equation of the bisectors of the angle between the co-ordinate axes are
(a)
x+y=0
(b)
x-y=0
(c)
x$\pm$y=0
(d)
x=0
7. The equation of a line which makes an angle of 135° with positive direction of x-axis and passes through the point (1,1) is
(a)
x+y=2
(b)
x-y=0
(c)
$2\sqrt {2x}-\sqrt {2y}=0$
(d)
x-3y=0
8. The equation of the straight line bisecting the line segment joining the points (2,4) and (4,2) and making an angle of 450 with positive direction of x-axis is
(a)
x+y=6
(b)
x-y=0
(c)
x-y=6
(d)
x+y=0
9. The equation of median from verten B of the triangle $\triangle ABC$ the co-ordinates of whose vertices are A(-1,6)B(-3,-9)C(5,-8)
(a)
29x+4y+5=0
(b)
8x-5y-21=0
(c)
13x+14y+47=0
(d)
x+y-7=0
10. The equation of the straight line which passes through the point (2,4) and have intercept on the axes equal in magnitude but opposite in sign is
(a)
x-y=2
(b)
x-y+2=0
(c)
x-y+1=0
(d)
x-y-1=0 | 2019-10-15 07:06:43 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7302079796791077, "perplexity": 273.4210762764721}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-43/segments/1570986657586.16/warc/CC-MAIN-20191015055525-20191015083025-00128.warc.gz"} |
https://fitur.roh.engineering/ | Wrapper for computing parameters for univariate distributions using MLE. It creates an object that stores d, p, q, r functions as well as parameters and statistics for diagnostics. Currently supports automated fitting from base and actuar packages. A manually fitting distribution fitting function is included to support directly specifying parameters for any distribution from ancillary packages.
## Installation
You can install fitur from CRAN or github with:
install.packages('fitur')
devtools::install_github("tomroh/fitur")
## Example
This is a basic example to fit a poisson distribution with estimated parameters and return the functions for it.
set.seed(562)
x <- rpois(100, 1)
fittedPois <- fit_univariate(x, 'pois', 'discrete')
fittedPois$dpois(1) fittedPois$ppois(1)
fittedPois$qpois(.5) fittedPois$rpois(100)
fittedPois\$parameters | 2023-01-30 15:09:39 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.3607665002346039, "perplexity": 5356.719915463323}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-06/segments/1674764499819.32/warc/CC-MAIN-20230130133622-20230130163622-00159.warc.gz"} |
https://biharboardsolutions.com/bihar-board-12th-physics-objective-answers-chapter-7-in-english/ | # Bihar Board 12th Physics Objective Answers Chapter 7 Alternating Current
Question 1.
Alternating voltage (V) is represented by the equation
(a) $$V(t)=V_{m}e^{\omega t}$$
(b) V(t) = Vm sinωt
(c) V(t) = Vm cosωt
(d) V(t) = Vm tanωt
where Vm is the peak voltage
Answer: (b) V(t) = Vm sinωt
Question 2.
A 100 Ω resistor is connected to a 220 V, 50 Hz ac supply. The rms value of current in the circuit is
(a) 1.56 A
(b) 1.56 mA
(c) 2.2 A
(d) 2.2 mA
Answer: (c) 2.2 A
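Solution:
(c) $$I_{\mathrm{rms}}=\frac{V_{\mathrm{rms}}}{R}=\frac{220}{100}=2.2\ \mathrm{A}$$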
Question 3.
The peak voltage of an ac supply is 440 V, then its rms voltage is
(a) 31.11 V
(b) 311.1 V
(c) 41.11V
(d) 411.1V
Answer: (b) 311.1 V
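Solution:
(b) $$V_{\mathrm{rms}}=\frac{V_{m}}{\sqrt{2}}=\frac{440}{\sqrt{2}}\approx 311.1\ \mathrm{V}$$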
Question 4.
The rms value of current in an ac circuit is 25 A, then peak current is
(a) 35.36 mA
(b) 35.36 A
(c) 3.536 A
(d) 49.38 A
Answer: (b) 35.36 A
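Solution:
(b) $$I_{m}=\sqrt{2}\,I_{\mathrm{rms}}=\sqrt{2}\times 25\approx 35.36\ \mathrm{A}$$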
Question 5.
A light bulb is rated at 100 W for a 220 V ac supply. The resistance of the bulb is
(a) 284 Ω
(b) 384 Ω
(c) 484 Ω
(d) 584 Ω
Answer: (c) 484 Ω
Solution:
(c) Here, P = 100 W, Vrms = 220 V
Resistance of the bulb is
$$R=\frac{V_{\mathrm{rms}}^{2}}{P}=\frac{(220)^{2}}{100}=484\ \Omega$$
Question 6.
An ac source is of $$\frac{200}{\sqrt{2}}\ \mathrm{V}$$, 50 Hz. The value of voltage
after $$\frac{1}{600}$$ s from the start is
(a) 200V
(b) $$\frac{200}{\sqrt{2}}\ \mathrm{V}$$
(c) 100V
(d) 50V
Answer: (c) 100 V
Solution:
(c) $$V_{m}=\sqrt{2}\,V_{\mathrm{rms}}=\sqrt{2}\times\frac{200}{\sqrt{2}}=200\ \mathrm{V}$$ and ω = 2πυ = 100π rad s⁻¹
$$V=V_{m}\sin\omega t=200\sin\left(100\pi\times\frac{1}{600}\right)=200\sin\frac{\pi}{6}=100\ \mathrm{V}$$
Question 7.
An ac source of voltage V = Vm sinωt is connected across the resistance R as shown in figure. The phase relation between current and voltage for this circuit is
(a) both are in phase
(b) both are out of phase by 90°
(c) both are out of phase by 120°
(d) both are out of phase by 180°
Answer: (a) both are in phase
Question 8.
In the case of an inductor
(a) voltage lags the current by $$\frac{\pi}{2}$$
(b) voltage leads the current by $$\frac{\pi}{2}$$
(c) voltage lags the current by $$\frac{\pi}{3}$$
(d) voltage leads the current by $$\frac{\pi}{4}$$
Answer: (b) voltage leads the current by $$\frac{\pi}{2}$$
Question 9.
An ideal inductor is in turn put across 220 V, 50 Hz and 220 V, 100 Hz supplies. The current flowing through it in the two cases will be
(a) equal
(b) different
(c) zero
(d) infinite
Answer: (b) different
Question 10.
An inductor of 30 mH is connected to a 220 V, 100 Hz ac source. The inductive reactance is
(a) 1058 Ω
(b) 12.64 Ω
(c) 18.85 Ω
(d) 22.67 Ω
(c) 18.85 Ω
Solution:
(c) Here, L = 30 mH = 30 × 10-3 H; Vrms = 220 V,
υ = 100 Hz
Inductive reactance XL = 2πυL
= 2 × 3.14 × 100 × 30 × 10-3
= 18.85 Ω
Question 11.
A 44 mH inductor is connected to 220 V, 50 Hz ac supply. The rms value of the current in the circuit is
(a) 12.8 A
(b) 13.6 A
(c) 15.9 A
(d) 19.5 A
(c) 15.9 A
Solution:
(c) Here, L = 44 mH = 44 × 10-3 H; Vrms= 220V,
υ = 50 Hz
The inductive reactance is XL = ωL
= 2πυL = 2 × 3.14 × 50 × 44 × 10-3
= 13.82 Ω
∴ $$\quad I_{\mathrm{rms}}=\frac{V_{\mathrm{rms}}}{X_{L}}=\frac{220}{13.82}=15.9 \mathrm{A}$$
Question 12.
A 5 μF capacitor is connected to a 200 V, 100 Hz ac source. The capacitive reactance is
(a) 212 Ω
(b) 312 Ω
(c) 318 Ω
(d) 412 Ω
(c) 318 Ω
Question 13.
If a capacitor of 8 μF is connected to a 220 V, 100 Hz ac source and the current passing through it is 65 mA, then the rms voltage across it is
(a) 129.4 V
(b) 12.94 V
(c) 1.294 V
(d) 15 V
(b) 12.94 V
Solution:
(b) Here, Vrms = 220 V,Irms = 65 mA = 0.065 A
C = 8 μF = 8 × 10-6 F, υ = 100 Hz
Capacitive reactance,
$$X_{C}=\frac{1}{2\pi\upsilon C}=\frac{1}{2\times3.14\times100\times8\times10^{-6}}\approx199\ \Omega$$
Then rms voltage across the capacitor is
VCrms = Irms XC = 0.065 × 199 = 12.94 V
Question 14.
Phase difference between voltage and current in a capacitor in an ac circuit is
(a) π
(b) π/2
(c) 0
(d) π/3
(b) π/2
Question 15.
A 30 μF capacitor is connected to a 150 V, 60 Hz ac supply. The rms value of current in the circuit is
(a) 17 A
(b) 1.7 A
(c) 1.7 mA
(d) 2.7 A
(b) 1.7 A
Solution:
(b) Here, C = 30 × 10-6 F, Vrms = 150 V, υ = 60 Hz
Capacitive reactance
$$X_{C}=\frac{1}{2\pi\upsilon C}=\frac{1}{2\times3.14\times60\times30\times10^{-6}}\approx88.4\ \Omega$$
$$I_{\mathrm{rms}}=\frac{V_{\mathrm{rms}}}{X_{C}}=\frac{150}{88.4}\approx1.7\ \mathrm{A}$$
Question 16.
A 60 μF capacitor is connected to a 110 V (rms), 60 Hz ac supply. The rms value of current in the circuit is
(a) 1.49 A
(b) 14.9 A
(c) 2.49 A
(d) 24.9 A
(c) 2.49 A
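No working is shown for this one; a quick check (my calculation, not the source's):
$$X_{C}=\frac{1}{2\pi\upsilon C}=\frac{1}{2\times3.14\times60\times60\times10^{-6}}\approx44.2\ \Omega,\quad I_{\mathrm{rms}}=\frac{V_{\mathrm{rms}}}{X_{C}}=\frac{110}{44.2}\approx2.49\ \mathrm{A}$$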
Question 17.
In which of the following circuits the maximum power dissipation is observed ?
(a) Pure capacitive circuit
(b) Pure inductive circuit
(c) Pure resistive circuit
(d) None of these
(c) Pure resistive circuit
Question 18.
When an ac voltage of 220 V is applied to the capacitor C, then
(a) the maximum voltage between plates is 220 V.
(b) the current is in phase with the applied voltage.
(c) the charge on the plate is not in phase with the applied voltage.
(d) power delivered to the capacitor per cycle is zero.
(d) power delivered to the capacitor per cycle is zero.
Question 19.
In the series LCR circuit shown the impedance is
(a) 200 Ω
(b) 100 Ω
(c) 300 Ω
(d) 500 Ω
(d) 500 Ω
Solution:
(d) Here, L = 1 H, C = 20 μF = 20 × 10-6 F; R = 300 Ω, υ = 50/π Hz
$$\omega=2\pi\upsilon=100\ \mathrm{rad/s},\quad X_{L}=\omega L=100\ \Omega,\quad X_{C}=\frac{1}{\omega C}=500\ \Omega$$
$$Z=\sqrt{R^{2}+\left(X_{L}-X_{C}\right)^{2}}=\sqrt{300^{2}+400^{2}}=500\ \Omega$$
Question 20.
A 100 μF capacitor in series with a 40 Ω resistor is connected to a 100 V, 60 Hz supply. The maximum current in the circuit is
(a) 2.65 A
(b) 2.75 A
(c) 2.85 A
(d) 2.95 A
(d) 2.95 A
Solution:
(d) Here, C = 100 μF = 100 × 10-6 F = 10-4 F,
R = 40 Ω, Vrms = 100 V, υ = 60 Hz
V0 = √2Vrms = 100√2 V
$$X_{C}=\frac{1}{2\pi\upsilon C}=\frac{1}{2\pi\times60\times10^{-4}}\approx26.5\ \Omega,\quad Z=\sqrt{R^{2}+X_{C}^{2}}=\sqrt{40^{2}+26.5^{2}}\approx48\ \Omega$$
$$I_{0}=\frac{V_{0}}{Z}=\frac{100\sqrt{2}}{48}\approx2.95\ \mathrm{A}$$
Question 21.
In series LCR circuit, the phase angle between supply voltage & current is
$$\tan \phi=\frac{X_{L}-X_{C}}{R}$$
Question 22.
In a circuit L, C and R are connected in series with an alternating voltage source of frequency υ . The current leads the voltage by 45°. The value of C is
(d) $$\frac{1}{2 \pi v(2 \pi v L+R)}$$
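The source gives no derivation; here is a sketch (mine). The current leads the voltage, so the circuit is net capacitive:
$$\tan 45^{\circ}=\frac{X_{C}-X_{L}}{R}=1\ \Rightarrow\ \frac{1}{2 \pi v C}=2 \pi v L+R\ \Rightarrow\ C=\frac{1}{2 \pi v(2 \pi v L+R)}$$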
Question 23.
At resonance frequency the impedance in series LCR circuit is
(a) maximum
(b) minimum
(c) zero
(d) infinity
(b) minimum
Question 24.
At resonant frequency the current amplitude in series LCR circuit is
(a) maximum
(b) minimum
(c) zero
(d) infinity
(a) maximum
Question 25.
The resonant frequency of a series LCR circuit with L = 2.0 H, C = 32 μF and R = 10 Ω is
(a) 20 Hz
(b) 30 Hz
(c) 40 Hz
(d) 50 Hz
(a) 20 Hz
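As a quick check of the arithmetic (my working, not the source's):
$$\upsilon_{0}=\frac{1}{2 \pi \sqrt{L C}}=\frac{1}{2 \pi \sqrt{2 \times 32 \times 10^{-6}}}=\frac{1}{2 \pi \times 8 \times 10^{-3}} \approx 19.9\ \mathrm{Hz} \approx 20\ \mathrm{Hz}$$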
Question 26.
The Q factor of a series LCR circuit with L = 2 H, C=32 μF and R = 10 Ω is
(a) 15
(b) 20
(c) 25
(d) 30
(c) 25
Question 27.
A series LCR circuit has R = 5 Ω, L = 40 mH and C = 1 μF, the bandwidth of the circuit is
(a) 10 Hz
(b) 20 Hz
(c) 30 Hz
(d) 40 Hz
(b) 20 Hz
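Checking the number (mine): the angular bandwidth of a series LCR circuit is R/L, so
$$\Delta \upsilon=\frac{R}{2 \pi L}=\frac{5}{2 \pi \times 40 \times 10^{-3}} \approx 19.9\ \mathrm{Hz} \approx 20\ \mathrm{Hz}$$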
Question 28.
In LCR-circuit if resistance increases, quality factor
(a) increases finitely
(b) decreases finitely
(c) remains constant
(d) none of these
(b) decreases finitely
Question 29.
A series resonant LCR circuit has a quality factor (Q- factor) = 0.4. If R = 2k Ω , C=0.1 μF, then the value of inductance is
(a) 0.1 H
(b) 0.064 H
(c) 2H
(d) 5 H
(b) 0.064 H
Solution:
(b) Quality factor $$Q=\frac{1}{R} \sqrt{\frac{L}{C}} \text { or } \frac{L}{C}=(\mathrm{QR})^{2}$$
Here, Q = 0.4, R = 2k Ω = 2 × 103 Ω;
C = 0.1 μF = 0.1 × 10-6 F
∴ L = (QR)2 C
∴ L = (0.4 × 2 × 103)2 × 0.1 × 10-6 = 0.064 H
Question 30.
An alternating supply of 220 V is applied across a circuit with resistance 22 Ω and impedance 44 Ω. The power dissipated in the circuit is
(a) 1100 W
(b) 550 W
(c) 2200 W
(d) (2200/3) W
(b) 550 W
Solution:
(b) Here, V = 220 V, Resistance, R = 22 Ω
Impedance, Z = 44 Ω
Current in circuit, $$I=\frac{V}{Z}=\frac{220 \mathrm{V}}{44 \Omega}=5 \mathrm{A}$$
Power dissipated in the circuit,
p = I2R = (5)2 × 22 = 550 W
Question 31.
In a series LCR circuit, the phase difference between the voltage and the current is 45°. Then the power factor will be
(a) 0.607
(b) 0.707
(c) 0.808
(d) 1
(b) 0.707
Solution:
(b) Here, Φ = 45° In series LCR circuit,
power factor = cosΦ
cos Φ = cos 45° = $$\frac{1}{\sqrt{2}}$$ = 0.707.
Question 32.
The natural frequency (ω0) of oscillations in LC circuit is given by
(c) $$\frac{1}{\sqrt{L C}}$$
Question 33.
An LC circuit contains a 20 mH inductor and a 25 μF capacitor with an initial charge of 5 mC. The total energy stored in the circuit initially is
(a) 5 J
(b) 0.5 J
(c) 50 J
(d) 500 J
(b) 0.5 J
Solution:
(b) Here, C = 25 μF = 25 × 10-6 F,
L = 20 mH = 20 × 10-3 H,
q0 = 5 mC = 5 × 10-3 C
∴ Total energy stored in the circuit initially is
$$U=\frac{q_{0}^{2}}{2C}=\frac{\left(5\times10^{-3}\right)^{2}}{2\times25\times10^{-6}}=0.5\ \mathrm{J}$$
Question 34.
What is the mechanical equivalent of spring constant k in LC oscillating circuit ?
(a) $$\frac{1}{L}$$
(b) $$\frac{1}{C}$$
(c) $$\frac{L}{C}$$
(d) $$\frac{1}{LC}$$
(b) $$\frac{1}{C}$$
Question 35.
A transformer works on the principle of
(a) self induction
(b) electrical inertia
(c) mutual induction
(d) magnetic effect of the electrical current
(c) mutual induction
Question 36.
Transformer is used to
(a) convert ac to dc voltage
(b) convert dc to ac voltage
(c) obtain desired dc power
(d) obtain desired ac voltage and current
(d) obtain desired ac voltage and current
Question 37.
Quantity that remains unchanged in a transformer is
(a) voltage
(b) current
(c) frequency
(d) none of these
(c) frequency
Question 38.
The core of a transformer is laminated to reduce
(a) flux leakage
(b) hysteresis
(c) copper loss
(d) eddy current
(d) eddy current
Question 39.
The loss of energy in the form of heat in the iron core of a transformer is
(a) iron loss
(b) copper loss
(c) mechanical loss
(d) none of these
(a) iron loss
Question 40.
A transformer is used to light a 140 W, 24 V lamp from 240 V ac mains. If the mains current is 0.7 A, the efficiency of the transformer is
(a) 63.0%
(b) 74%
(c) 83.3%
(d) 48%
(c) 83.3%
(c) Output power = 140 W,
Input power = 240 × 0.7 = 168 W
Efficiency $$\eta=\frac{\text{output power}}{\text{input power}}\times100\%=\frac{140}{168}\times100\%=83.3\%$$
Question 41.
If the rms current in a 50 Hz ac circuit is 5 A, the value of the current 1/300 seconds after its value becomes zero is
(b) $$5 \sqrt{\frac{3}{2}} \mathrm{A}$$
Question 42.
An inductor of reactance 1Ω and a resistor of 2Ω are connected in series to the terminals of a 6 V (rms) ac source. The power dissipated in the circuit is
(a) 8 W
(b) 12 W
(c) 14.4 W
(d) 18 W
(c) 14.4 W
Solution:
$$Z=\sqrt{R^{2}+X_{L}^{2}}=\sqrt{2^{2}+1^{2}}=\sqrt{5}\ \Omega,\quad I_{\mathrm{rms}}=\frac{6}{\sqrt{5}}\ \mathrm{A},\quad P=I_{\mathrm{rms}}^{2}R=\frac{36}{5}\times2=14.4\ \mathrm{W}$$ | 2023-02-03 20:11:38 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6756579279899597, "perplexity": 6222.513003630543}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-06/segments/1674764500074.73/warc/CC-MAIN-20230203185547-20230203215547-00863.warc.gz"}
https://math.stackexchange.com/questions/1030192/logistic-differential-equation-to-model-population | # Logistic differential equation to model population
Problem Description:
The population of the world was about 5.3 billion in 1990. Birth rate in the 1990s ranged from 35 to 40 million per year and death rates ranged from 15 to 20 million per year. Let's assume that the carrying capacity for world population is 100 billion.
Write the logistic differential equation for these data. (Because the initial population is small compared to the carrying capacity, you can take k to be an estimate of the initial relative growth rate.)
My calculation:
Because it's a logistic model and the carrying capacity is 100 billion, I wrote the differential equation as:
$\frac{dy}{dt} = ky\left(1-\frac{y}{100}\right)$, where 100 denotes 100 billion, the carrying capacity.
My question:
How can we know the value of k?
## 1 Answer
You have the hint: "you can take $k$ to be an estimate of the initial relative growth rate".
Since $\text{growth rate} = \text{birth rate} - \text{death rate}$, the growth rate ranges from $15$ to $35$ million per year. Hence, the average growth rate is $\frac{15+35}{2} = 25$ million per year. Therefore, the initial relative growth rate is $k = \frac{25}{5300} = 0.0047169$ per year ($25$ million relative to a population of $5300$ million).
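Putting the pieces together (with $y$ in billions and $t$ in years, just restating the result as a single equation):
$$\frac{dy}{dt} = 0.0047\, y\left(1-\frac{y}{100}\right),\qquad y(0)=5.3$$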
I think it is what you need. | 2022-01-20 09:51:47 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8916308879852295, "perplexity": 117.98060108727391}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-05/segments/1642320301730.31/warc/CC-MAIN-20220120065949-20220120095949-00065.warc.gz"} |
http://ask.programmershare.com/76_7917482/ | # AIR - Supress Error messages in debug app?
Author: magiciany Date: 2011-10-10
In a packaged AIR app, I need to add a file called debug to C:\Program Files (x86)\The App\META-INF\AIR\debug for the software to function 'correctly' - I think it is a database error but I cannot find any way to solve it at the moment. Is it possible to add this debug flag, but to suppress/hide any modal error boxes that may appear from another possible yet unknown bug? | 2017-10-16 21:54:42 | {"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9011223316192627, "perplexity": 2750.3568011135108}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-43/segments/1508187820466.2/warc/CC-MAIN-20171016214209-20171016234209-00128.warc.gz"}
https://stats.stackexchange.com/questions/205136/using-a-zero-slope-coefficient-predictor-variable-in-multiple-regression | # Using a zero slope coefficient predictor variable in multiple regression
I ran multiple regression with three predictor variables, which according to the theory I am using, should all predict the dependent variable.
However, one of the variable's partial plot shows what looks like a zero slope. The standardised coefficient is -.032 for this variable.
Below is the partial plot. Does it make sense to include this in the regression model? I was initially not going to include it on the grounds that it violates the assumption of linearity, but then I realised that there is some form of linear relationship, albeit a zero/horizontal slope.
This is my first project I have had to use statistics for, and I have no teaching in it at all, so I have had to self-study; please excuse any naivety.
First off, the assumption of linearity applies only to the parameters $\beta$; $x_i$ can be squared, logged or whatever you would like. You assume only that $y$ can be written as a linear combination of the $x_i$. Try using something more flexible, and see where that takes you. | 2020-09-22 18:19:49 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7360270023345947, "perplexity": 396.8075500960039}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-40/segments/1600400206329.28/warc/CC-MAIN-20200922161302-20200922191302-00396.warc.gz"}
https://electronics.stackexchange.com/questions/335133/detecting-a-dc-current | # Detecting a DC current
I want to detect if the electronic locker in my differential is functioning. The differential locker is essentially a solenoid that engages the axle locker; however, since the wire runs under the vehicle it is exposed, and a damaged wire would go undetected. I have no way of mechanically detecting it (limit switch, plunger, etc.), so I need to do it electrically.
The locker's solenoid operates on 12V and draws between 5-8 amps, so assuming the higher value, the locker has an effective resistance of between 1.5 and 2.4 ohms.
I have found some 10A shunt resistors with a voltage drop of 75mV (I'm guessing that means 7.5mohm), which will have minimal impact on the operation of the solenoid. My calculation is that my actual voltage drop will be between 37.5mV and 60mV.
I'm not trying to measure the current, just detect it by lighting an LED (most likely a lighted switch which expects 12 V). Most of the articles or information I've found reference op amps to measure current, but I just want to detect it, so presumably I would use a PNP transistor, but which one?
I have done some searching, but I don't even know if I'm searching for the right thing.
• Are you wanting some kind of light bulb that lights up somewhere if the wire is damaged and the solenoid is no longer able to be activated? What are your goals here? What does "detecting it" mean, exactly? How do you get notified? Also, there are common ways that faults in the headlights are identified and reported in vehicles. Have you read about how these are achieved? – jonk Oct 18 '17 at 18:15
• Sorry, I edited out the part where I wanted to light an LED. The goal is to light an LED when current is flowing (ie: the locker is activated) such that if the light doesn't come on, we know there's a severed connection somewhere. Edited the post. I haven't looked into headlight fault detection. Will research. – Prdufresne Oct 18 '17 at 18:18
• So, is it correct to say that the light should always come on when there is 5-8 amps flowing in the solenoid and that the light should otherwise be off? – jonk Oct 18 '17 at 18:21
• Correct. However, my goal isn't to do over-current detection. – Prdufresne Oct 18 '17 at 18:23
• @Trevor I was trying not to assume anything here. (I've had cars with positive ground and this site covers the world.) But true enough. – jonk Oct 18 '17 at 18:41
Just to toss in another thought:
simulate this circuit – Schematic created using CircuitLab
None of the transistors need to be power BJTs; they can all be small TO-92 types (or smaller.) Optimally, it would be nice if they were matched. But I added $R_4$ as a potentiometer so you can make adjustments, instead. This also covers variations in your $R_1$ value. Nominally, it should be about 20-30% of the indicated value. But your mileage may vary depending on the matching of $Q_1$ and $Q_2$ and your actual $R_1$ value. (I thought about suggesting a BCM62 for matching but then there is $R_1$, so I think the potentiometer is useful until you know what you actually need here.) $Q_3$ is totally non-critical. I'm assuming there is a high-side switch here, so $SW_1$ is there to represent that detail. (Probably fused, too, but I didn't add that.) If you get BJTs that can stand off some decent voltage (like 2N5401), then there's probably no real need for load-dump protection.
I've not gone through this for all the practical details of an automotive situation. It's more a behavioral approach that avoids opamps. It might work okay, as is.
Just for grins, I tossed the above circuit into LTSpice and used the standard BJTs and $R_4=22\:\textrm{k}\Omega$ on a lark. The DC sweep came out this way:
You can play with either $R_2$ or $R_4$ to change the location where the switching takes place.
If $Q_1$ and $Q_2$ are matched, it won't even budge even if you vary $\beta$ and $V_{BE}$ over a reasonable range. Of course.. that's if they are matched. Of course it will work fine then. So if you can, pick $Q_1$ and $Q_2$ as matched pairs such as BCV62 or BCM62. They won't stand off a load dump well. But they might survive it.
Unmatched, and messing with $\beta$ and $V_{BE}$ (as a function of $I_{SAT}$), I get the following spread:
That's with a fair amount of variation on the BJTs. So it's not horrible, so long as you get the same part number. But it may require some tweaking of either $R_2$ and/or $R_4$ to make it work well for you.
I've added $R_5$ to prevent a mistake in setting the $R_4$ potentiometer to an accidentally harmful value. The value of $R_5$ should be the same as the value chosen for $R_2$, roughly speaking. I had imagined using a potentiometer until the right value was measured, then replacing it with an actual resistor. But if the potentiometer is kept, then the value of $R_5$ could be a little less than $R_2$ (as little as half) but probably shouldn't be more than 25% larger. Somewhere in that range should be okay if the same exact value isn't available.
I've added some possible protection schemes to the circuit. The diode-only would need to be something with perhaps $200\:\textrm{V}$ reverse voltage and at least $10\:\textrm{A}$ capability. The downside of it is that the flyback current dies out slowly. And in this case, it might die out way too slowly. The diode+zener combination provides a high voltage so the flyback current pulse can die out more quickly. But both parts must be capable of the same high current potentials and I'd want about $30\:\textrm{V}$ across the pair -- so select a zener in that area, if possible. Finally, there are also automotive MOVs (Littelfuse makes them, for example) which can repeatedly absorb solenoid flyback energy as well. They come in a variety of voltage ratings. Again, I'd be looking for about $30\:\textrm{V}$ here.
• +1 nice.. my turn to be picky though ;).... Winding R4 to the wrong end = smoke... – Trevor_G Oct 18 '17 at 21:13
• @Trevor Well, this whole thing was just tossed out as a random thought and more behavioral than real. Now we've got the suggestions for adding some fixed resistance to the pot and inductor flyback. I guess it's starting to get real. I will plug in a few things into the schematic (though I would not have minded you editing it.) – jonk Oct 19 '17 at 17:11
• @Prdufresne Added some solenoid flyback protection thoughts. Just so you know. The diode by itself is probably my last choice because it might take seconds to do its job. The diode+zener or an automotive MOV of sufficient size might be the best choice here. Something that will yield about 30 V across the coil during flyback would be good. – jonk Oct 19 '17 at 18:34
• :) Yes Jonk.. see what you started. But then again.. this is what makes trolling this forum fun. – Trevor_G Oct 19 '17 at 18:57
• @Trevor I'd wanted to toss out a thought that would be junk box parts and accommodate discrete BJT variations using a pot. Then someone (who shall remain nameless) mentioned matched BJT pairs. Which is of course better, though boutique and less likely as junk box parts, and might mean that a design without a pot could be achieved. So I had to go Spice the darned thing to add more. Then someone (still nameless) mentioned the need for flyback protection -- good point by the way. Still more. So... maybe I toss these out as a minor comment to your answer next time... ;) – jonk Oct 19 '17 at 19:08
Since this is automotive I have to assume the wire is on the +12V side. As such you need to use high side current sensing. There are numerous devices that do this like the LTC6101.
Whichever sense method you chose, you would then need to feed the analog value into an appropriate comparator circuit and have it turn on an LED when current is detected.
Like this perhaps.
simulate this circuit – Schematic created using CircuitLab
Choose values for R2 and R3 to set the threshold voltage to turn on the light when the current is say over 1A.
NOTE: Since this circuit takes its power from the solenoid's power line, it consumes zero power when that is turned off. | 2019-09-18 09:31:18 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.4890974462032318, "perplexity": 1062.643571112711}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-39/segments/1568514573264.27/warc/CC-MAIN-20190918085827-20190918111827-00526.warc.gz"}
https://www.rdocumentation.org/packages/MASS/versions/7.3-51.3/topics/plot.lda | # plot.lda
0th
Percentile
##### Plot Method for Class 'lda'
Plots a set of data on one, two or more linear discriminants.
Keywords
multivariate, hplot
##### Usage
# S3 method for lda
plot(x, panel = panel.lda, …, cex = 0.7, dimen,
abbrev = FALSE, xlab = "LD1", ylab = "LD2")
##### Arguments
x
An object of class "lda".
panel
the panel function used to plot the data.
…
additional arguments to pairs, ldahist or eqscplot.
cex
graphics parameter cex for labels on plots.
dimen
The number of linear discriminants to be used for the plot; if this exceeds the number determined by x the smaller value is used.
abbrev
whether the group labels are abbreviated on the plots. If abbrev > 0 this gives minlength in the call to abbreviate.
xlab
label for the x axis
ylab
label for the y axis
##### Details
This function is a method for the generic function plot() for class "lda". It can be invoked by calling plot(x) for an object x of the appropriate class, or directly by calling plot.lda(x) regardless of the class of the object.
The behaviour is determined by the value of dimen. For dimen > 2, a pairs plot is used. For dimen = 2, an equiscaled scatter plot is drawn. For dimen = 1, a set of histograms or density plots are drawn. Use argument type to match "histogram" or "density" or "both".
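A minimal usage sketch (the iris example is illustrative, not part of this help page):
library(MASS)
fit <- lda(Species ~ ., data = iris)   # two discriminants for three classes
plot(fit)                              # dimen = 2 here: equiscaled scatter plot
plot(fit, dimen = 1, type = "both")    # histograms plus density plots per group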
##### References
Venables, W. N. and Ripley, B. D. (2002) Modern Applied Statistics with S. Fourth edition. Springer.
pairs.lda, ldahist, lda, predict.lda | 2020-10-20 11:51:47 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.49100518226623535, "perplexity": 5690.381640481866}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-45/segments/1603107872686.18/warc/CC-MAIN-20201020105000-20201020135000-00651.warc.gz"} |
https://dispersivewiki.org/DispersiveWiki/index.php?title=Zakharov-Schulman_system | # Zakharov-Schulman system
The Zakharov-Schulman system is described by the equations
${\displaystyle iu_{t}+L_{1}u=\phi u}$
${\displaystyle L_{2}\phi =L_{3}(|u|^{2})}$
where L_1, L_2, L_3 are various constant coefficient differential operators; these describe the interactions of small amplitude, high frequency waves with acoustic waves ZkShl1980. Using energy methods and gauge transformations, local existence for smooth data was established in KnPoVe1995b; see also GhSau1992.
The Davey-Stewartson system can be viewed as a special case of this system. | 2018-12-18 11:59:56 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 2, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8160285949707031, "perplexity": 1091.0622942804557}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-51/segments/1544376829140.81/warc/CC-MAIN-20181218102019-20181218124019-00288.warc.gz"} |
https://repository.uantwerpen.be/link/irua/76432 | Publication
Title
A new mixed-valence ferrite with a cubic structure, $YBaFe_{4}O_{7}$: spin-glass-like behavior
Author
Abstract
A new mixed-valence ferrite, YBaFe4O7, has been synthesized. Its unique cubic structure, with a = 8.9595(2) Å, is closely related to that of the hexagonal 114 oxides YBaCo4O7 and CaBaFe4O7. It consists of corner-sharing FeO4 tetrahedra, forming triangular and kagome layers parallel to (111)C. In fact, the YBaFe4O7 and CaBaFe4O7 structures can be described as two different ccc and chch close packings of [BaO3]∞ and [O4]∞ layers, respectively, whose tetrahedral cavities are occupied by Fe2+/Fe3+ cations. The local structure of YBaFe4O7 is characterized by a large amount of stacking faults originating from the presence of hexagonal layers in the ccc cubic close-packed YBaFe4O7 structure. In this way, they belong to the large family of spinels and hexagonal ferrites studied for their magnetic properties. Differently from all the ferrites and especially from CaBaFe4O7, which are ferrimagnetic, YBaFe4O7 is an insulating spin glass with Tg = 50 K.
Language
English
Source (journal)
Chemistry of materials / American Chemical Society. - Washington, D.C.
Publication
Washington, D.C. : 2009
ISSN
0897-4756
Volume/pages
21:6(2009), p. 1116-1122
ISI
000264310900019 | 2017-08-23 17:37:26 | {"extraction_info": {"found_math": true, "script_math_tex": 1, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5047407746315002, "perplexity": 9826.93157788372}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-34/segments/1502886123312.44/warc/CC-MAIN-20170823171414-20170823191414-00646.warc.gz"}
https://hub.taotesting.com/articles/administrator-guide/server-migration | This guide is intended to help you move TAO 3.1 from one server to another without data loss.
For older installations see:
• Server Migration 2.5.
• Server Migration 3.0.
## Requirements
• This guide assumes you have not heavily modified your configuration. If you are using alternative storage implementations (such as NoSql), the migration might be more complex.
• This guide assumes that the technology stack did not change. There might be additional issues if you would like to migrate between different Operating Systems (Windows->Linux) or Databases (MySql => Postgres)
## Old Server
• Copy the entire TAO data and config directory from the old server (by default found in config and data).
• Create a dump of the database making sure to include the routines (if you are using mysql, use mysqldump -R).
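For example, on the old server (the user, database name, and paths below are placeholders; adjust them to your installation):
mysqldump -R -u tao_user -p tao_db > tao_dump.sql
tar -czf tao_backup.tar.gz /var/www/tao/config /var/www/tao/data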
## New Server
### File migration
• Replace the config and data directory and set the correct file owner/rights.
• Modify config/generis.conf.php to reflect your new domain and directories. (Do NOT change GENERIS_INSTANCE_NAME or LOCAL_NAMESPACE.)
• Watch out for trailing slashes when writing paths!
define('ROOT_PATH','/var/www/newTaoDirectory/');
define('ROOT_URL','http://www.newTaoDomain.com/');
.
.
define('FILES_PATH','/opt/newTaoDataDirectory/');
• Modify config/generis/filesystem.conf.php to reflect your new file paths. For this change the filesPath and the root of each adapter:
return new oat\oatbox\filesystem\FileSystemService(array(
    'filesPath' => 'NEW_FILE_DIRECTORY_HERE',
    // adapters sit under an 'adapters' key; only the entries relevant to the
    // migration are shown here
    'adapters' => array(
        'default' => array(
            'class' => 'Local',
            'options' => array(
                'root' => 'NEW_FILE_DIRECTORY_HERE'
            )
        ),
        'public' => array(
            'class' => 'Local',
            'options' => array(
                'root' => 'NEW_FILE_DIRECTORY_HERE/tao/public'
            )
        ),
        .
        .
• Modify config/generis/persistences.conf.php to reflect your new database configuration.
'default' => array(
'driver' => 'pdo_mysql',
'host' => 'localhost',
'dbname' => 'DbName',
),
• Update the directory path value in config/tao/websource_[CODE].conf.php to point to your new TAO data directory. (Do NOT modify fsUri.)
return array(
    'className' => 'oat\\tao\\model\\websource\\TokenWebSource',
    'options' => array(
        'secret' => 'a0c2ef52398c24d5347109f930d907d3',
        // the directory path option in this array is the value to update;
        // leave 'fsUri' untouched
    )
); | 2019-07-21 11:11:19 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.29867231845855713, "perplexity": 13891.34476598568}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-30/segments/1563195526948.55/warc/CC-MAIN-20190721102738-20190721124738-00132.warc.gz"} |
https://answers.opencv.org/answers/86583/revisions/ | # Revision history [back]
the Y stands for the grayscale pixel value, not for alpha (which is irrelevant for computer vision, and thus just gets discarded)
the formula is actually a weighted sum,
gray = (0.299 * red) + (0.587 * green) + (0.114 * blue).
maybe it gets more obvious, if you just add up the factors:
0.299 + 0.587 + 0.114 = 1 | 2019-10-18 01:51:13 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8348777890205383, "perplexity": 1243.9207908427923}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-43/segments/1570986677412.35/warc/CC-MAIN-20191018005539-20191018033039-00417.warc.gz"} |
https://www.physicsforums.com/threads/center-of-mass.791542/ | # Center of mass
1. Jan 10, 2015
### oreo
Is center of mass a vector quantity? If so, then how? Is it directed towards Earth's center?
2. Jan 10, 2015
### Andrew Mason
The centre of mass is a point. As such, it is expressed as a displacement vector from the origin of the reference frame that is being used. If it coincides with the origin, it is the vector (0, 0, 0).
AM
3. Jan 10, 2015
### oreo
Thanks a lot.
4. Jan 10, 2015
### Staff: Mentor
The center of mass is a position. Technically position is an affine space, not a vector space. At least in non relativistic physics.
5. Jan 10, 2015
### Andrew Mason
Shayan,
Just to follow up on this, the centre of mass of a mass distribution is conveniently expressed as the sum of each of the point masses in the system multiplied by their displacement vector from the origin divided by the total mass:
$$\vec{R} =\frac{1}{\sum_{i}m_i} \sum_{i} m_i\vec{r}_i$$
See, for example, Barger & Olson, Classical Mechanics, A Modern Perspective, first ed., ch. 5-1, p. 156-160
AM
Last edited: Jan 10, 2015 | 2017-11-25 05:07:26 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5935731530189514, "perplexity": 975.7603173915413}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-47/segments/1510934809392.94/warc/CC-MAIN-20171125032456-20171125052456-00159.warc.gz"} |
https://cpt.hitbullseye.com/Impetus-Reasoning-Test.php | # Impetus Reasoning Question
Views:2122
NOTE: This section is not present in this company's placement process. If you still want to practise, some questions are provided below.
DIRECTIONS for the question 1-4: Read the information given below and answer the questions given below that accordingly.
During a certain week a sales representative must call on five customers - J, K, L, M and N.
She will call on each customer only once, and Monday through Friday, according to the following restrictions:
She cannot call on L on Monday.
She must call on J before she calls on M.
She must call on K before she calls on N.
1. Which of the following is an acceptable schedule for the sales representative's calls?
1. J, M, L, K, N
2. K, M, J, N, L
3. L, K, N, J, M
4. L, M, J, K, N
5. N, L, K, J, M
As per conditions : (1) L - Mon X (2) J-M (3) K-N
Only choice A satisfies all conditions.
2. Which of the following two calls CANNOT be scheduled after L?
1. J and K
2. J and N
3. K and M
4. K and N
5. M and N
As per conditions : (1) L - Mon X (2) J-M (3) K-N
If both J and K were after L, then M and N would also be after L (conditions 2 and 3). Since L cannot be on Monday, L is on Tuesday at the earliest, leaving at most three days for those four calls. Hence J and K cannot both be scheduled after L.
3. If the sales representative calls on L earlier than J, which of the following must be true of her schedule?
1. K is first
2. L is second
3. J is third
4. N is second
5. M is fifth
As per conditions : (1) L - Mon X (2) J-M (3) K-N
If L is before J, then L is on Tuesday at the earliest (condition 1), and J and M come even later (condition 2). N cannot be first either, since K precedes it (condition 3). The only customer left for Monday is K, so K is first.
4. If L, K and N are scheduled on consecutive days, in that order, then M could be scheduled for which day?
1. Either Monday or Tuesday
2. Either Monday or Wednesday
3. Either Tuesday or Friday
4. Either Wednesday or Thursday
5. Either Thursday or Friday
As per conditions : (1) L - Mon X (2) J-M (3) K-N. Since L cannot be on Monday, the block L, K, N occupies either Tue-Wed-Thu or Wed-Thu-Fri. In the first case J takes Monday and M takes Friday; in the second case J takes Monday and M takes Tuesday (J before M, condition 2). So M is scheduled on either Tuesday or Friday.
DIRECTIONS for the question 5-7: Study the following letter-symbol-number sequence carefully and answer the questions given below.
H U 8 * C P 1 T L Q @ M B 2 • £ X K 6 Δ G 3 $ V F A 7 Z N D 1 β R
5. Four of the following five are alike in respect to their position in the above sequence and hence form a group. Which one does not belong to the group?
1. M•6G
2. PLB•
3. XΔ$A
4. Δ$7Z
5. H*TQ
Answer: Option C. The pattern is +2, +3, +1, which is not followed in C. It should have been V instead of $.
6. If every alternate element starting from your left (H is dropped first) is dropped from the above sequence, which of the following will indicate numbers of symbols, letters and numbers, respectively which will remain in the sequence?
1. 4, 10, 3
2. 3, 10, 4
3. 5, 8, 4
4. 4, 7, 6
5. None of these
Just drop every alternate element starting from H; only (U*PTQM2£K∆3VAN1R) are left.
7. What is the total number of the "symbol immediately preceded by letter" and the "letter immediately preceded by symbol" together in the above sequence?
1. Four
2. Five
3. Six
4. Seven
5. None of these
H U 8 * C P 1 T L Q @ M B 2 • £ X K 6 ∆ G 3 $ V F A 7 Z N D 1 β R
Check out all letter-symbol and symbol-letter combinations. They are 8.
8. REPORT is related to UGSQUV in the same way as LONGER is related to
1. OQPIHT
2. OQQFGS
3. OQQIHT
4. QPQHIT
5. None of these
The pattern is +3 +2 +3 +2 and so on .
9. Four of the following five are similar in relation to their position in the English alphabet and hence form a group. Which one does not belong to the group?
1. BDHN
2. FHLR
3. QTYC
4. JLPV
5. NPTZ
The pattern is +2 +4 +6 for all options except C.
DIRECTIONS for the question 10: Study the following information carefully and answer the questions given below:
P + Q means P is the daughter of Q
P - Q means P is the husband of Q
P x Q means P is the brother of Q
P ÷ Q means P is the sister of Q
10. If A + B - C ÷ D, which of the following is true?
1. C is the mother of A
2. D is the father of A
3. A is the brother of C
4. A is the uncle of C
5. None of these
Answer: Option A. A + B means A is the daughter of B; B - C means B is the husband of C, so C is A's mother (C ÷ D only adds that C is the sister of D). Hence C is the mother of A. | 2019-06-19 21:36:42 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7077474594116211, "perplexity": 3150.150178423718}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-26/segments/1560627999041.59/warc/CC-MAIN-20190619204313-20190619230313-00126.warc.gz"}
https://faculty.math.illinois.edu/Macaulay2/doc/Macaulay2-1.18/share/doc/Macaulay2/DiffAlg/html/_homogenize_lp__Diff__Alg__Element_rp.html | # homogenize(DiffAlgElement) -- homogenize a differential form or vector field
## Synopsis
• Function: homogenize
• Usage:
homogenize e
• Inputs:
• Outputs:
• an instance of the type DiffAlgElement, the homogenization of e with respect to a new variable. The resulting form or vector field is homogeneous.
## Description
i1 : w = newForm("2*x_0*dx_0+x_1^2*dx_1")
o1 = 2x_0 dx_0 + x_1^2 dx_1
o1 : DiffAlgForm
i2 : homogenize w
o2 = 2x_0 x_2 dx_0 + x_1^2 dx_1
o2 : DiffAlgForm
i3 : homogenize newField ("ax_0+x_1*ax_2+a*ax_1")
o3 = x_3 ax_0 + a*x_3 ax_1 + x_1 ax_2
o3 : DiffAlgField
## Caveat
The homogenization process of a form adds one variable to the given element. | 2021-09-21 22:36:57 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6617178320884705, "perplexity": 2360.3860669937103}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-39/segments/1631780057274.97/warc/CC-MAIN-20210921221605-20210922011605-00181.warc.gz"} |
http://crypto.stackexchange.com/questions/14855/64-bit-elliptic-curve-key | # 64 bit Elliptic Curve key?
For a simple proof of concept project i'm (attempting!) to do, i've started looking into openSSL elliptic curve cryptography.
However instead of the standard key lengths, 160-512. I'm interested in a resultant key of just 64 bits (less secure I know but that's my projects purpose). Is it possible using elliptic curve cryptography?
I'm quite a novice, is there anything fundamentaly wrong with instead generating say a 256 key and cutting it size to 64bit?
Any advice would be greatly appreicated thanks.
Apart from being broken in a few minutes on a standard computer, there is nothing wrong with using a 64 bit field. The bigger problem is that you need to find curve parameters which requires understanding the mathematics behind ECC (Unless there are some standard SAGE scripts which do it for you). – CodesInChaos Mar 7 at 11:43
Point counting on such a small curve might not be that difficult - you could use a pretty naive/simple algorithm and not worry about it taking years to complete. – pg1989 Mar 8 at 2:14
I'm not sure if I get your question correctly. Would you like to use a 256-bit curve to generate a (secret) key of this size and later drop 192 bits? If this is the case, your 64 bits will be useless because your secret key will no longer have any relation with the public one. Furthermore, even in the magical case that this works, your field operations will continue living in a 256-bit finite field.
If you would like to use a 64-bit EC cryptosystem, the first thing you need is to establish the "setup": a tuple that 'configures' the system. You may like to check IEEE P1363 for the details of obtaining a cryptographically good curve (ignoring the fact that 64 bits is far too few). As an example, for a finite field:
1. Generate $p$: a random big prime of 64 bits.
2. Generate $a$ and $b$: in $\mathbb{F}_{p}$
3. Check non zero discriminant: $\Delta\neq 0$
4. Calculate the cardinal $|E(\mathbb{F}_p)|$
5. Check this cardinal is in the hasse interval
6. Check this cardinal factorises as $n \cdot h$ where $h \lll n$ and $h\in\{1,2,4\}$
If steps 3, 5 or 6 fail, go back to step 2 or even step 1.
At this point you have a cryptographically good curve, then you need the cyclic subgroup where the ECDLP will protect your secrets.
1. Generate a random $x\in_R \mathbb{F}_p$ until you find a valid point on your curve (for a valid $x$ there will be two $y$; select one at random).
2. Check $[n]G=\mathcal{O}_E$
Same as before: if your random point doesn't pass check 2, repeat. The probability of finding a good one is higher when your cofactor $h$ is smaller.
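To make the recipe concrete, here is a rough Sage sketch of the whole setup (my own code, untested against the P1363 corner cases; a toy 64-bit curve like this is NOT secure):
# Sage, not plain Python
p = random_prime(2^64, lbound=2^63)          # step 1: a 64-bit prime
while True:
    a = GF(p).random_element()               # step 2
    b = GF(p).random_element()
    if 4*a^3 + 27*b^2 == 0:                  # step 3: zero discriminant, retry
        continue
    E = EllipticCurve(GF(p), [a, b])
    N = E.order()                            # step 4; Hasse (step 5) then holds automatically
    n = N.factor()[-1][0]                    # largest prime factor of the cardinal
    h = N // n
    if h in (1, 2, 4):                       # step 6: tiny cofactor
        break
G = h * E.random_point()                     # clear the cofactor
while G == E(0):
    G = h * E.random_point()
assert n * G == E(0)                         # check 2 on the generator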
As you can see, this process is not light and it becomes heavier and heavier (exponentially) when you increase your 64 bits.
Thanks yeah I quickly realized trimming the private key would mess things up! Thanks for your help, i'll look into this. – Steven Tilling Mar 7 at 18:23
No, nothing. Elliptic curve cryptography can be done over any finite field. Hence, if you choose a 64-bit key, the DL problem in the group induced by the elliptic curve might not be that hard, but it is still valid to do. | 2014-08-21 12:24:33 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.44865643978118896, "perplexity": 899.3525419969766}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-35/segments/1408500815991.16/warc/CC-MAIN-20140820021335-00016-ip-10-180-136-8.ec2.internal.warc.gz"}
https://www.azdictionary.com/definition/monochromator | # monochromator definition
• noun:
• An optical product, consisting of a number of slits, that chooses a narrow band of wavelengths from a broader spectrum. | 2017-05-29 19:19:58 | {"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8938252329826355, "perplexity": 9313.95237354452}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-22/segments/1495463612537.91/warc/CC-MAIN-20170529184559-20170529204559-00328.warc.gz"} |
http://biostats.bepress.com/jhubiostat/paper249/ | Johns Hopkins University, Dept. of Biostatistics Working Papers
Abstract
For smoothing covariance functions, we propose two fast algorithms that scale linearly with the number of observations per function. Most available methods and software cannot smooth covariance matrices of dimension J x J with J>500; the recently introduced sandwich smoother is an exception, but it is not adapted to smooth covariance matrices of large dimensions such as J ≥ 10,000. Covariance matrices of order J=10,000, and even J=100,000, are becoming increasingly common, e.g., in 2- and 3-dimensional medical imaging and high-density wearable sensor data. We introduce two new algorithms that can handle very large covariance matrices: 1) FACE: a fast implementation of the sandwich smoother and 2) SVDS: a two-step procedure that first applies singular value decomposition to the data matrix and then smoothes the eigenvectors. Compared to existing techniques, these new algorithms are at least an order of magnitude faster in high dimensions and drastically reduce memory requirements. The new algorithms provide instantaneous (few seconds) smoothing for matrices of dimension J=10,000 and very fast (< 10 minutes) smoothing for J=100,000. Although SVDS is simpler than FACE, we provide ready to use, scalable R software for FACE. When incorporated into R package refund, FACE improves the speed of penalized functional regression by an order of magnitude, even for data of normal size (J < 500). We recommend that FACE be used in practice for the analysis of noisy and high-dimensional functional data.
Disciplines
Biostatistics | Statistical Methodology | 2017-11-23 16:52:08 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6686912775039673, "perplexity": 1713.2452550886426}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-47/segments/1510934806844.51/warc/CC-MAIN-20171123161612-20171123181612-00316.warc.gz"}
https://md-datasets-public-files-prod.s3.eu-west-1.amazonaws.com/ec2e4976-7385-4783-85c5-58aa9af51bee | This is a vignette for the R package BFDCA. BFDCA is a comprehensive tool that uses Bayes factors for Differential Co-expression (DC) analysis. BFDCA contains three main functions: (1) clustering condition-specific genes into functional DC subunits; (2) quantitatively characterizing the regulatory impact of genes based on their differential connectivity within DC structures; and (3) providing a DC-based prediction model to predict case/control phenotypes by taking DC significant gene pairs as markers.
### System prerequisite:
• R version 3.0.0 or higher.
• Linux systems. It has been tested on several linux distributions, including CentOS release 6.7, Red Hat Enterprise Linux Server release 5.11, Fedora21.
• The GNU Scientific Library (GSL). It’s included in most GNU/Linux distributions. If your system doesn’t contain it, please refer to “https://www.gnu.org/software/gsl/” for installation. As an example, on a Fedora system, you can easily install GSL by:
yum install gsl-devel
### Installation of BFDCA:
Before installing the BFDCA package, the following packages are required: fastcluster, WGCNA and its dependencies, igraph, and dynamicTreeCut (>=1.62). On some Linux systems you may also need to install Rcpp. To install the required packages, in the R environment, simply type:
source("http://bioconductor.org/biocLite.R")
biocLite(c("AnnotationDbi", "impute", "GO.db", "preprocessCore"))
install.packages("flashClust")
install.packages("WGCNA")
install.packages("igraph")
install.packages("dynamicTreeCut")
install.packages("Rcpp")
Then install package BFDCA by typing the followings in R environment:
install.packages("<path/to/BFDCA>/BFDCA_1.0.tar.gz") # <path/to/BFDCA> is the path to the directory where you save BFDCA_1.0.tar.gz.
### Analyses of simulation data (step-by-step):
library(BFDCA) # load the BFDCA package.
data(SimulationSmall)
#load the simulation data "SimulationSmall" which is a matrix containing class information and expression data. The first column corresponds to class labels, other columns correspond to gene expressions, and the rows correspond to samples. Details of the data are in the manual of BFDCA.
class<-SimulationSmall[,1];
#extract class from SimulationSmall.
gene<-SimulationSmall[,2:dim(SimulationSmall)[2]];
#extract expression data from SimulationSmall.
• Step2, estimate the strength of pair-wise differential co-expression by Bayes factor from an expression matrix with labels of classes.
bfmatrix<-Compute_bf(gene,class,classlabel=c("1","2"),bfthr=NULL,echo=TRUE);
#bfmatrix is a data frame containing the information of pair-wise Bayes factors. It can be used as an input for other functions, like BF_WGCNA.
#argument bfthr is set as NULL, so all the gene pairs will be kept for the following steps.
#details are in the manual of BFDCA.
save(bfmatrix,file="bfmatrix.Rdata");
#save pair-wise Bayes factors into "bfmatrix.Rdata".
• Step3, identify differential co-expression modules through pair-wise Bayes factors.
in_groupgenes=20;
truemodule<-c(rep("grey",2*in_groupgenes),rep("yellow",in_groupgenes),rep("blue",in_groupgenes),rep("red",in_groupgenes),rep("pink",in_groupgenes),rep("brown",in_groupgenes),rep("grey",100));
#truemodule is a vector represents the colors assigned to the real DC modules in the simulation data.
obt<-BF_WGCNA(gene,bfmatrix,bfthr=0,keepedges=0,plotTree=TRUE,plotfile="Gene_dendrogram_and_module_colors.pdf",trueModule=truemodule,minClusterSize=5,softPower=4,deepSplit=2);
#in this step, the WGCNA package is applied on the Bayes factor matrix (bfmatrix) to infer DC modules. After running this function, a BFobt object will be generated and stored in obt. A plot (Figure 1) of a hierarchical clustering dendrogram and color annotations of modules will be in file "Gene_dendrogram_and_module_colors.pdf".
Figure 1: Gene dendrogram with color annotations of real DC modules from file "Gene_dendrogram_and_module_colors.pdf". Note that the colors assigned by BFDCA are not necessarily the same as the colors assigned to the real DC modules; color is only used to distinguish different module assignments.
• Step4, output the differential co-expression modules and some information of gene-gene interactions.
bfoutput<-BF_output_networks(gene,class,classlabel=c("1","2"),obt,mst2file="MST2.txt",bfthr=6,corthres=0.3);
#the function returns a list containing:
#(1) information for DC gene nodes, including module assignment and weight assignment.
#(2) information for DC gene pairs, including a full network built based on DC modules.
#the essential links among genes represented by a union of First and Second Minimal Spanning Tree (MST2) will be outputed into file "MST2.txt" specified by argument mst2file.
#Other arguments and details can be found in manual of BFDCA.
write.table(bfoutput$genegroups,file="gene_groups.txt",append=FALSE,row.names=FALSE,col.names=TRUE,quote=FALSE,sep="\t"); #output the DC gene nodes with information of DC modules into "gene_groups.txt". write.table(bfoutput$network,file="gene_network.txt",append=FALSE,row.names=FALSE,col.names=TRUE,quote=FALSE,sep="\t");
#output the DC gene pairs into "gene_network.txt".
• Step5, select significant DC gene pairs.
sigdc<-sigDCpair_st1(obt,mst2="MST2.txt",bfthr=6,weight_cutoff=0.8);
#in this function, we set the Bayes factor threshold bfthr as 6 and the threshold of end-node weights as 0.8; each edge in "MST2.txt" will be examined, and those that meet both criteria will be retained. The result is a data frame containing three columns: the first holds the ids for gene1, the second the ids for gene2, and the last the scores for each gene pair in descending order. Edges in sigdc serve as the candidate significant DC gene pairs.
sigDCpair_SFS(gene,class,c("1","2"),sigDC=sigdc,DC_acc="DC_pair_acc.txt",by=20,LOOCV=TRUE);
# in this function, a generalized sequential forward selection (SFS) algorithm is used on the candidate significant DC gene pairs "sigdc". Starting with an initial subset of gene pairs of size N=1, the next by=20 gene pairs are repeatedly added to the subset (N=N+by). The goodness of the top N gene pairs is characterized by leave-one-out cross-validation (LOOCV), with the accuracy calculated by the DC-based prediction model. The results are stored in file "DC_pair_acc.txt". It consists of two columns, the first indicating the number of top gene pairs and the second the LOOCV accuracy.
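To choose the cutoff from "DC_pair_acc.txt", it can help to plot the accuracy curve. A minimal sketch (the column names below are our own labels, since the file is written without a header):

acc<-read.table("DC_pair_acc.txt",header=FALSE,col.names=c("n_pairs","accuracy"));
# read the SFS output: column 1 is the number of top gene pairs, column 2 the LOOCV accuracy.
plot(acc$n_pairs,acc$accuracy,type="b",xlab="number of top gene pairs",ylab="LOOCV accuracy");
# look for the smallest N at which the accuracy plateaus.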
top_edges<-sigdc[1:21,];
#according to "DC_pair_acc.txt", the top 21 gene pairs is a tradeoff between maximizing the accuracy and minimizing the edge number.
write.table(top_edges,file="selected_DC_pair.txt",append=FALSE,row.names=FALSE,col.names=FALSE,quote=FALSE,sep="\t");
# output the results into file "selected_DC_pair.txt".
An example of the functions BFtrain and BFtest
Function BFtrain trains a DC-based model on training data by using the pre-selected DC gene pairs as features. Function BFtest predicts the class labels for testing data. Here, we randomly sample 100 samples from the SimulationSmall data as the training data, and the remaining samples are used as the testing data. According to step5, the 21 gene pairs in "selected_DC_pair.txt" are used as features.
library(BFDCA)
data(SimulationSmall)
# load the simulation data SimulationSmall.
class<-SimulationSmall[,1];
# extract class labels from SimulationSmall.
gene<-SimulationSmall[,2:dim(SimulationSmall)[2]];
# extract expression data from SimulationSmall.
train_index<-sample(dim(gene)[1],100,replace=FALSE);
# randomly sample 100 samples from the simulation data.
train<-gene[train_index,];
train_class<-class[train_index];
# generate training data.
test<-gene[-train_index,];
test_class<-class[-train_index];
# generate testing data.
# use the selected 21 gene pairs as features (read them back from "selected_DC_pair.txt").
edges<-read.table("selected_DC_pair.txt",header=FALSE,sep="\t");
model<-BFtrain(train,train_class,c("1","2"),edges,bfthr=6);
# train a DC-based prediction model.
tclass<-BFtest(test,model);
# predict the class labels for testing data.
accuracy<-sum(tclass==test_class)/length(test_class);
print(accuracy);
# calculate the accuracy.
Examples of plotting DC gene pairs.
Function BFplot is used to plot the gene expression patterns of DC gene pairs. It generates a dot plot for a DC gene pair, with the x-axis and y-axis representing the expression levels of the two genes in the pair. Dots in different colors and shapes represent samples from different classes. Circles in different colors represent the 95% contours of the estimated bivariate normal density for each class.
library(BFDCA)
data(SimulationSmall)
# load the simulation data SimulationSmall.
class<-SimulationSmall[,1];
# extract class labels from SimulationSmall.
gene<-SimulationSmall[,2:dim(SimulationSmall)[2]];
# extract expression data from SimulationSmall.
BFplot(gene,class,c("1","2"),63,65);
BFplot(gene,class,c("1","2"),84,91);
BFplot(gene,class,c("1","2"),101,121);
# the user can also plot multiple DC gene pairs into a file in pdf format by function BFplot2files.
BFplot2files(gene,class,c("1","2"),"selected_DC_pair.txt",plotfilename="Plotedges.pdf")
# it will plot a figure for all the gene pairs in file "selected_DC_pair.txt" and merge these figures into file "Plotedges.pdf".
### Analyses on Acute Lymphoblastic Leukemia (ALL) dataset
The following steps show how to apply BFDCA to experimental expression data. Here, we take the ALL dataset as an example. For this dataset with 8638 genes, at least 3 GB of memory is required. Some of the following steps may take one to several hours, varying with computational resources.
library(BFDCA);# load the BFDCA package.
data(ALL);# load the ALL data, which is a list containing class information and expression data after preprocessing. ALL$class is a vector containing class information, with "1" indicating BCR/ABL mutation and "2" indicating no cytogenetic abnormalities. ALL$data is a data frame containing the expression data. Details are in the manual of BFDCA.
bfmatrix<-Compute_bf(ALL$data,ALL$class,classlabel=c("1","2"),bfthr=6,echo=FALSE); # Compute_bf is the most time-consuming of all the steps; it may take several hours to complete.
save(bfmatrix,file="bfmatrix.Rdata");
obt<-BF_WGCNA(ALL$data,bfmatrix,plotfile="Gene_dendrogram_and_module_colors_ALL.pdf",minClusterSize=20,deepSplit=2); # in this application, we used hard thresholding. The argument softPower is set as 1, and argument keepedges is used as the default value which is equal to the number of genes involved in ALL$data.
bfoutput<-BF_output_networks(ALL$data,ALL$class,classlabel=c("1","2"),obt,mst2file="MST2_ALL.txt",bfthr=6,corthres=0.2);
write.table(bfoutput$genegroups,file="gene_groups_ALL.txt",append=FALSE,row.names=FALSE,col.names=TRUE,quote=FALSE,sep="\t"); # output the DC gene nodes with information of DC modules into "gene_groups_ALL.txt". write.table(bfoutput$network,file="gene_network_ALL.txt",append=FALSE,row.names=FALSE,col.names=TRUE,quote=FALSE,sep="\t");
# output the DC gene pairs into "gene_network_ALL.txt".
sigdc<-sigDCpair_st1(obt,mst2="MST2_ALL.txt",bfthr=6,weight_cutoff=0.8); # first step of selecting significant DC gene pairs.
save(sigdc,file="sigdc_ALL.Rdata");# save results to "sigdc_ALL.Rdata".
sigDCpair_SFS(ALL$data,ALL$class,c("1","2"),sigDC=sigdc,DC_acc="DC_pair_acc_ALL.txt",by=10,LOOCV=TRUE);# second step of selecting significant DC gene pairs.
top_edges<-sigdc[1:211,];
# according to "DC_pair_acc_ALL.txt", the top 211 gene pairs is a tradeoff between maximizing the accuracy and minimizing the edge number.
write.table(top_edges,file="selected_DC_pair_ALL.txt",append=FALSE,row.names=FALSE,col.names=FALSE,quote=FALSE,sep="\t");# output the results into file "selected_DC_pair_ALL.txt".
Test the prediction power of selected gene pairs for ALL dataset
The 211 gene pairs in "selected_DC_pair_ALL.txt" are used as features. 60 samples from the ALL dataset are randomly sampled as the training data, and the remaining samples are used as the testing data.
library(BFDCA);# load the BFDCA package.
train_index<-sample(dim(ALL$data)[1],60,replace=FALSE); # randomly sample 60 samples from ALL$data.
train=ALL$data[train_index,]; train_class=ALL$class[train_index];
# generate training data.
test=ALL$data[-train_index,]; test_class=ALL$class[-train_index];
# generate testing data.
# use the selected 211 gene pairs as features (read them back from "selected_DC_pair_ALL.txt").
edges<-read.table("selected_DC_pair_ALL.txt",header=FALSE,sep="\t");
model<-BFtrain(train,train_class,c("1","2"),edges,bfthr=6);
# train a DC-based prediction model
tclass<-BFtest(test,model);
# predict the class labels for testing data
accuracy<-sum(tclass==test_class)/length(test_class);
# calculate the accuracy
print(accuracy);
Plotting examples of DC gene pairs for the ALL dataset
We show how to use the function BFplot to plot the gene expression patterns of DC gene pairs. It generates a dot plot for a DC gene pair, with the x-axis and y-axis representing the expression levels of the two genes in the pair. Dots in different colors and shapes represent samples from different classes. Circles in different colors represent the 95% contours of the estimated bivariate normal density for each class.
library(BFDCA);# load the BFDCA package.
BFplot(ALL$data,ALL$class,c("1","2"),"EMR1","SPINK2");
BFplot(ALL$data,ALL$class,c("1","2"),"AZU1","KLK2");
BFplot2files(ALL$data,ALL$class,c("1","2"),"selected_DC_pair_ALL.txt",plotfilename="Plotedges_ALL.pdf")
# for all the gene pairs in "selected_DC_pair_ALL.txt", it will plot a figure to show the gene expression patterns of the gene pair, and all these figures will be merged into file "Plotedges_ALL.pdf". | 2020-04-05 14:06:15 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.2520916163921356, "perplexity": 3597.4239431230126}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-16/segments/1585371604800.52/warc/CC-MAIN-20200405115129-20200405145629-00252.warc.gz"} |
https://www.homebuiltairplanes.com/forums/threads/650-hp-rotary-time-to-climb-record-attempt.25242/page-4 | # 650 HP Rotary Time To Climb record attempt
#### wsimpso1
##### Super Moderator
Staff member
Log Member
Jarno was likely asking about the loose fuel plumbing and undercowl fire potential, but just in case he was talking about WM's comment...
As to gunpowder in outhouse, yes if it is black powder, no if it is smokeless.
Black powder will burn at high order, so if you have a keg or more, it should be in an outbuilding with a big "4" painted on each side. If somehow the outhouse burns, get everyone way back, like over 200 m, don't face it, and cover your ears. It will go BANG!
8-pound kegs of smokeless powder will not burn at high order, but smokeless powder degrades more rapidly at elevated temperature. Storage in the original containers in a metal cabinet in air conditioning is best for smokeless powder. By the time a house fire ignites that, the added fuel will be of little consequence.
This thread drift brought to you by WM and engineering experience in the gun/ammunition business...
Billski
#### Billrsv4
##### Well-Known Member
Thanks for posting this, Will. I tend to ignore Paul's site because of the dreaming-versus-doing aspect, plus a lot of dubious info there, but the fact that this is actually built and running moves it up a notch in credibility. It's a cool project whether it takes the records or not. We learn as much or more from failing with an experiment as succeeding. I do hope they succeed here and do well.
Ross,
You may have found out about this by now but I will pass it along. Paul didn't build the plane he is advising on the engine. Second: they are running alcohol, (methanol) fuel. Really helps bring some internal engine cooling to the table. Paul claims they will have a higher power to weight ratio than the Bearcat. I have no way of verifying that claim, but the rotary will be a potent combination. As to the durability of the Bell planetary, I believe they are working out the HP at a much higher shaft RPM than original input. I recall that Paul had Dave Graber's push pull Wankel powered Reno unlimited racer in his hands. Don't know if he bought it or it was given to him, but it was running 2 of the Bell gearboxes. I think they are 3:1 or a bit more so a p-port rotary could be turning nearly 9000 RPM and bring the prop in under 3k. 1000 HP x 5250/9000 = torque of 583 Ft/Lbs. Sounds doable for the Bell. Just my 2 cents. If that huge prop they got for it will leave that engine mounted to the firewall during full throttle climb, that I don't know. I wish them a SAFE flight regardless of outcome.
Bill
#### rv6ejguy
##### Well-Known Member
I knew Paul didn't build the plane as he hasn't really built much of anything and certainly not much actual experience building and running turbocharged rotary engines either.
Rare Bear was making at least 4000hp and perhaps 4500 hp for its climb record (91.9 seconds to 3000M). Empty weight is under 6600 lbs. Let's split the difference in hp and say 1.55lbs./hp. 7200 pounds with some fuel, ADI and a pilot for 1.69 lbs./hp
I have a hard time believing this Rocket even with chopped wings and no cowling, weighs 900 pounds- over 200 pounds lighter than my RV6 and 400 less than a 540 Rocket. If 900 is true then maybe 1250 with pilot, ADI and fuel for 1.92 lbs./hp. Add the FP prop to that which will hurt initial acceleration and climb a bunch compared to a C/S prop and I don't see the math coming out in favor of the Wankel Rocket.
Nevertheless, a very interesting and exciting project. Hats off to them for getting this far and we'll have to see how it performs. We have a saying in racing- when the flag drops, the BS stops. If they can do it in 90 seconds, they'll have the record, and I'll applaud.
Last edited:
#### Billrsv4
##### Well-Known Member
Yep Ross, well aware of the BS stopping. Built cars DSR, CSR, and Formula C. Raced motorcycles myself. I understand they had a problem with their injectors and their dry sump pump. They should have bought your EFI! I am surprised about the oil pump. Before I get off the ground I'm checking every bolt twice and safety wiring anything I don't want to fall off!
Bill
#### rv7charlie
##### Well-Known Member
Ross,
On the weight issue, it *might* be doable. Here are some data points. My O-320 powered wood prop -4 is 910 lbs empty. Bare bones VFR, but no particular attempts seem to have been made to keep it light. The Mazda Renesis FWF I'm installing on my -7 project weighed 310 lbs dry, including mount adapters, radiator, oil cooler, ducts, etc, which is likely a bit lighter than the FWF weight of my current Lyc. A 13B core with aluminum center & end housings would be ~60 lbs lighter than a stock Renesis core (which weighs ~185 lbs). Offsetting that weight savings would be the turbo, external oil pump, water pump, water injection tank, bigger cooling system, etc. If it has Rocket gear legs, they are titanium (no idea how much weight difference that would be). Missing cowl, well, I've never weighed mine but again, it's something.
HP claims for short term operation are probably reasonable, as well. Whether it will cool with the big knob forward is another question, of course....
Charlie
#### Will Aldridge
##### Well-Known Member
Log Member
The aircraft made it's first true flight (only crow hopped previously). There was a video attached to the email showing the aircraft landing. Mr Lamar doesn't like youtube and I'm not sure how to attach a video like that.
#### Will Aldridge
##### Well-Known Member
Log Member
I guess I should mention that over the past few months they removed that hole in the exhaust and added a manual wastegate and a blowoff valve that opens at 71 in. Hg so the boost doesn't run away on them.
#### rv6ejguy
##### Well-Known Member
I wonder what their aversion to using automatic wastegates is? The rest of the world has embraced these for 4 decades. A blowoff valve to limit boost is a bad idea as it causes a huge increase in compressor discharge temperatures. Boost control should be done with wastegates. They can do boost limiting through the MoTec easily to prevent engine damage.
#### Will Aldridge
##### Well-Known Member
Log Member
I think their pilot wanted the manual wastegate, but as I've mentioned I don't follow it too closely.
#### pictsidhe
##### Well-Known Member
A blowoff valve won't stop the turbo running away.
#### Will Aldridge
##### Well-Known Member
Log Member
A blowoff valve won't stop the turbo running away.
If this is a distinction without a difference, forgive me, since I'm pretty ignorant of all things turbo, but I said boost, not turbo.
If there is actually a difference I wouldn't mind someone expounding on it.
#### pictsidhe
##### Well-Known Member
A blowoff valve is a pressure sensitive valve that vents excess inlet manifold pressure. Extra air is simply 'blown off'. Think safety valve on a workshop air compressor.
A wastegate bypasses exhaust gas around the turbo, less gas and pressure through the turbine reduces its ability to compress intake air.
#### TFF
##### Well-Known Member
A blowoff valve relieves the intake pressure, but the turbo is still spinning away until the engine throttles down, either from closing it or loss of pressure. To close the blowoff valve it has to go below the rated pressure and then be brought back up. Automatic or manual wastegate, either will work, assuming the automatic will close all the way. At some point on such a climb to altitude, the engine will get to the critical altitude and lose boost even if the wastegate is fully closed. Unlike down low, hopefully they have a heated intake and turbo. That is what stopped Bruce Bohannon from breaking the absolute world record.
#### rv6ejguy
##### Well-Known Member
With the single stage turbo setup, Bruce Bohannon simply ran out of manifold pressure to go any higher. Nothing to do with heated intake and turbos since they are very well heated by the compressor air at that pressure ratio, even after intercooling. The canopy was severely iced up if I recall though.
Bruce needed staged turbos to get that bit higher, but CHTs and CDTs were off the clock at 47,000 feet already.
#### pictsidhe
##### Well-Known Member
A quick Google revealed that the manual wastegate bracket failed...
I'm trying to think of a good reason for a manual wastegate:
1. Makes money for someone.
2. It's easier than making an adjustable automatic wastegate.
Does anyone have a good reason to use a manual wastegate?
#### rv6ejguy
##### Well-Known Member
A decent TIAL automatic wastegate is around $350 http://www.tialsport.com/index.php/tial-products/wastegates and you can make it cockpit adjustable for about$15 via a miniature air regulator. Seems like on an aircraft worth tens of thousands of dollars it would be a good investment. With a time to climb aircraft, you're gonna be busy flying it, not much time to monitor MP and adjust a wastegate every couple thousand feet which you are going through every 10-15 seconds. You just don't need additional pilot workload in this attempt.
I've said it before in this thread and others, PL doesn't listen to other people with more experience in a field and tends to do things his own way regardless of whether it's a good idea or not. Some would say he's thinking outside the box, others would say he has to learn the hard way.
Hard way or not, I hope they get things worked out safely and take a good swipe at this record. Would be a real accomplishment to beat the existing time.
#### pictsidhe
##### Well-Known Member
I suspect the first serious flight will have a 'problem' with the manual wastegate and it will get replaced.
Out of interest, I just had a look at the feasibility of FAR 103 to altitude. 55 knots straight up is about 100 seconds. A slippery 103 and around 150 hp should do it. Sponsors, form an orderly queue!
#### otter13805
##### New Member
Because we could not tune the **** thing without a manual one ... that's why -- Paul Lamar
#### pictsidhe
##### Well-Known Member
We have a winner!Keep us appraised of progress, or we'll be forced to speculate some more | 2021-05-17 21:29:42 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.2825717031955719, "perplexity": 3549.2517704175584}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-21/segments/1620243991870.70/warc/CC-MAIN-20210517211550-20210518001550-00232.warc.gz"} |
http://jsxgraph.fineview.com/ | 1. ##### Question 214
Coordinate Geometry - Circles - Geometry - Investigate Equation of Circle: ${\left( {x - a} \right)^2} + {\left( {y - b} \right)^2} = {r^2}$
2. ##### Question 4006
was: Coordinate Geometry - Circles - Geometry - Equation of a circle 2
now: Coordinate Geometry - Circles - Geometry - Matching Circles
3. ##### Question 1684
was: Coordinate Geometry - Tangents and normals - The parabola
now: Coordinate Geometry - Tangents and normals - Investigate Equation of Parabola
4. ##### Question 1686
was: Coordinate Geometry - Tangents and normals - The ellipse
now: Coordinate Geometry - Tangents and normals - Investigate Equation of Ellipse
5. ##### Question 1688
was: Coordinate Geometry - Tangents and normals - The hyperbola
now: Coordinate Geometry - Tangents and normals - Investigate Equation of Hyperbola
6. ##### Question 1690
was: Coordinate Geometry - Tangents and normals - The rectangular hyperbola
now: Coordinate Geometry - Tangents and normals - Investigate Equation of Rectangular Hyperbola
7. ##### Question 1177
was: Coordinate Geometry - Tangents and normals - The circle
now: retired
8. ##### Question 1653
was: Coordinate Geometry - Conics - The circle
now: merge with Question 214
9. ##### Question 1652
was: Coordinate Geometry - Conics - The family of conics
now: Coordinate Geometry - Conics - Family of Conics using Polar Equations
10. ##### Question 1176
was: Coordinate Geometry - Tangents and normals - The parabola
now: Coordinate Geometry - Tangents and normals - The parabola
11. ##### Question 1202
was: Coordinate Geometry - Tangents and normals - The ellipse
now: Coordinate Geometry - Tangents and normals - The ellipse
12. ##### Question 1204
was: Coordinate Geometry - Tangents and normals - The hyperbola
now: Coordinate Geometry - Tangents and normals - The hyperbola
13. ##### Question 1203
was: Coordinate Geometry - Tangents and normals - Other curves
now: Coordinate Geometry - Tangents and normals - Other curves
14. ##### Question 1178
Coordinate Geometry - Tangents and normals - Investigate the Rectangular Hyperbola: $xy = {c^2}$ or $\left( {ct,{c \over t}} \right)$
15. ##### Question 1654
was: Coordinate Geometry - Conics - The parabola
now: Coordinate Geometry - Conics - The parabola
16. ##### Question 1655
was: Coordinate Geometry - Conics - The ellipse
now: Coordinate Geometry - Conics - The ellipse
17. ##### Question 1656
was: Coordinate Geometry - Conics - The hyperbola
now: Coordinate Geometry - Conics - The hyperbola
18. ##### Question 1657
was: Coordinate Geometry - Conics - Rectangular hyperbola
now: Coordinate Geometry - Conics - Rectangular hyperbola
19. ##### Question 1181
was: Coordinate geometry - Parametric curves - Various curves
now: Coordinate geometry - Parametric curves - Various curves
20. ##### Question 1182
was: Coordinate geometry - Parametric curves - Various curves 2
now: Coordinate geometry - Parametric curves - Various curves 2
21. ##### Question 1183
was: Coordinate geometry - Parametric curves - Various curves 3
now: Coordinate geometry - Parametric curves - Various curves 3
22. ##### Question 1184
was: Coordinate geometry - Parametric curves - Various curves 4
now: Coordinate geometry - Parametric curves - Various curves 4
23. ##### Question 1666
was: Coordinate geometry - Parametric curves - The parabola
now: Coordinate geometry - Parametric curves - The parabola
24. ##### Question 1668
was: Coordinate geometry - Parametric curves - The ellipse
now: Coordinate geometry - Parametric curves - The ellipse
25. ##### Question 1669
was: Coordinate geometry - Parametric curves - The hyperbola
now: Coordinate geometry - Parametric curves - The hyperbola
26. ##### Question 1670
was: Coordinate geometry - Parametric curves - The rectangular hyperbola
now: Coordinate geometry - Parametric curves - The rectangular hyperbola
27. ##### Question 1671
was: Coordinate geometry - Parametric curves - Cycloids
now: Coordinate geometry - Parametric curves - Cycloids
28. ##### Question 1672
was: Coordinate geometry - Parametric curves - More cycloids
now: Coordinate geometry - Parametric curves - More cycloids | 2017-09-26 02:00:20 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.19188399612903595, "perplexity": 10215.612251354565}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-39/segments/1505818693940.84/warc/CC-MAIN-20170926013935-20170926033935-00287.warc.gz"} |
https://physics.stackexchange.com/questions/242727/could-we-curve-the-flight-path-of-a-photon | # Could we curve the flight path of a photon?
I was wondering about photons' interaction with matter, and why photons don't slow down. They seem to always bounce in a straight line at the same speed (I think), as if some force is charging them forward after the bounce. First, what is this? I've heard of experiments where we actually did slow them down, and they can be absorbed and bounced by matter, so I wanted to know if they interact with matter in such a way that we could spin one, like a curve ball thrown by a pitcher, and make them curve their flight path.
Can someone explain this, in English (back it up by math if you need to but I'm not a physicist, I just like to learn about the fundamental ideas)?
• Photons don't move, at all. They are simply quanta of the electromagnetic field and they only exist where you measure them/where an actual interaction occurs. It's the field that "moves". How the field "moves" is very well known. In the classical limit it's given by Maxwell's equations and in the quantum limit we have the equations of quantum electrodynamics, which, of course, can couple to charged matter. – CuriousOne Mar 10 '16 at 23:31
• @CuriousOne Hmm, so if you measure the same one in two different places doesn't that mean it has moved? – J.Todd Mar 10 '16 at 23:33
• @Viziionary There is no way to even know whether the two photons you measured are the same ones, as photons are not individuals but indistinguishable quanta. Being the same is just not a question you can ask about photons (just as you cannot ask whether two units of currency in your bank account are the same). – Sebastian Riese Mar 10 '16 at 23:37
• You can't write "Kilroy was here!" on a photon (that is really a quantum state, rather than a particle), so you can't distinguish "them". The entire mental model of little hard balls flying trough the universe is 100% wrong. – CuriousOne Mar 10 '16 at 23:38
• @SebastianRiese but couldn't I measure the same location again and check whether the photon is still there? If there is one at one instant, and then at that very precise very small location, in the next very small fraction of a nano second, if there are no photons filling that point of space, can't we assume that photons do move? – J.Todd Mar 10 '16 at 23:42
The lenses in my reading glasses bend the path of photons, as does gravity. In matter, photons move at a slower speed than $c$. In Bose-Einstein condensates they can even be brought to a halt.
so I wanted to know if they interact with matter in such a way that we could spin one, like a curve ball thrown by a pitcher, and make them curve their flight path.
Photons interact with matter through the electromagnetic interaction. There can be elastic scattering of a photon of energy E with a charged particle where only the angle will change, but it happens at one point in space and time, not along a continuous curve.
So individual photons do not act like a classical ball, which should be expected as they are quantum mechanical entities.
Light, which is composed of an innumerable number of photons, does display a "curve" in space collectively in a lattice, by addition of the behavior of the collective photon ensemble, as in optical fibers.
There also exists gravitational lensing of light:
A gravitational lens refers to a distribution of matter (such as a cluster of galaxies) between a distant source and an observer, that is capable of bending the light from the source, as it travels towards the observer.
Individual photons are elastically scattering but the light wave displays curvature. | 2021-01-26 09:16:33 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 1, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.622429370880127, "perplexity": 477.6352397758773}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-04/segments/1610704799711.94/warc/CC-MAIN-20210126073722-20210126103722-00479.warc.gz"} |
https://wiki.q-researchsoftware.com/wiki/Weighting_-_Apply_Weight_to_Column_Span | # Weighting - Apply Weight to Column Span
This rule applies a weight to all the columns within a column span of a table. This is achieved by generating a new copy of the table with the weight applied and copying the statistics into the main table.
## Technical details
The significance testing results for the weighted columns are taken from the weighted table and no attempt is made to reconcile tests between the weighted and unweighted portions of the table.
## How to apply this rule
### For the first time in a project
• Select the table(s)/chart(s) that you wish to apply the rule to.
• Start typing the name of the Rule into the Search features and data box in the top right of the Q window.
• Click on the Rule when it appears in the QScripts and Rules section of the search results.
OR
• Select Automate > Browse Online Library.
• Choose this rule from the list.
### Additional applications of the rule
• Select a table or chart that has the rule and any table(s)/chart(s) that you wish to apply the rule to.
• Click on the Rules tab (bottom-left of the table/chart).
• Select the rule that you wish to apply.
• Click on the Apply drop-down and choose your desired option.
• Check New items to have it automatically applied to new items that you create. Use Edit > Project Options > Save as Template to create a new project template that automatically uses this rule.
## Removing the rule
• Select the table(s)/chart(s) that you wish to remove the rule from.
• Press the Rules tab (bottom-left corner).
• Press Apply next to the rule you wish to remove and choose the appropriate option.
## How to modify the rule
• Click on the Rules tab (bottom-left of the table/chart).
• Select the rule that you wish to modify.
• Click Edit Rule and make the desired changes. Alternatively, you can use the JavaScript below to make your own rule (see Customizing Rules).
## JavaScript
You can find a simpler version of this code, which does not contain the controls, here.
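For reference, a stripped-down variant along those lines could look like the sketch below. This is our own illustration, not the linked page's exact code: the span label and weight variable name are placeholders to replace with ones that exist in your project, and it copies the statistics but omits the significance-testing and Statistics - Below handling of the full rule, which follows after it.

table.requireNumericTable();
let span_label = 'My span'; // placeholder: label of the column span to weight
let weight = 'my_weight'; // placeholder: name of a weight variable in the data file
let spans = table.columnSpans;
let span_index = spans.map(function (s) { return s.label; }).indexOf(span_label);
let columns = spans[span_index].indices; // column indices covered by the span
let weighted_table = calculateTable(table.blue, table.brown, ['!UseQFilters'], weight);
table.statistics.forEach(function (stat) {
    let weighted_values = weighted_table.get(stat);
    let main_values = table.get(stat);
    for (let row = 0; row < table.numberRows; row++)
        for (let i = 0; i < columns.length; i++)
            main_values[row][columns[i]] = weighted_values[row][columns[i]];
    table.set(stat, main_values); // write the weighted values back into the displayed table
});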
table.requireNumericTable();
table.requireOriginalRowsColumns();
includeWeb('Table JavaScript Utility Functions');
includeWeb('JavaScript Array Functions');
includeWeb('QScript Utility Functions');
if (!table.columnLabels)
form.ruleNotApplicable('the table only has a single column.');
let spans = table.columnSpans;
let span_labels = spans.map(function(s) { return s.label });
if (arrayHasDuplicateElements(span_labels))
    span_labels = enumerateDuplicatesInStringArray(span_labels, '(', ')');
if (span_labels.length < 1)
form.ruleNotApplicable('no column spans exist for this table');
if (!table.blueQuestion.dataFile.equals(table.brownQuestion.dataFile))
form.ruleNotApplicable(correctTerminology('table has questions from different data files'));
let variables = table.blueQuestion.dataFile.variables;
let weight_variables = variables.filter(function(v) { return v.question.isWeight; });
let weight_names = weight_variables.map(function(v) { return v.name; });
let weight_labels = weight_variables.map(function(v) { return v.question.name; });
if (weight_variables.length < 1)
form.ruleNotApplicable('no weight variables have been found.');
// Set up controls for user input.
let label_column = form.newLabel('Column span to weight:');
let combobox_column = form.newComboBox('column', span_labels);
combobox_column.setDefault(span_labels[0]);
let label_weight = form.newLabel('Weight:');
let combobox_weight = form.newComboBox('weight', weight_labels);
combobox_weight.lineBreakAfter = true;
let description = form.newLabel('This rule applies a weight to all the columns within a specified column span of a table.');
description.lineBreakAfter = true;
form.setInputControls([description, label_column, combobox_column, label_weight, combobox_weight]);
// Prevent Statistics - Right and Statistics - Below
if (belowTableExists())
    if (fileFormatVersion() <= 8.41 && below_table.statistics.length > 0)
        table.suppressOutput('This table cannot be displayed because Statistics - Below have been selected for this ' +
            'table, and this is not compatible with this Rule. Either remove the Statistics - ' +
            'Below selection (right-click on the table) or remove the Rule.');
if (rightTableExists())
    if (right_table.statistics.length > 0)
        table.suppressOutput('This table cannot be displayed because Statistics - Right have been selected for this ' +
            'table, and this is not compatible with this Rule. Either remove the Statistics - ' +
            'Right selection (right-click on the table) or remove the Rule.');
let span = combobox_column.requireValue();
let selected_weight_label = combobox_weight.requireValue();
let weight = weight_names[weight_labels.indexOf(selected_weight_label)];
// Obtain the column indices for the selected span
let spandex = span_labels.indexOf(span);
let columns = spans[spandex].indices;
let rule_name = 'Weight column span: "' + span + '" by "' + selected_weight_label + '"';
form.setSummary(rule_name);
let weighted_table = calculateTable(table.blue, table.brown, ['!UseQFilters'], weight);
// Copy the weighted statistics for the column to the current table
let stats = table.statistics;
prepareAllTableStats(table);
for (let i = 0; i < columns.length; i++) {
    let column = columns[i];
    for (let stat = 0; stat < stats.length; stat++) {
        let weighted_values = weighted_table.get(stats[stat]);
        let main_values = table.get(stats[stat]);
        for (let row = 0; row < table.numberRows; row++)
            main_values[row][column] = weighted_values[row][column];
        // Set the altered values to the main table.
        table.set(stats[stat], main_values);
    }
}
// Copy the Significance Testing from the weighted table
let cell_arrows = table.cellArrows;
let cell_font_colors = table.cellFontColors;
let cell_significance = table.cellSignificance;
let weight_cell_arrows = weighted_table.cellArrows;
let weight_cell_font_colors = weighted_table.cellFontColors;
let weight_cell_significance = weighted_table.cellSignificance;
for (let i = 0; i < columns.length; i++) {
    let column = columns[i];
    for (let row = 0; row < table.numberRows; row++) {
        cell_arrows[row][column] = weight_cell_arrows[row][column];
        cell_font_colors[row][column] = weight_cell_font_colors[row][column];
        cell_significance[row][column] = weight_cell_significance[row][column];
    }
}
table.cellArrows = cell_arrows;
table.cellFontColors = cell_font_colors;
table.cellSignificance = cell_significance;
// In newer versions of Q, also copy Statistics - Below
if (fileFormatVersion() > 8.41 && belowTableExists()) {
    let weighted_marginal_table = Q.calculateTable(table.blue, table.brown, ['!UseQFilters'], weight, 'Below');
    // Prepare marginal statistics so that they are all available
    prepareAllTableStats(below_table);
    // Significance testing for statistics - below
    // Copy the Significance Testing from the weighted table
    let weight_cell_arrows = weighted_marginal_table.cellArrows;
    let weight_cell_font_colors = weighted_marginal_table.cellFontColors;
    let weight_cell_significance = weighted_marginal_table.cellSignificance;
    let cell_arrows = below_table.cellArrows;
    let cell_font_colors = below_table.cellFontColors;
    let cell_significance = below_table.cellSignificance;
    for (let i = 0; i < columns.length; i++) { // looping through the columns in the selected span
        let column_index = columns[i];
        weighted_marginal_table.availableStatistics.forEach(function (stat) {
            let temp_stats = weighted_marginal_table.get(stat);
            let current_stats = below_table.get(stat);
            // Copy stats depending on the orientation of the table that has been returned by table.get
            if (temp_stats.length == 1) {
                current_stats[0][column_index] = temp_stats[0][column_index];
            } else {
                current_stats[column_index][0] = temp_stats[column_index][0];
            }
            below_table.set(stat, current_stats);
        });
        if (cell_arrows.length == 1) {
            cell_arrows[0][column_index] = weight_cell_arrows[0][column_index];
            cell_font_colors[0][column_index] = weight_cell_font_colors[0][column_index];
            cell_significance[0][column_index] = weight_cell_significance[0][column_index];
        } else {
            cell_arrows[column_index][0] = weight_cell_arrows[column_index][0];
            cell_font_colors[column_index][0] = weight_cell_font_colors[column_index][0];
            cell_significance[column_index][0] = weight_cell_significance[column_index][0];
        }
    }
    below_table.cellArrows = cell_arrows;
    below_table.cellFontColors = cell_font_colors;
    below_table.cellSignificance = cell_significance;
}
let footers = table.extraFooters;
footers.push(span + ' weighted by ' + selected_weight_label);
table.extraFooters = footers;
// Returns true if Statistics - Right are available for this table.
// This function needs to be kept in the main body of the rule to ensure
// that right_table is included properly before the rule code is
// executed.
function rightTableExists() {
    let exists = true;
    try {
        right_table.statistics;
    } catch (e) {
        exists = false;
    }
    return exists;
}
// Force Q to calculate all of the available statistics in the table before continuing
function prepareAllTableStats(table) {
    if (table.availableStatistics.indexOf('Minimum') >= 0)
        table.get('Minimum');
} | 2023-02-01 05:46:38 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.24581025540828705, "perplexity": 9912.475014852696}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-06/segments/1674764499911.86/warc/CC-MAIN-20230201045500-20230201075500-00879.warc.gz"} |
https://testbook.com/objective-questions/bn/mcq-on-instruction-pipelining--5eea6a0a39140f30f369dae9 | # Instruction Pipelining MCQ Quiz in Bengali - Objective Question with Answer for Instruction Pipelining - Free Download [PDF]
Last updated on Oct 27, 2022
Get Instruction Pipelining multiple choice questions (MCQ quiz) with answers and detailed solutions. Download this free Instruction Pipelining MCQ quiz PDF and prepare for your upcoming exams like Banking, SSC, Railway, UPSC, and State PSC.
## Top Instruction Pipelining MCQ Objective Questions
#### Instruction Pipelining Question 1:
A five-stage pipeline has stage delays of 150, 120, 150, 160 and 140 nanoseconds. The registers that are used between the pipeline stages have a delay of 5 nanoseconds each.
The total time to execute 100 independent instructions on this pipeline, assuming there are no pipeline stalls, is ______ nanoseconds.
#### Instruction Pipelining Question 1 Detailed Solution
Data:
Number of instructions = n = 100
Number of stages = k = 5
Register delay = 5 ns
Calculation:
Clock cycle time of the five-stage pipeline for a single instruction = T = Max(150, 120, 150, 160, 140) + register delay
= 160 + 5 = 165 ns
The time required to execute n instructions with pipeline = [k + (n – 1)]T
= (5 + (100 - 1))×165 = 17160 ns
#### Instruction Pipelining Question 2:
Instruction pipelining improves CPU performance due to
1. reduced memory access time
3. use of additional processor unit
4. efficient utilization of the processor hardware.
Option 4 : efficient utilization of the processor hardware.
#### Instruction Pipelining Question 2 Detailed Solution
Concept:
• The main goal of pipelining is to balance the length of each pipelined stage.
• If the stages are perfectly balanced, then the time for instruction on the pipelined machine is reduced.
• Pipelining does not decrease the time for individual instruction execution. Instead, it increases instruction throughput.
• This is achieved by using the processor hardware efficiently.
#### Instruction Pipelining Question 3:
Which one of the following is false about Pipelining?
1. Increases the CPU instruction throughput
2. Reduces the execution time of an individual instruction
3. Increases the program speed
4. 1 and 2
Option 2 : Reduces the execution time of an individual instruction
#### Instruction Pipelining Question 3 Detailed Solution
Concept:
Pipelining is an implementation technique whereby multiple instructions are overlapped in execution. It is like an assembly line.
Explanation:
In pipelining, each step operates parallel with other steps. It stores and executes instructions in an orderly manner.
The main advantages of using pipeline are :
• It increases the overall instruction throughput.
• Pipeline is divided into stages and stages are connected to form a pipe-like structure.
• We can execute multiple instructions simultaneously.
• It makes the system reliable.
• It increases the program speed.
• It reduces the overall execution time but does not reduce the individual instruction time.
Therefore, option 2 is the false statement about pipelining.
#### Instruction Pipelining Question 4:
Consider a 5-stage instruction pipeline where the delay of S4 is half that of S1, and S2 has half the delay of S3. S1 has a delay of 10 ns, and S5 and S3 have the same delay as S1. What is the speedup achieved?
1. 4
2. 3
3. 2.75
4. 2
Option 1 : 4
#### Instruction Pipelining Question 4 Detailed Solution
Formula:
$$\text{Speed up} = \frac{\text{execution time without pipeline}}{\text{execution time with pipeline}}$$
Explanation:
Given:
Delay of stage S1 = 10 ns
Stage delays are represented as (in ns):

| S1 | S2 | S3 | S4 | S5 |
|----|----|----|----|----|
| 10 | 5  | 10 | 5  | 10 |
Execution time with pipeline (Tp) = max {all stage delays}
= 10 ns
Execution time without pipeline(Tn) = sum of all stage delays
= 10 + 5 + 10 + 5 + 10 = 40 ns
$$\text{Speed up} = \frac{40}{10} = 4$$
#### Instruction Pipelining Question 5:
The parallel transmission of digital data:
1. is much slower than the serial transmission of data.
2. requires only one signal line between sender and receiver.
3. requires as many signal lines between sender and receiver as there are data bits.
4. is less expensive than the serial method of data transmission.
Option 3 : requires as many signal lines between sender and receiver as there are data bits.
#### Instruction Pipelining Question 5 Detailed Solution
Data transmission:
• It is a process of transferring data between two or more digital devices
• Data is transmitted from one device to another in digital or analogue format
• There are two methods used to transmit data between digital devices: serial transmission and parallel transmission
Serial transmission:
• Transferring one bit at a time, therefore, it needs only one wire
• It reduces costs for wire but also slows the speed of transmission
Parallel transmission:
• Multiple bits are sent on different channels (wires) simultaneously within the same cable
• Data can be sent much faster.
• It is more expensive
#### Instruction Pipelining Question 6:
A non-pipelined system takes 30 ns to process a task. The same task can be processed in a four-segment pipeline with a clock cycle of 10 ns. Determine the speed up of the pipeline for 100 tasks.
1. 3
2. 4
3. 3.91
4. 2.91
Option 4 : 2.91
#### Instruction Pipelining Question 6 Detailed Solution
Concept:
$$\text{Speed up} = \frac{n \times t_n}{(n + k - 1)\,t_p}$$
Given:
For a non-pipelines system,
time to process a task, tn = 30 ns
For a pipelined system,
number of segments, k = 4
clock cycle of each segment, tp = 10 ns
Calculation:
$$\text{Speed up} = \frac{100 \times 30}{(100 + 4 - 1) \times 10}$$
∴ speed up = 2.91
#### Instruction Pipelining Question 7:
A pipelined processor executing with a constant clock rate has 5 stages. The five stages are Fetch, Decode, Execute, Memory Access and Write Back. Latency of the stages are 100, 80, 120, 150 and 140 nanoseconds respectively. If a register which has a delay of 10 ns is used between the different stages of the pipelined processor. The time taken to execute 2001 instruction for a pipelined processor is _____ microseconds.
#### Instruction Pipelining Question 7 Detailed Solution
In the pipeline, the clock cycle time = Max(100, 80, 120, 150, 140) + 10 = 160 ns
Time taken by the 1st instruction = 1 × 5 × 160 ns
Remaining 2000 instructions = 2000 × 160 ns
Total time taken = (1 × 5 × 160 + 2000 × 160) ns
Total time taken = 2005 × 160 = 320800 ns
320800 ns = 320.8 μs ≈ 321 μs
#### Instruction Pipelining Question 8:
Consider an instruction pipeline having 5 stages:
IF = Instruction Fetch stage
ID = Instruction Decode stage
OF = Operand Fetch stage
EX = Execute stage
WB = Write Back stage
Now consider the following instructions:
| Instruction | Assembly | Meaning |
|---|---|---|
| I1 | ADD R1, R9, R10 | R1 ← R9 + R10 |
| I2 | DIV R4, R2, R3 | R4 ← R2 / R3 |
| I3 | MUL R5, R4, R1 | R5 ← R4 × R1 |
| I4 | ADD R6, R4, R5 | R6 ← R4 + R5 |
| I5 | SUB R8, R6, R7 | R8 ← R6 - R7 |
Each stage takes 1 clock cycle for all the instructions. If x is the number of clock cycles required without operand forwarding and y is the number of clock cycles required with operand forwarding, then find the value of x/y (Corrected up to 2 decimal places). Here operand is forwarded from EX stage to OF stage.
#### Instruction Pipelining Question 8 Detailed Solution
Here I3 is dependent both on I1 and I2.
I4 is dependent both on I2 and I3.
I5 is dependent on I4 only.
NOTE: - We can fetch both the operands in one cycle only, one with the corrected value and other with some dummy value. The operand with dummy value will be updated after Write-Back stage.
Without operand-forwarding:
Without operand forwarding:

| Instruction \ Cycle | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9 | 10 | 11 | 12 |
|---|---|---|---|---|---|---|---|---|---|---|---|---|
| I1 | IF | ID | OF | EX | WB | | | | | | | |
| I2 | | IF | ID | OF | EX | WB | | | | | | |
| I3 | | | IF | ID | - | OF | EX | WB | | | | |
| I4 | | | | IF | ID | - | - | OF | EX | WB | | |
| I5 | | | | | IF | ID | - | - | - | OF | EX | WB |

Total clock cycles = 12

x = 12
With operand-forwarding:
With operand forwarding:

| Instruction \ Cycle | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9 |
|---|---|---|---|---|---|---|---|---|---|
| I1 | IF | ID | OF | EX | WB | | | | |
| I2 | | IF | ID | OF | EX | WB | | | |
| I3 | | | IF | ID | OF | EX | WB | | |
| I4 | | | | IF | ID | OF | EX | WB | |
| I5 | | | | | IF | ID | OF | EX | WB |

Total clock cycles = 9

y = 9
$$\frac{x}{y} = \frac{12}{9} = 1.3333$$

= 1.33 (corrected up to 2 decimal places)
#### Instruction Pipelining Question 9:
A non-pipelined system takes 50 ns to process a task. The same task can be processed in a six-segment pipeline with a clock cycle of 10 ns. The maximum speedup that can be achieved using a pipeline system is
1. 0.2
2. 1
3. 5
4. 10
Option 3 : 5
#### Instruction Pipelining Question 9 Detailed Solution
Concept:
The speedup factor is defined as the ratio of the time required for non-pipelined execution to the time required for pipelined execution.
$$S = \frac{t_n}{t_p}$$
Where, tn = time for non-pipelined execution
tp = time for pipelined execution
S = speed up factor
Calculation:
Given, tn = 50 ns
tp = 10 ns
$$S = \frac{50}{10} = 5$$
Extra Information:
Speed up factor is also defined as:
$$S = \frac{nk}{n + (k - 1)}$$
n = number of instructions
k = number of stages in the pipeline.
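A one-line consequence of this formula, worth noting: the speedup is bounded by the number of stages,

$$\lim_{n \to \infty} \frac{nk}{n + (k - 1)} = k,$$

which is why the ideal speedup of a k-stage pipeline is quoted as k.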
#### Instruction Pipelining Question 10:
The speed gained by an 'n' segment pipeline executing 'm' tasks is:
1. $$\frac {(n + m - 1)}{mn}$$
2. $$\frac {mn}{(n + m - 1)}$$
3. $$\frac {n+m}{(mn - 1)}$$
4. $$\frac {n + m}{(mn + 1)}$$
Option 2 : $$\frac {mn}{(n + m - 1)}$$
#### Instruction Pipelining Question 10 Detailed Solution
Data:
number of stages (segments) = n
number of tasks = m

Non-pipelined:
Assume each stage takes 1 unit of time.
Time taken (Twp) = number of stages × number of tasks
Twp = n·m

Pipelined:
Only the 1st task takes n units of time; each of the remaining (m - 1) tasks completes 1 unit later.
Time taken (Tp) = (n + m - 1)
$$S = \frac{T_{wp}}{T_{p}} = \frac{nm}{n+m-1}$$
Therefore option 2 is correct.
https://gateoverflow.in/810/gate-cse-2002-question-1-6
Which of the following is true?
1. The set of all rational negative numbers forms a group under multiplication.
2. The set of all non-singular matrices forms a group under multiplication.
3. The set of all matrices forms a group under multiplication.
4. Both B and C are true.
How is B correct? Unless we fix the size of the non-singular matrices to, say, n×n, the multiplication operator will not be closed. For the set of all non-singular matrices, multiplication will not even be defined for some pairs.
We know that under a binary operation *:
1. Groupoid → closed
2. Semigroup → associative
3. Monoid → identity element must be present
4. Group → inverse must be present
5. Abelian group → commutative
(To remember this order I use the mnemonic "Ground se mat Gharjao app".)
If a matrix $A$ is non-singular then $A^{-1}$ exists.
To make a set of matrices a group we must guarantee that an inverse exists, but we cannot guarantee that for every type of matrix, only for non-singular ones.
1. False. Multiplication of two negative rational numbers give positive number. So, closure property is not satisfied.
2. True. Matrices have to be non-singular (determinant !=0) for the inverse to exist.
3. False. Singular matrices do not form a group under multiplication.
4. False as C is false.
Is the product of two non-singular matrices always non-singular?
If $A$ and $B$ are non-singular matrices, i.e. $\det(A) \neq 0$ and $\det(B) \neq 0$,
then $\det(AB) = \det(A)\,\det(B) \neq 0$,
so $AB$ is also always non-singular.
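A quick numerical check of this identity with NumPy (an illustrative sketch; the random matrices are hypothetical examples):

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.random((3, 3))  # random 3x3 matrices are almost surely non-singular
B = rng.random((3, 3))

# det(AB) = det(A) * det(B) != 0, so the product stays non-singular.
print(np.linalg.det(A @ B))
print(np.linalg.det(A) * np.linalg.det(B))  # equal up to rounding error
```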
Sir, if the matrix orders are not compatible for multiplication, e.g. $A_{n \times m}$ and $B_{p \times q}$, then how can it be a group, sir?
If the matrices were rectangular, how could we check whether they are singular or non-singular?
So I think singular/non-singular matrices means that they are square matrices, and there are no rectangular matrices ($A_{n \times m}$) here.
Option b
C is false because singular matrices do not have an inverse.
But why is option A wrong?
The product of two negative numbers is a positive number, so the given set is not even closed; that is why A is not a group.
If a set with a binary operation is a group then it must satisfy:
1) Closure
2) Associativity
3) Identity element exists
4) Inverse exists for every element
If a matrix is singular then its inverse does not exist. So option C is wrong.
If a matrix is singular (determinant = 0) then its inverse DOES NOT EXIST.
https://www.meczennicyploccy.pl/en/oxygen/01-Jan/11252.html
# Japan oxygen can be found within two
### Minimum Oxygen Concentration for Human Breathing | Sciencing
Mar 10, 2018·Humans need oxygen to live, but not as much as you might think. The minimum oxygen concentration in the air required for human breathing is 19.5 percent. The human body takes the oxygen breathed in from the lungs and transports it to the other parts of the body via the body's red blood cells. Each cell uses and requires oxygen to thrive.
### Should Japan dump radioactive water from Fukushima into ...
Oct 23, 2020·Around 1.2 million tonnes of water contaminated by radioactive substances from the 2011 Fukushima disaster will be dumped in the Pacific ocean, under …
### Oxygen Transport - Haemoglobin - Bohr Shift ...
May 20, 2021·Transport of Oxygen. Oxygen is transported in the blood in two ways: dissolved in the blood (1.5%) or bound to haemoglobin (98.5%). Bound to Haemoglobin: Once oxygen has entered the blood from the lungs, it is taken up by haemoglobin (Hb) in the red blood cells. Haemoglobin is a protein found in red blood cells that is comprised of four subunits: two alpha subunits and two beta subunits.
### Clarification of OSHA's requirement for breathing air to ...
Apr 02, 2007·At oxygen levels of 10 to 14 percent, faulty judgment, intermittent respiration, and exhaustion can be expected even with minimal exertion (Exs. 25-4 and 150). Breathing air containing 6 to 10 percent oxygen results in nausea, vomiting, lethargic movements, and perhaps unconsciousness.
### 3 Ways to Increase Oxygen Levels in Your Home - wikiHow
Oct 22, 2019·There are 20 references cited in this article, which can be found at the bottom of the page. This article has been viewed 90,236 times. If you’re struggling with a chronic illness like COPD, heart failure, or sleep apnea, increasing your oxygen at home can help improve your symptoms. [1]
### What Causes Ocean "Dead Zones"? - Scientific American
Sep 25, 2012·The cause of such “hypoxic” (lacking oxygen) conditions is usually eutrophication, an increase in chemical nutrients in the water, leading to excessive blooms of …
### How to Make Oxygen and Hydrogen from Water Using Electrolysis
Apr 13, 2020·The process of splitting water (H2O) into its atomic components (hydrogen and oxygen) using electricity is known as electrolysis. This experiment has significant implications in terms of what these 2 gases can be used for in their own right, with hydrogen being one of the cleanest sources of energy we have access to.
### Polyethylene terephthalate - Wikipedia
Plastic bottles made from PET are widely used for soft drinks (see carbonation).For certain specialty bottles, such as those designated for beer containment, PET sandwiches an additional polyvinyl alcohol (PVOH) layer to further reduce its oxygen permeability.. Biaxially oriented PET film (often known by one of its trade names, "Mylar") can be aluminized by evaporating a thin film of metal ...
### 10 Uses for Oxygen | Sciencing
Jan 28, 2020·Oxygen makes up about 21 percent of the atmosphere. It is vital in industrial, medical and engineering settings. It is needed in home oxygen therapy, aerospace and metallurgy applications. The many functions of oxygen are often masked by the fact that most people regard it …
### Mammalian enteral ventilation ameliorates respiratory ...
May 14, 2021·O2-PFD can be recycled after collection into an outlet bag (Argyle™ Dennis™ colorectal tube, CardinalHealth™, Tokyo, Japan) placed in the rectum of the pig. Blood pressure, peripheral arterial oxygen saturation (SpO2), end-tidal carbon dioxide (EtCO2) and arterial blood gas analysis were estimated during the experiments. We used iSTAT ...
### Transport of Oxygen in the Blood | Biology for Majors II
The oxygen-carrying capacity of hemoglobin determines how much oxygen is carried in the blood. In addition to $\text{P}_{\text{O}_2}$, other environmental factors and diseases can affect oxygen carrying capacity and delivery. Carbon dioxide levels, blood pH, and body temperature affect oxygen-carrying capacity (Figure 2).
### COVID-19 Strategy: The Japan Model – The Diplomat
Apr 24, 2020·According to Ministry of Health, Labour and Welfare statistics, 11,772 Japanese had been infected with the COVID-19 coronavirus as of April 23, with 287 total deaths. These numbers have been ...
### Japan researchers say ozone effective in neutralising ...
Aug 26, 2020·Ozone, a type of oxygen molecule, is known to inactivate many pathogens, and previously experiments have shown that high concentrations, between 1-6 ppm, were effective against the coronavirus but ...
### Obligate anaerobe - Wikipedia
Oxygen can also damage obligate anaerobes in ways not involving oxidative stress. Because molecular oxygen contains two unpaired electrons in the highest occupied molecular orbital, it is readily reduced to superoxide (O2−) and hydrogen peroxide (H2O2) within cells.
### Molecular oxygen - Energy Education
Molecular oxygen (O2) is a diatomic molecule that is composed of two oxygen atoms held together by a covalent bond. Molecular oxygen is essential for life, as it is used for respiration by many organisms. It's also essential for fossil fuel combustion. Molecular oxygen is very chemically reactive, and tends to form oxides by reaction with other elements and compounds quite easily.
### Anal oxygen supply to treat worst-case breathing woes ...
May 15, 2021·COVID-19 Update. Visit this page for the latest news on Japan’s battle with the novel coronavirus pandemic. Reporter’s COVID-19 Notebook. A mother of two …
### Hydrogen fuel - Wikipedia
Hydrogen fuel is a zero carbon fuel burned with oxygen. It can be used in fuel cells or internal combustion engines.It has begun to be used in commercial fuel cell vehicles, such as passenger cars, and has been used in fuel cell buses for many years. It is also used as a fuel for spacecraft propulsion.. As of 2018, the majority of hydrogen (∼95%) is produced from fossil fuels by steam ...
### How oxygen is made - material, manufacture, making ...
Worldwide the five largest oxygen-producing areas are Western Europe, Russia (formerly the USSR), the United States, Eastern Europe, and Japan. Raw Materials: Oxygen can be produced from a number of materials, using several different methods.
### Fig 2. Remarkable improvement of fatal hypoxemia by ...
May 14, 2021·Tokyo, Japan - Oxygen is crucial to many forms of life. Its delivery to the organs and tissues of the body through the process of respiration is vital for most biological processes. Now ...
### Japan's battery startups take the world beyond lithium ion ...
Aug 02, 2020·Now Japan's hopes to remain among the global heavyweights in a market expected to be worth more than 2.7 trillion yen (\$25 billion) by 2035 rest on the ability of its engineers.
### oxygen | Discovery, Symbol, Properties, Uses, & Facts ...
Oxygen, a colorless, odorless, tasteless gas essential to living organisms, being taken up by animals, which convert it to carbon dioxide; plants, in turn, utilize carbon dioxide as a source of carbon and return the oxygen to the atmosphere. Oxygen forms compounds by reaction with practically any other element.
### Exoplanets could have lots of oxygen but no life - Big Think
Apr 20, 2021·Under these conditions, the molten surface of a young exoplanet can freeze while the limited water supply is still found only as steam (vapor) in the atmosphere. This prevents oxygen …
### Exoplanets could have lots of oxygen but no life - Big Think
Apr 19, 2021·If an exoplanet houses life, it almost certainly will have gaseous oxygen. But a new study modeling the development of rocky planets identifies three scenarios in which oxygen can …
### It's Elemental - The Element Oxygen
Oxygen is the third most abundant element in the universe and makes up nearly 21% of the earth's atmosphere. Oxygen accounts for nearly half of the mass of the earth's crust, two thirds of the mass of the human body and nine tenths of the mass of water. Large amounts of oxygen can be extracted from liquefied air through a process known as ...
https://www.iacr.org/cryptodb/data/author.php?authorkey=10962
## CryptoDB
### Gaëtan Cassiers
#### Publications
2021 - TCHES
There exist many masking schemes to protect implementations of cryptographic operations against side-channel attacks. It is common practice to analyze the security of these schemes in the probing model, or its variant which takes into account physical effects such as glitches and transitions. Although both effects exist in practice and cause leakage, masking schemes implemented in hardware are often only analyzed for security against glitches. In this work, we fill this gap by proving sufficient conditions for the security of hardware masking schemes against transitions, leading to the design of new masking schemes and a proof of security for an existing masking scheme in the presence of transitions. Furthermore, we give similar results in the stronger model where the effects of glitches and transitions are combined.
2021 - CRYPTO
Proving the security of masked implementations in theoretical models that are relevant to practice and match the best known attacks of the side-channel literature is a notoriously hard problem. The random probing model is a good candidate to contribute to this challenge, due to its ability to capture the continuous nature of physical leakage (contrary to the threshold probing model), while also being convenient to manipulate in proofs and to automate with verification tools. Yet, despite recent progress in the design of masked circuits with good asymptotic security guarantees in this model, existing results still fall short when it comes to analyzing the security of concretely useful circuits under realistic noise levels and with a low number of shares. In this paper, we contribute to this issue by introducing a new composability notion, the Probe Distribution Table (PDT), and a new tool (called STRAPS, for the Sampled Testing of the RAndom Probing Security). Their combination allows us to significantly improve the tightness of existing analyses in the most practical (low noise, low number of shares) region of the design space. We illustrate these improvements by quantifying the random probing security of an AES S-box circuit, masked with the popular multiplication gadget of Ishai, Sahai and Wagner from Crypto 2003, with up to six shares.
2020 - TCHES
Higher-order masking countermeasures provide strong provable security against side-channel attacks at the cost of incurring significant overheads, which largely hinders its applicability. Previous works towards remedying cost mostly concentrated on "local" calculations, i.e., optimizing the cost of computation units such as a single AND gate or a field multiplication. This paper explores a complementary "global" approach, i.e., considering multiple operations in the masked domain as a batch and reducing randomness and computational cost via amortization. In particular, we focus on the amortization of $\ell$ parallel field multiplications for appropriate integer $\ell > 1$, and design a kit named packed multiplication for implementing such a batch. For $\ell+d\leq2^m$, when $\ell$ parallel multiplications over $\mathbb{F}_{2^{m}}$ with $d$-th order probing security are implemented, packed multiplication consumes $d^2+2\ell d + \ell$ bilinear multiplications and $2d^2 + d(d+1)/2$ random field variables, outperforming the state-of-the-art results with $O(\ell d^2)$ multiplications and $\ell \left \lfloor d^2/4\right \rfloor + \ell d$ randomness. To prove $d$-probing security for packed multiplications, we introduce some weaker security notions for multiple-inputs-multiple-outputs gadgets and use them as intermediate steps, which may be of independent interest. As parallel field multiplications exist almost everywhere in symmetric cryptography, lifting optimizations from "local" to "global" substantially enlarges the space of improvements. To demonstrate, we showcase the method on the AES Subbytes step, GCM and TET (a popular disk encryption). Notably, when $d=8$, our implementation of AES Subbytes in ARM Cortex M architecture achieves a gain of up to $33\%$ in total speeds and saves up to $68\%$ random bits over the state-of-the-art bitsliced implementation reported at ASIACRYPT 2018.
https://stefvanbuuren.name/AGD/
The AGD package implements various tools that aid in the analysis of growth data.
## Installation
The AGD package can be installed from CRAN as follows:
install.packages("AGD")

Alternatively, install the development version from GitHub:

install.packages("devtools")
devtools::install_github(repo = "stefvanbuuren/AGD")
## Minimal example
library(AGD)
# What is the SDS of a height of 115 cm at age 5 years
# relative to Dutch references?
# Calculate for boys and girls:
y2z(y = c(115, 115), x = 5, sex = c("M", "F"))
#> [1] 0.424 0.706
# What are the SDS of the IOTF BMI cut-off values for
# overweight (boys 2-18) relative to Dutch references?
cutoff <- c(
18.41, 17.89, 17.55, 17.42, 17.55, 17.92, 18.44, 19.10,
19.84, 20.55, 21.22, 21.91, 22.62, 23.29, 23.90, 24.46,
25.00)
age <- 2:18
z <- y2z(y = cutoff, x = 2:18, sex = "M", ref = nl4.bmi)
plot(age, z, type = "b", xlab = "Age (years)",
ylab = "SDS IOTF (on Dutch reference)")
https://ask.sagemath.org/question/62302/jacobi-theta-q-series/
# Jacobi theta q series
I'd like to express the classical Jacobi theta series in Sage:
• $\Theta_2(z) = \displaystyle{\sum_{n\in{\mathbb{Z} + \frac{1}{2}}} q^{\frac{1}{2}n^2}}$
• $\Theta_3(z) = \displaystyle{\sum_{n\in{\mathbb{Z}}} q^{\frac{1}{2}n^2}}$
• $\Theta_4(z) = \displaystyle{\sum_{n\in{\mathbb{Z}}} (-1)^nq^{\frac{1}{2}n^2}}$
where $q=e^{2 \pi i z}$. Preferably, these functions would play nicely with other q series such as
E4 = eisenstein_series_qexp(4,5, normalization = 'constant')
type(E4)
<class 'sage.rings.power_series_poly.PowerSeries_poly'>
I'd like to be able to take rational combinations of these theta functions and the Eisenstein series and analyze/compare Fourier coefficients in the q-series. However, it appears that working with non-integer powers of q may be problematic. For instance, if we start with the theta_qexp function (which is $\Theta_3(2z)$ from above), we see that
theta_qexp(10,'q',ZZ)
1 + 2*q + 2*q^4 + 2*q^9 + O(q^10)
But if we let f = theta_qexp(10,'q',ZZ) then f*q^(-1/2) throws an error, presumably because of the non-integer power.
edit retag close merge delete
@Daniel L : I tried to reformat the LaTeX code of your question, but the third definition notation is inconsistent with the first two. Could you clarify ?
( 2022-05-06 10:52:15 +0200 )edit
There is a $\Theta_4$ now in there.
( 2022-05-06 17:40:06 +0200 )edit
Sort by » oldest newest most voted
If I am correctly understanding the question, the programming issue is related to finding a common ring to work in for the following objects:
• power series in $t$, where $t$ is a nome for $\exp(\pi i\tau)$, and
• power series in $q$, where $q$ is a nome for $\exp(2\pi i\tau)$, using the modular functions already implemented in Sage for this nome.
Obviously, we should have the bridge $q=t^2$. Since $q$ can be algebraically obtained from $t$, but not conversely, we will work with $t$. So we work in the power series ring over $\Bbb Q$ in the variable $t$.
Here is a way to initialize the above objects, and work with them in a common world.
q_PREC = 8
t_PREC = 2*q_PREC
R.<t> = PowerSeriesRing(QQ, default_prec=t_PREC)
E4 = eisenstein_series_qexp(4, prec=q_PREC, var='q', normalization = 'constant')
RQ = E4.parent()
RQ.inject_variables() # this is defining the variable q used by default in E4
print(f'E4 = E4(q) is {E4}')
print(f'E4(t^2) is {E4(t^2)}')
Results:
Defining q
E4 = E4(q) is 1 + 240*q + 2160*q^2 + 6720*q^3 + 17520*q^4 + 30240*q^5 + 60480*q^6 + 82560*q^7 + O(q^8)
E4(t^2) is 1 + 240*t^2 + 2160*t^4 + 6720*t^6 + 17520*t^8 + 30240*t^10 + 60480*t^12 + 82560*t^14 + O(t^16)
I hope the way the programming strategy would work is now clear. Let us define in $\Bbb Q[[t]]$ (modulo $t^{2\cdot 8}$) the following objects:
E4t = E4(t^2) + O(t^t_PREC)
Theta3 = theta_qexp(q_PREC)(t^2) + O(t^t_PREC)
print(f'E4t * Theta3 is:\n{E4t * Theta3}')
g = E4t - Theta3
gdic = g.dict()
print(f'E4t - Theta3 is:\n{g}')
print(f'The coefficients of E4t - Theta3 are as in the dictionary:\n{gdic}')
print('The first coefficients are:')
for k in [0..10]:
print(f'Coefficient in degree {k} is {gdic.get(k, 0)}')
The results are:
E4t * Theta3 is:
1 + 242*t^2 + 2640*t^4 + 11040*t^6 + 30962*t^8 + 65760*t^10 + 125280*t^12 + 216960*t^14 + O(t^16)
E4t - Theta3 is:
238*t^2 + 2160*t^4 + 6720*t^6 + 17518*t^8 + 30240*t^10 + 60480*t^12 + 82560*t^14 + O(t^16)
The coefficients of E4t - Theta3 are as in the dictionary:
{2: 238, 4: 2160, 6: 6720, 8: 17518, 10: 30240, 12: 60480, 14: 82560}
The first coefficients are:
Coefficient in degree 0 is 0
Coefficient in degree 1 is 0
Coefficient in degree 2 is 238
Coefficient in degree 3 is 0
Coefficient in degree 4 is 2160
Coefficient in degree 5 is 0
Coefficient in degree 6 is 6720
Coefficient in degree 7 is 0
Coefficient in degree 8 is 17518
Coefficient in degree 9 is 0
Coefficient in degree 10 is 30240
The coefficients of a power series are not really obtained in a natural way. The method g.coefficients() shows them, but you have to guess the degrees. You can try [coeff for coeff in g] to have the coefficients quickly in a list.
sage: [coeff for coeff in g]
[0, 0, 238, 0, 2160, 0, 6720, 0, 17518, 0, 30240, 0, 60480, 0, 82560]
Another way is to use the dictionary gdic as above, since for sparse power series it may be useful. For instance:
sage: theta2_qexp(300).dict()
{1: 1, 9: 1, 25: 1, 49: 1, 81: 1, 121: 1, 169: 1, 225: 1, 289: 1}
sage: theta2_qexp(300)(t^2).dict()
{2: 1, 18: 1, 50: 1, 98: 1, 162: 1, 242: 1, 338: 1, 450: 1, 578: 1}
Note that for missing degrees, we have to provide the default value, zero. For instance:
sage: th2_dic = theta2_qexp(300)(t^2).dict()
sage: th2_dic.get(18)
1
sage: th2_dic.get(19)
sage: type(th2_dic.get(19))
<class 'NoneType'>
sage: th2_dic.get(19, 0)
0
sage: type(th2_dic.get(19, 0))
<class 'sage.rings.integer.Integer'>
I hope the above way to proceed works for the purpose, good luck in your research projects!
more
https://algebra-calculators.com/some-applications-of-trigonometry-class-10-maths-formulas/
# Some Applications of Trigonometry Class 10 Maths Formulas
Those looking for help with Some Applications of Trigonometry Class 10 Maths concepts can find all of it here, provided in a comprehensive manner. To make it easy for you, we have collected the Class 10 Some Applications of Trigonometry formulae list in one place. You can find formulas for all the topics within this chapter in detail and get a good grip on them. Revise the entire set of concepts in a smart way with the help of these Maths formulas for Class 10 Some Applications of Trigonometry.
## Maths Formulas for Class 10 Some Applications of Trigonometry
The list of important formulas for Class 10 Some Applications of Trigonometry is provided on this page, covering everything from basic to advanced concepts in this chapter. Make the most of these formulas, prepared by subject experts, to take your preparation to the next level, and use this formula sheet to solve problems effortlessly.
The height or length of an object or the distance between two distinct objects can be determined with the help of trigonometric ratios.
Line of Sight
When an observer looks from a point E (eye) at an object O then the straight line EO between the eye E and the object O is called the line of sight.
Horizontal
When an observer looks from a point E (eye) to another point Q which is horizontal to E, then the straight line, EQ between E and Q is called the horizontal line.
Angle of Elevation
When the eye is below the object, then the observer has to look up from the point E to the object O. The measure of this rotation (angle θ) from the horizontal line is called the angle of elevation.
Angle of Depression
When the eye is above the object, then the observer has to look down from the point E to the object. The horizontal line is now parallel to the ground. The measure of this rotation (angle θ) from the horizontal line is called the angle of depression.
How to convert the above figure into a right triangle:
Case I: Angle of Elevation is known
Draw OX perpendicular to EQ.
Now ∠OXE = 90°
ΔOXE is a right triangle, where
OE = hypotenuse
OX = opposite side (perpendicular)
EX = adjacent side (base)
Case II: Angle of Depression is known
(i) Draw OQ’parallel to EQ
(ii) Draw perpendicular EX on OQ’.
(iii) Now ∠QEO = ∠EOX = Interior alternate angles
ΔEXO is an rt. Δ. where
EO = hypotenuse
Sin θ = $$\frac { Perpendicular }{ Hypotenuse }$$
Cos θ = $$\frac { Base }{ Hypotenuse }$$
Tan θ = $$\frac { Perpendicular }{ Base }$$
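As a worked sketch of how these ratios are applied (the numbers below are my own illustrative example, not from the text): if the angle of elevation of the top of a tower is 30° from a point 20 m from its base, the height is base × tan θ.

```python
import math

theta = math.radians(30)      # angle of elevation
base = 20.0                   # horizontal distance from the tower, in metres
height = base * math.tan(theta)
print(round(height, 2))       # 11.55 m, i.e. 20/sqrt(3)
```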
https://math.stackexchange.com/questions/8347/two-var-function/8414
# Two Var Function
I hate min and max questions.
I have a function $$G(x,y)=\frac{1}{x}+\frac{4}{y}+\frac{9}{4-x-y}.$$
I need to prove that $G(x,y)$ has a minimum value of $9$.
I've partially derived them for $x$ and $y$ giving $$\frac{1}{x^2}+\frac{9}{(4-x-y)^2}=0\tag 1$$ $$\frac{4}{y^2}+\frac{9}{(4-x-y)^2}=0\tag 2.$$
This is where I get stuck; I'm not entirely sure how to continue. I need to find $x = ?$ and $y = ?$ but this simultaneous equation stumps me.
• Get the LCM of both terms in both equations, bring together, turn into a quadratic in x and y, eliminate... okay, where exactly are you stuck? – J. M. is a poor mathematician Oct 30 '10 at 12:34
• As it stands, your system has no solutions (a sum of two positive terms can't be zero). But that's because you made a sign error when computing the derivatives; you need to change the sign of the first term in both (1) and (2). Then note that you must have $1/x^2=4/y^2$, so that $y=\pm 2x$. From there it shouldn't be too hard, I hope! – Hans Lundmark Oct 30 '10 at 13:03
• What are possible values of x and y? e.g. G(2,3) = -7.166... < 9. – kennytm Oct 30 '10 at 13:16
As always, to find the maximum and minimum of a function, you have to calculate first the critical points, that is
$$\frac{\partial G(x,y)}{\partial x}=0$$ $$\frac{\partial G(x,y)}{\partial y}=0$$
In this case, the respective equations are
$$\frac{\partial G(x,y)}{\partial x}=-\frac{1}{x^{2}}+\frac{9}{(4-x-y)^{2}}$$ $$\frac{\partial G(x,y)}{\partial y}=-\frac{4}{y^{2}}+\frac{9}{(4-x-y)^{2}}$$
Simplifying, you should obtain $x=1-\displaystyle \frac{y}{4}$ and $\displaystyle \frac{5y}{2}=4-x$, which lead to the solution $(x,y)=\left(\displaystyle \frac{2}{3},\displaystyle \frac{4}{3}\right)$
Then you substitute those values in your original function $G(\frac{2}{3},\frac{4}{3})=9$.
As I said, such a point is just a critical point (which means, it could be a local minimum, a local maximum or a saddle point). In order to figure out, we have to use the Second partial derivative test.
You can see the details in the Wikipedia article. As you already know what you want to prove (that $(\frac{2}{3},\frac{4}{3})$ is a local minimum), we expect that $M(\frac{2}{3},\frac{4}{3})>0$ and $f_{xx}(\frac{2}{3},\frac{4}{3})>0$.
I think I'll leave it there. You can do the rest of the calculations.
By the way, the aforementioned procedure only locates local minima, local maxima and saddle points, not global extrema.
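A SymPy sketch that verifies the critical point and the second partial derivative test (an illustrative check, not part of the original answer):

```python
import sympy as sp

x, y = sp.symbols('x y', positive=True)
G = 1/x + 4/y + 9/(4 - x - y)

# Critical point: solve grad G = 0 in the region x, y > 0.
print(sp.solve([sp.diff(G, x), sp.diff(G, y)], [x, y], dict=True))
# [{x: 2/3, y: 4/3}]

p = {x: sp.Rational(2, 3), y: sp.Rational(4, 3)}
H = sp.hessian(G, (x, y)).subs(p)
print(H.det() > 0, H[0, 0] > 0)  # True True -> local minimum
print(G.subs(p))                 # 9
```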
You cannot prove that the minimum is 9, because the function certainly takes on lower values: for example $G=0$ along the parabola $(x,y)=(-2t+6t^2,8t+12t^2)$, and clearly $G\rightarrow 0$ as $x,y \rightarrow\infty$. What you have discovered is that there is only one LOCAL minimum, and it is at $(2/3, 4/3)$.
http://math.stackexchange.com/questions/296392/behaviour-of-fracx3-yx6-y2-around-origin
# Behaviour of $\frac{x^{3} y}{x^{6} + y^{2}}$ around origin
Lets study the limit
$$\lim_{(x,y)\to(0,0)} \frac{x^3 y}{x^6 + y^2}$$
If we look at the limit along any straight line eg $y = mx$ we find that the limit tends to $0$.
Studying the limit more closely, we test every curve $y = x^k$, where $k$ is a positive integer. This gives that for every $k\neq 3$, the limit tends to $0$; if $k=3$, the limit tends to $1/2$.
Now this shows that no matter what straight line or curve we take (except $y=x^3$), the limit tends to zero. I also tried a few other polynomials and they all tend to zero.
My assumption is the following:
The limit $$L = \lim_{(x,y)\to(0,0)} \frac{x^3 y}{x^6 + y^2}$$ Is equal to zero if $y$ is any polynomial except $y = m x^3\,,\ x \in \mathbb{R}$.
Is the claim true? If so can anyone help proving it?
– Jack Feb 6 '13 at 16:53
Got something from an answer below? – Did Feb 9 '13 at 10:14
It's not true. Take for example $y=x^3+x^4$. Then
$$f(x,y) = \frac{1+x}{2+2x+x^2}$$
and the limit along that curve will be $1/2$.
Indeed! But what curves then will tend to something different than $0$? – N3buchadnezzar Feb 6 '13 at 16:37
For curves $y=f(x)$, where $f(x)=ax^3+o(x^3)$, and $a\ne 0$, the limit is not equal to $0$. I do not know a full answer. – André Nicolas Feb 6 '13 at 16:57
The set of limit points is exactly $[-\frac12,\frac12]$. In particular, the limit at $(0,0)$ does not exist.
Every value in $[-\frac12,\frac12]$ is a limit point since, for every fixed $a$, $f(x,ax^3)=\frac{a}{a^2+1}$ for every nonzero $x$.
On the other hand, for every $(x,y)$, $2|x^3y|\leqslant x^6+y^2$ hence $|f(x,y)|\leqslant\frac12$ for every $(x,y)\ne(0,0)$.
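A quick numerical illustration of both facts (my own sketch, not from the original answer):

```python
# Along y = a*x^3 the value is the constant a/(a^2 + 1); along y = x^2 it tends to 0.
def f(x, y):
    return x**3 * y / (x**6 + y**2)

for x in [0.1, 0.01, 0.001]:
    print(f(x, x**3), f(x, x**2))   # first column stays 0.5, second column -> 0
```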
https://www.gradesaver.com/textbooks/math/algebra/algebra-1/chapter-8-polynomials-and-factoring-chapter-test-page-527/6
## Algebra 1
Simplify. Write each answer in standard form.
$(-7x^{3}+4x-6)+(6x^{3}+10x^{2}+3)$
Group like terms: $(-7+6)x^{3}+10x^{2}+4x+(-6+3)$
$=-x^{3}+10x^{2}+4x-3$
https://www.hpmuseum.org/forum/printthread.php?tid=10961 | modular exponentiation? - Printable Version +- HP Forums (https://www.hpmuseum.org/forum) +-- Forum: HP Calculators (and very old HP Computers) (/forum-3.html) +--- Forum: General Forum (/forum-4.html) +--- Thread: modular exponentiation? (/thread-10961.html) Pages: 1 2 3 modular exponentiation? - Bill Duncan - 06-23-2018 11:49 PM A colleague recently posed a question for hiring new programmers.. "What is the least significant 10 digits of the series: 1^1+2^2+3^3 .. 1000^1000" ? Fairly easy, and I wrote about my solutions here: billduncan.org. I flunked the test as my two solutions weren't the java code he was looking for.. It was interesting that "dc" (the command line "reverse polish calculator") had a modular exponentiation operator while the algebraic "bc" command didn't. (At one time, "bc" was a wrapper around "dc", which suggests that this was a recent addition.) I also got thinking about how I'd solve it on my calculators.. HP-48 wouldn't be a big deal, but it might be tricky on earlier calcs like the HP-41.. Thoughts? One other thing came to mind, is there a way to insert LaTeX code for formulaes in this forum similar to the way I did it in my blog post? RE: modular exponentiation? - Thomas Klemm - 06-24-2018 12:21 AM (06-23-2018 11:49 PM)Bill Duncan Wrote: One other thing came to mind, is there a way to insert LaTeX code for formulaes in this forum similar to the way I did it in my blog post? Yes. Just write it in escaped brackets: Code: $...$ $\LARGE\Bigg(\sum_{n=1}^{1000} n^{n} \bmod 10^{10}\Bigg) \bmod 10^{10}$ But you have a typo in your formula: $$1^{10}$$ should rather be $$10^{10}$$. RE: modular exponentiation? - Thomas Klemm - 06-24-2018 12:33 AM (06-23-2018 11:49 PM)Bill Duncan Wrote: I flunked the test as my two solutions weren't the java code he was looking for.. With my solution using Python I'd probably flunked as well: Code: >>> sum((n**n for n in range(1, 1001))) % 10**10 9110846700L RE: modular exponentiation? - Thomas Okken - 06-24-2018 12:48 AM Flunking the test, Free42 style: Code: 00 { 46-Byte Prgm } 01▸LBL "BD" 02 0 03 STO 00 04 1ᴇ3 05 STO 01 06▸LBL 01 07 RCL 01 08 STO 02 09 1 10▸LBL 02 11 RCL× 01 12 1ᴇ10 13 MOD 14 DSE 02 15 GTO 02 16 STO+ 00 17 DSE 01 18 GTO 01 19 RCL 00 20 1ᴇ10 21 MOD 22 END RE: modular exponentiation? - mfleming - 06-24-2018 02:46 AM APL $1e10|+/(\iota1000)*\iota1000$ RE: modular exponentiation? - mfleming - 06-24-2018 02:58 AM (06-24-2018 02:46 AM)mfleming Wrote: APL $1e10|+/(\iota1000)*\iota1000$ Or, generalized via an anonymous function $1e10\ \{\alpha|+/(\iota\omega)*\iota\omega\}\ 1000$ No, I didn't drop my keyboard on the floor RE: modular exponentiation? - Bill Duncan - 06-24-2018 03:20 AM (06-24-2018 12:21 AM)Thomas Klemm Wrote: (06-23-2018 11:49 PM)Bill Duncan Wrote: One other thing came to mind, is there a way to insert LaTeX code for formulaes in this forum similar to the way I did it in my blog post? Yes. Just write it in escaped brackets: Code: $...$ $\LARGE\Bigg(\sum_{n=1}^{1000} n^{n} \bmod 10^{10}\Bigg) \bmod 10^{10}$ But you have a typo in your formula: $$1^{10}$$ should rather be $$10^{10}$$. Indeed! Thank you!! RE: modular exponentiation? - Paul Dale - 06-24-2018 03:42 AM The WP 34S makes this easy, it has a modular exponentiation function in integer mode. Unfortunately, the summation function doesn't work in integer mode. 
Regardless, the program is fairly short: Code: 01: LBL A 02: BASE 10 03: STO I 04: Clx 05: LBL 00 06: RCL I 07: RCL I 08: # 10 09: 10^x 10: ^MOD 11: RCL+ Y 12: # 10 13: 10^x 14: MOD 15: DSZ I 16: GTO 00 17: END 1000 XEQ A -> 9110846700 Pauli RE: modular exponentiation? - J-F Garnier - 06-24-2018 08:41 AM The HP-71B can compute the least 9 significant digits easily (well at least in Emu71 -for the speed): Code: 10 S=0 @ M=10^9 20 FOR I=1 TO 999 25 X=1 30 FOR N=1 TO I 40 X=MOD(X*I,M) 50 NEXT N 60 S=MOD(S+X,M) 70 NEXT I 80 DISP S 110846700 J-F RE: modular exponentiation? - Didier Lachieze - 06-24-2018 09:18 AM On the HP Prime in CAS mode: $\LARGE irem( \sum_{n=1}^{1000} (powmod(n,n,10^{10})), 10^{10})$ or, if you want to copy/paste to the emulator: irem(Σ(powmod(n,n,10^10),n,1,1000),10^10) RE: modular exponentiation? - J-F Garnier - 06-24-2018 11:32 AM A slightly modified version of the HP-71B program, that provides the full requested 10 digits: Code: 10 S=0 @ M=10^10 20 FOR I=1 TO 999 25 X=1 @ K=MOD(I,10) 30 FOR N=1 TO I 40 X=MOD(MOD(X*(I-K),M)+X*K,M) 50 NEXT N 60 S=MOD(S+X,M) 70 NEXT I 80 DISP S 9110846700 J-F RE: modular exponentiation? - Valentin Albillo - 06-24-2018 09:08 PM . Hi, all: (06-23-2018 11:49 PM)Bill Duncan Wrote: A colleague recently posed a question for hiring new programmers.. "What is the least significant 10 digits of the series: 1^1+2^2+3^3 .. 1000^1000" ? [...] I flunked the test as my two solutions weren't the java code he was looking for.. A few comments: 1) Executing this command-line expression (not even a program) in interpreted BASIC t=0:m=10^10:for i=1 to 1000:t+=modpow(i,i,m):next i:print t@m will output 9110846700 in 1 millisecond. For the least 20 digits it outputs 67978[...]46700 in 4 ms, the least 100 digits (69769[...]46700) take 10 ms and the least 1000 digits (39747[...]46700) just 0.1 seconds. 2) I find somewhat lame that if he was looking for Java code he wouldn't tell so beforehand, or else if he actually specified that he was after a Java programmer then I don't get why people taking the test wouldn't produce exactly that or else leave to avoid wasting everybody's time. 3) Actually, I've been sometimes in charge of selecting programmers to hire and if I were to pose this same question to the people applying for the job and some of them would produce code which computed the integer powers N^N by performing N multiplications in a loop, both their code and their application would've been thrown to the garbage bin at once, for being so utterly inefficient. I not only expect correct code from people who want a job as professional programmers, I would also expect that the code is efficient and runs as fast as posssible. I take it for granted that such people were taught to compute powers using binary multiplication chains to dramatically minimize the number of multiplications needed, and failing to use the technique in this particular test would immediately disqualify them for the job. For comparative purposes, I ran a typical solution similar to the ones posted here counting the total number of multiplications computing each N^N by performing N multiplications in a loop (500,500 multiplications in all) or else by using binary multiplication chains (12,925 mults in all) The difference between performing more than half a million multiplications or less than 13 thousand means the code runs 38x faster and I wouldn't settle for less when hiring. V. . RE: modular exponentiation? 
- sasa - 06-24-2018 10:21 PM (06-24-2018 09:08 PM)Valentin Albillo Wrote: A few comments: I just prepared to comment similar, thus I will just add several other issues: 4. It is not clear what is asked for applicant - to be junior or senior programmer. It is a big difference. For junior Java programmer may be asked to know Java and many packages - thus BigInteger package is trivial approach to satisfy that requirement for this task. If searching for a skilled senior programmer this task formulation is certainly unclear and this may be taken as a challenge to make approach from ground, without any specialized library and take care to optimization etc. In that case, Java have several basic flaws in design and relevant here is lack of unsigned types. Thus, to make optimize approach from ground Valentine noted in point (3), this may not be that simple task... 5. Note that integer overflow means big problem, not only in Java with all his lack in design, but also in C/C++ and other languages. Program will usually continue to work, however result will be incorrect and unpredictable. 6. Recursion. Something used everywhere and without real needs. As well very good reason for rejection. 7. Brute force approach perhaps can be tolerant for a junior programmer, otherwise it is clear sign for rejection. However, many companies does not care too much about what is actually there and priority is loyality to the company and signed permission to work overtime and during weekends without extra paying. And actually many of them forces technical mediocrities. It is pointless to show concrete code and practice in some examples I have personally found in commercial software from someone earn \$70-100.000 or more per year. even in critical software (car industry, medical equipment...). RE: modular exponentiation? - brickviking - 06-25-2018 02:04 AM Man, am I glad I've never tried to apply for a professional programmer's job. I've never even heard of binary multiplication chains. I've certainly heard of powers-of-two multiplication by bit-shifting though. I don't know if this is the same thing, nor would I have any idea how to apply that to the OP. I'm a bit lost as to what modular exponentiation is, too. I guess I haven't been kicking around the maths courses long enough. (Post 249) RE: modular exponentiation? - Thomas Okken - 06-25-2018 02:44 AM (06-25-2018 02:04 AM)brickviking Wrote: Man, am I glad I've never tried to apply for a professional programmer's job. I've never even heard of binary multiplication chains. I've certainly heard of powers-of-two multiplication by bit-shifting though. I don't know if this is the same thing, nor would I have any idea how to apply that to the OP. It's not the same thing. The idea is that, for example, a^10 = (a^5)^2 = (a*a^4)^2 = (a*(a^2)^2)^2 meaning: you can compute integral powers by a sequence of multiplications and squares. This is more efficient than a sequence of multiplications; the repeated-squaring approach takes O(log n) operations, as opposed to n-1 multiplications for the simplistic approach. Is this the kind of knowledge that you would require a job applicant to have, where you would toss their application in the trash if they don't implement an integral power this way? I doubt it. I've never been asked this kind of question in a job interview (and I am a programmer, not a scientist or engineer who dabbles in programming). The only time in my life where knowing the repeated-squaring algorithm was useful to me was when I implemented Y^X in Free42. 
Your mileage may vary.

(06-25-2018 02:04 AM)brickviking Wrote: I'm a bit lost as to what modular exponentiation is, too. I guess I haven't been kicking around the maths courses long enough.

Taking something to the power of something, and then taking the remainder of that, modulo something else. When the exponents get large, this requires clever algorithms to get accurate results. Again, not something programmers tend to encounter. Mathematicians, maybe, but anyone else? Seems like a weird interview topic to me, for any job that isn't pure mathematics.

(06-25-2018 02:04 AM)brickviking Wrote: (Post 249)

I have to ask: why do you do that?

RE: modular exponentiation? - cyrille de brébisson - 06-25-2018 06:02 AM

Hello,

>I've never even heard of binary multiplication chains.
>I've certainly heard of powers-of-two multiplication by bit-shifting though.
>I don't know if this is the same thing

Yes, they are. When you do "powers-of-two multiplication by bit-shifting", you do things like:

Code:
res= 0
while a!=0
  if a is odd then res= res+b
  a= a shift right (divide by 2)
  b= b+b (or b shift left: multiply by 2)
end

Power-of-two power by bit shifting is:

Code:
res= 1  // start at the multiplicative identity
while a!=0
  if a is odd then res= res*b  // Note the + changed into a *
  a= a shift right (divide by 2)  // no changes there
  b= b*b  // Multiply by 2 changed into a square (the power equivalent of +)
end

So, same algo, different functions... Where this is interesting and applies to the original "question" is that breaking up the power loops allows you to add the modulo function in the inner loop and never end up with large numbers. The net result is that you will have to perform sum(ln(n)*1.5) multiplications and modulo calculations (=~8800) assuming that bits are odd with 50% probability... With some luck your CPU does have a built-in / or % function... The main issue is that you need to do calculations with numbers that can be as big as 10^20 (two 10-digit numbers being multiplied), which takes 67 bits... and this overflows a 64-bit register :-( So you will have to create longer precision arithmetic for the multiplication and modulo...

Cyrille

RE: modular exponentiation? - Maximilian Hohmann - 06-25-2018 12:30 PM

Hello!

(06-25-2018 02:04 AM)brickviking Wrote: Man, am I glad I've never tried to apply for a professional programmer's job. I've never even heard of binary multiplication chains. I've certainly heard of powers-of-two multiplication by bit-shifting though. I don't know if this is the same thing, nor would I have any idea how to apply that to the OP. I'm a bit lost as to what modular exponentiation is, too. I guess I haven't been kicking around the maths courses long enough.

You beat me with your answer because this is 100% of what I had in mind to write :-) Luckily they didn't ask about such things when I was contracted to write software for money some 30 years ago. The only thing they actually asked was "Is our offer good enough in terms of hourly pay and would it be possible that you start as early as next week?". Those were the days! Nobody ever criticised my waste of computer cycles by inefficient programming in the years to follow.

Regards
Max

RE: modular exponentiation? - Bill Duncan - 06-25-2018 02:49 PM

Hey everyone,

When did this forum get so serious? AFAIK, it's a place that we can share ideas, pass on a little knowledge, learn new things and most of all, have some fun!

To answer a few questions,
- I wasn't actually taking the test, so I didn't actually "flunk". I was trying to be funny..
- I believe my colleague actually did stipulate that the bigint library couldn't be used
- He was using it as one of many interview questions, and simply looking for the mental leap. He wasn't looking for the most efficient, just something that worked.

Yes, modular exponentiation can be done more efficiently. Used in cryptography etc. Some information here: Wikipedia: Modular exponentiation

There is also some code for different languages on this site, rosettacode.org

Recursion inefficient? Sure. Was it fun and did it work? Yes. Recursion is often not very efficient, agreed.. Some problems are way easier to design recursively however. My AWK sudoku solver I think is a good example of something far easier with a recursive design. Inefficient? Perhaps. Fast enough? Works for me. And, it was easy to translate to C for more speed. I wrote some articles about the process referenced here: awk and C sudoku solvers

So seriously, let's get back to having fun with the friendly exchange of ideas in the forum?

Thanks!

RE: modular exponentiation? - Thomas Okken - 06-25-2018 03:43 PM

(06-25-2018 02:49 PM)Bill Duncan Wrote: When did this forum get so serious? AFAIK, it's a place that we can share ideas, pass on a little knowledge, learn new things and most of all, have some fun! [...] So seriously, let's get back to having fun with the friendly exchange of ideas in the forum?

I honestly have no clue what you are complaining about. Sure, sometimes discussions get heated and even name-calling may happen... but I see no trace of anything like that in this thread. Where do you think we went wrong?

RE: modular exponentiation? - Bill Duncan - 06-25-2018 04:06 PM

(06-25-2018 03:43 PM)Thomas Okken Wrote: I honestly have no clue what you are complaining about. Sure, sometimes discussions get heated and even name-calling may happen... but I see no trace of anything like that in this thread. Where do you think we went wrong?

Sorry, maybe I'm being a little oversensitive with the recent incidents. I thought I detected some subtle undertones, but it was probably just my misinterpretation. Thanks.
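A minimal Python sketch of the square-and-multiply modular exponentiation idea discussed in the thread, applied to the original puzzle (the function name modpow is my own; Python's built-in pow(base, exp, mod) does the same thing):

```python
def modpow(base, exp, mod):
    # Square-and-multiply: O(log exp) steps, every product reduced mod `mod`.
    result = 1
    base %= mod
    while exp:
        if exp & 1:                    # low bit set: fold this power in
            result = result * base % mod
        base = base * base % mod       # square for the next bit
        exp >>= 1
    return result

m = 10**10
print(sum(modpow(n, n, m) for n in range(1, 1001)) % m)  # 9110846700
```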
https://ufdc.ufl.edu/UFE0044095/00001 | Citation
## Material Information
Title:
Performance of Greedy Scheduling Algorithm in Wireless Networks
Creator:
Li, Bo
Place of Publication:
[Gainesville, Fla.]
Florida
Publisher:
University of Florida
Publication Date:
Language:
english
Physical Description:
1 online resource (114 p.)
## Thesis/Dissertation Information
Degree:
Doctorate ( Ph.D.)
Degree Grantor:
University of Florida
Degree Disciplines:
Computer Engineering
Computer and Information Science and Engineering
Committee Chair:
Xia, Ye
Committee Co-Chair:
Thai, My Tra
Committee Members:
Chen, Shigang
Banerjee, Arunava
Dobra, Alin
Fang, Yuguang
8/11/2012
## Subjects
Subjects / Keywords:
Algorithms ( jstor )
Graph theory ( jstor )
Linear programming ( jstor )
Mathematical vectors ( jstor )
Matrices ( jstor )
Optimal solutions ( jstor )
Regional identity ( jstor )
Scheduling ( jstor )
Topology ( jstor )
Underestimates ( jstor )
Computer and Information Science and Engineering -- Dissertations, Academic -- UF
channel-fading -- interference -- local-pooling -- longest-queue-first -- stability -- wireless-network-scheduling
Genre:
bibliography ( marcgt )
theses ( marcgt )
government publication (state, provincial, terriorial, dependent) ( marcgt )
born-digital ( sobekcm )
Electronic Thesis or Dissertation
Computer Engineering thesis, Ph.D.
## Notes
Abstract:
One of the major challenges in wireless networking is how to optimize link scheduling decisions under interference constraints. Recently, a few algorithms have been introduced to address the problem. However, solving the problem to optimality for general wireless interference models typically relies on the solution of an NP-hard sub-problem. To meet the challenge, one stream of research currently focuses on finding simpler sub-optimal scheduling algorithms and on characterizing their performance. In the first piece of our work, we investigate the performance of a specific scheduling policy called Longest Queue First (LQF), which has gained significant recognition lately due to its simplicity and high efficiency in empirical studies. There has been a sequence of studies characterizing the guaranteed performance of the LQF schedule, culminating in the construction of the $\sigma$-local pooling concept by Joo et al. We refine the notion of $\sigma$-local pooling and use the refinement to capture a larger region of guaranteed performance. In the second piece of our work, we analyze the performance guarantee of the LQF algorithm in greater depth in order to further enlarge the known stability region. The contribution of this study is to describe three new achievable rate regions, which are larger than the previously known regions. In particular, the new regions include all the extreme points of the capacity region and are not convex in general. We also discover a counter-intuitive phenomenon in which increasing the arrival rate may sometimes help to stabilize the network. This phenomenon can be well explained using the theory developed in this study. In the third piece of our work, we study the performance of the LQF algorithm under a more practical wireless network model, where the channel fading effect is considered. Unlike the previously discussed network model, the wireless channel state is time-varying under the effect of channel fading. As a result, the corresponding interference relationship among links can change under different channel states. Moreover, a subset of links, determined by the current channel state, may be prohibited from transmitting data and hence be pre-excluded from the scheduling process. We adopt a more generic channel fading model than that studied by Reddy et al., so that variation of the underlying interference relationship is allowed. We derive a larger stability region $\Sigma^*(G) \Lambda$, compared to the existing result, where $\Sigma^*(G)$ is a diagonal matrix. We also propose an estimation algorithm for $\Sigma^*(G)$, which provides a performance lower bound for LQF under any given channel fading structure. ( en )
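Since the abstract leans on it throughout, here is the LQF policy itself in a few lines: greedily walk the links in decreasing queue-length order and activate each link that does not interfere with links already chosen. A minimal Python illustration (my own sketch with an interference graph given as adjacency sets, not code from the dissertation):

```python
def lqf_schedule(queues, conflicts):
    """Greedy Longest-Queue-First schedule for one time slot.

    queues:    dict mapping link -> current queue length
    conflicts: dict mapping link -> set of interfering links
    Returns the set of links activated in this slot.
    """
    scheduled = set()
    # Visit links with the longest queues first; skip empty queues.
    for link in sorted(queues, key=queues.get, reverse=True):
        if queues[link] > 0 and not (conflicts[link] & scheduled):
            scheduled.add(link)
    return scheduled

# Link 'a' interferes with both 'b' and 'c'; 'b' and 'c' are compatible.
q = {"a": 5, "b": 4, "c": 3}
g = {"a": {"b", "c"}, "b": {"a"}, "c": {"a"}}
assert lqf_schedule(q, g) == {"a"}  # 'a' has the longest queue and blocks the rest
```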
General Note:
In the series University of Florida Digital Collections.
General Note:
Includes vita.
Bibliography:
Includes bibliographical references.
Source of Description:
Description based on online resource; title from PDF title page.
Source of Description:
This bibliographic record is available under the Creative Commons CC0 public domain dedication. The University of Florida Libraries, as creator of this bibliographic record, has waived all rights to it worldwide under copyright law, including all related and neighboring rights, to the extent allowed by law.
Thesis:
Thesis (Ph.D.)--University of Florida, 2012.
Local:
Statement of Responsibility:
by Bo Li.
## Record Information
Source Institution:
UFRGP
Rights Management:
Copyright Li, Bo. Permission granted to the University of Florida to digitize, archive and distribute this item for non-profit research and educational purposes. Any reuse of this item in excess of fair use or other copyright exemptions requires permission of the copyright holder.
Resource Identifier:
857772222 ( OCLC )
Classification:
LD1780 2012 ( lcc ) | 2020-07-16 14:45:37 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5058619976043701, "perplexity": 3018.5119035756356}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-29/segments/1593657169226.65/warc/CC-MAIN-20200716122414-20200716152414-00044.warc.gz"} |
https://cs.stackexchange.com/questions/linked/9556?sort=newest&pagesize=50 | 32 views
### If A is an NP-complete problem and B is an NP-hard problem, and A is polynomial-time solvable, is B polynomial-time solvable?
If A is an NP-complete problem and B is an NP-hard problem, and A is polynomial-time solvable, is B polynomial-time solvable? On the contrary, if A is an NP-complete problem and B is an NP-...
741 views
### How exactly is the process of showing a problem to be NP-Complete a proof by contradiction?
The steps involved in proving that a problem is NP-Complete are fairly straightforward to follow, it's the logic behind why the proof is valid that's really throwing me for a loop. Okay so an easy one:...
20 views
### Understanding P, NP with an example decision problem
I was reading the definitions of p vs np in [this post] (What is the definition of P, NP, NP-complete and NP-hard?) and I was wondering about how to classify the example decision problem where you ...
29 views
### Is Knapsack-optimization problem NP-hard while Knapsack-search problem NP-complete?
I am learning Computational Complexity. Is Knapsack-optimization problem (find an arrangement to maximize the value) known to be NP-hard, while Knapsack-search problem (find an arrangement so that ...
37 views
### What is the difference between NP hard and NP complete?
What is the definition of P, NP, NP-complete and NP-hard? Here is a good answer, but it really doesn't answer my question. NP-hard: a problem A is NP-hard if for all B $\in$ NP, B is polynomial time ...
616 views
### Are there problems in NP that do not reduce in polynomial time to any problem in NP?
As the title says: are there problems in $\mathbf{NP}$ that do not reduce in polynomial time to any problem in $\mathbf{NP}$?
152 views
### determine Eulerian or Hamiltonian
I am a beginner in graph theory and just found this question in a book after completing a few topics, and I was wondering how you approach these questions. For Eulerian, I can say that the graph has ...
18 views
### Proving a problem is NP [duplicate]
I've seen in many textbooks if say we have a problem $Q$, we write a non-deterministic algorithm in polynomial time to solve problem $Q$, and then from that point it results that $Q\in NP$. Why is ...
43 views
### Confusion about P versus NP [duplicate]
I'm sure that in my following question my reasoning is extremely simplistic and flawed, but I think if someone answered this it would help me understand what the P vs NP conundrum is. So here is my ...
43 views
### What are the differences between NP-Complete and NP-Hard? [duplicate]
What are the differences between NP, NP-Complete and NP-Hard? I am aware of many resources all over the web. I'd like to read your explanations, and the reason is they might be different from what's ...
238 views
### How to prove NP-hardness from scratch?
I am working on a problem of whose complexity is unknown. By the nature of the problem, I cannot use long edges as I please, so 3SAT and variants are almost impossible to use. Finally, I have decided ...
545 views
### P/NP - Polynomial Reduction vs Certificate
I am learning about the P/NP problem right now, and I don't understand when to use polynomial reduction and when to use a certificate. How I understand polynomial reduction is that you can use it to ...
470 views
### What's the purpose of the non-deterministic Turing machine?
(*) Acronyms NTDM := non-deterministic Turing machine. TM := deterministic Turing machine. (*) Consider the following idea The NTDM is able to follow, in parallel, all paths of the tree of the ...
169 views
### Pseudo-Proof of Constrained Sudoku is co-np
The definition of CO-NP A decision problem X is a member of co-NP if and only if its complement X is in the complexity class NP. In other words, co-NP is the class of problems for which there ... | 2022-01-22 16:37:38 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8380079865455627, "perplexity": 664.7168027589098}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-05/segments/1642320303864.86/warc/CC-MAIN-20220122134127-20220122164127-00698.warc.gz"} |
https://mersenneforum.org/showpost.php?s=b5339e3bc3ed0166df4387202cb0cd39&p=119822&postcount=2 | View Single Post
2007-12-03, 18:03 #2
jasonp
Tribal Bullet
Oct 2004
110111001110₂ Posts
Quote:
Originally Posted by drido: I need to solve a linear system modulo a composite number. If this number is n = p*q where p and q are prime, I know I have to solve the system modulo p and modulo q and combine the solutions with the Chinese remainder theorem. But what if this number is a power of a prime (n = p^k) or a product of prime powers (n = p1^k1 * p2^k2)?
Use Hensel's lemma: an answer mod p uniquely specifies an answer mod p^k, if the latter exists. Repeat for each p^k, then use the CRT as before. The presence of powers in the CRT modulus doesn't matter (CRT only cares that the p^k are coprime to each other), though it requires finding inverses mod p^k, which also needs Hensel's lemma.
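A minimal Python sketch of the CRT recombination step described above, for pairwise-coprime prime-power moduli (the Hensel lifting itself depends on the particular system and is omitted here; the function name crt is my own):

```python
from math import prod

def crt(residues, moduli):
    """Combine x ≡ r_i (mod m_i), for pairwise-coprime m_i, into one x mod prod(m_i)."""
    n = prod(moduli)
    x = 0
    for r, m in zip(residues, moduli):
        q = n // m
        # pow(q, -1, m) is the modular inverse of q mod m (Python 3.8+).
        x += r * q * pow(q, -1, m)
    return x % n

# Example with prime-power moduli: x ≡ 2 (mod 9) and x ≡ 3 (mod 25) gives x = 128 (mod 225).
assert crt([2, 3], [9, 25]) == 128
```

 | 2021-01-25 15:14:24 | {"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8555823564529419, "perplexity": 791.7094025669703}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-04/segments/1610703581888.64/warc/CC-MAIN-20210125123120-20210125153120-00301.warc.gz"}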
https://www.gamedev.net/articles/business/business-and-law/everything-you-ever-wanted-to-know-about-authenticode-signing-r4043/ | # Everything You Ever Wanted to Know About Authenticode Signing

Business and Law • Posted By Brain

# Introduction

As part of releasing your game to the public, something which is often overlooked is code signing. Code signing is a cryptographic process whereby your game's executables and/or installer are marked as authentic, so that the person running the executable (or anyone else for that matter) can ensure that:

• The executable has not been changed since it was signed
• The executable was created on a specific date at a specific time
• The executable was signed by a known, trackable entity (company or individual) responsible for the code within

These give some definite advantages (as well as introducing some disadvantages) as shown below:

## Advantages of code signing

• Signing your executables provides traceability of your code, allowing anyone to see who is responsible for the program
• Signing adds authenticity, which makes your game and your company (if there is one) more reputable and trustworthy
• It will give positive weight to systems such as SmartScreen filter and many anti-malware programs, which are more permissive of signed executables than unsigned ones.

## Disadvantages of code signing

• There is an up-front cost involved in acquiring a certificate for code signing
• If you do not have the required forms of identification or business documentation, obtaining a certificate can be hard to impossible
• There is a learning curve to understanding how certificates work (which this article hopes to address)

# The steps involved in signing your code

To properly sign your code, you must follow several steps, which must be completed in a strict order. These steps are:

## Select a certificate authority

Before you can sign your program code, you first need to select a certificate authority. The cost of object code signing has come down massively in price over the past few years. You will need to search for a certificate authority that will provide you with a type of certificate known as an "object code certificate" or "authenticode certificate". Here are some possible choices; this list is by no means exhaustive and I encourage you to search for additional sources of certificates before parting with any money:

• StartSSL - You will need to pay for "StartSSL Verified" at $59.90 per year. Certificates last two years, after which they must be renewed.
• Comodo - This costs $119.95 per year; however, if you are a member of Tucows, this can be reduced to $75 per year simply by purchasing through Tucows.
• Verisign/Symantec - Traditionally the most expensive choice, but popular with big business. Starts at $795 per year.

Remember to shop around, as many resellers offer certificates at a much lower price through third parties; for example, as a business user you can get brand-name certificates at a much lower price via RapidSSL. Also remember that a lot of the time you are paying for brand names. All the certificates I have listed here are equally trusted by the Windows operating system, so there isn't much point in paying $795 per year for a certificate when one you pay $59.90 a year for will function identically.
Once the certificate authority has provided you with a link to download your certificate, you will then have in your possession one or more small encrypted files. You will either have (depending on the authority you selected) a separate .crt and .key file, or a .pfx (or p7k) file, which is the .crt and .key files combined into one. You should make sure that these files are backed up securely, as if you lose them you may have to pay for re-issue of your certificate, which can be costly. My advice is to move them immediately to a DVD-ROM and lock them away wherever you keep your paper driving license and home insurance, or whatever else holds value to you.
## Saving the certificate file
If your certificate authority has provided you with a .cer and .key file, I advise that before you continue, you convert it to a .pfx file, as it is easier to work with on Windows. There are several ways to convert your files, and your certificate authority might provide you with an online tool or a simple download of your certificate in .pfx form. If they do, I suggest you use this feature as it will be more straightforward. If they do not provide such a facility, you can use the openssl toolkit (a free open-source download you will need to install onto your PC) to convert your .cer and .key file into .pfx using the command line below:

openssl pkcs12 -export -out yourcert.pfx -inkey yourkey.key -in yourcert.cer

The program will prompt you for a password; as part of the process I strongly recommend you enter a strong one, as this will protect your certificate from misuse if it is obtained by any third party!

Once you have the .pfx file, simply double click it and Windows will prompt you to add it to your registry. You should mark the certificate as "not exportable", which will stop someone from simply extracting the certificate from your registry at a later date. Following through the wizard will prompt you for the password you set on the file; simply enter it, and continue clicking through the wizard accepting the defaults. Once complete, you will receive a message saying the certificate was successfully imported into your registry, which means you are now ready to sign executables!

Please remember that the certificate you have purchased is valid for signing files until its expiry date, so you only have to buy the certificate once every one or two years (or however long the certificate is valid for), and with this one purchase you can sign as many executables as you like, whenever you like. After this, the sky is literally the limit!
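If you want to double-check what ended up inside the .pfx before importing it, the openssl toolkit can also list the file's contents (a verification step I suggest here, not part of the original article; it will prompt for the password you set during export):

openssl pkcs12 -info -in yourcert.pfx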
# Signing your executables, and timestamping
We now finally have the correct configuration and the correct files to be able to sign our executables. It is important to note, however, that there is one important difference between signing an executable and putting an SSL certificate onto a website (or most other uses of security certificates): binary code may be timestamped.

What this means, in simple terms, is that the signed executable can still be considered valid even if your certificate has expired; you just wouldn't be able to sign any new files with an expired certificate. To prove my point, find any signed executable on your disk which is over three years old. The chances are that by now the certificate which was used to sign this file has expired (you can see this by right clicking on the file and choosing Properties, then the 'Security' tab), however if the file is timestamped, when you double click the file it will still be considered valid.

Timestamping is a process done automatically when you sign your file. It involves contacting a third-party server which counter-signs your file with a special value that references back to the certificate issuer's servers. This value can then be used to verify that the certificate was valid at the time of signing the file, rather than right now. Because of this, you should always use your certificate authority's own timestamp server, which you can easily find on Google.

Armed with this information, signing your code is quite straightforward:

"C:\Program Files (x86)\Windows Kits\8.0\bin\x64\signtool.exe" sign /d "Your games name" /tr http://www.startssl.com/timestamp /a path\to\your\executable.exe

In the command above we are using the signtool.exe binary, which comes with the Windows 8 development kit. There will likely be several copies of this executable on your disk, and any one of them will do fine for this task. We specify the "friendly name" of our program using the /d parameter, as shown above, and the /tr parameter specifies the timestamp server, as we discussed above. The command above can be used not only to sign executables, but also DLL files and OCX files, driver files, CLR bytecode, and just about any other type of Windows executable you can imagine.

Specifying the /a parameter tells signtool to use the first valid code signing certificate held within your registry to sign the file. If you followed this article to the letter, this is where your code signing certificate and key will currently reside. I store my code signing certificate here as it is generally a secure place to put it, where you don't risk accidentally putting it into your code repository or onto your network drives, encrypted or decrypted.

Now you have finished the process, you can test your executable by double clicking it, and if your executable requires elevation (which most install packages etc. do) then you will be presented with the friendly blue prompt.
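Once signed, you can also have signtool check its own work rather than relying on the elevation prompt (my own suggested verification, not from the original article; /pa selects the standard Authenticode verification policy and /v prints the certificate chain and timestamp details):

"C:\Program Files (x86)\Windows Kits\8.0\bin\x64\signtool.exe" verify /pa /v path\to\your\executable.exe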
# Summary
## Article Update Log
21 Apr 2015: Started work on article
7 May 2015: Initial release
## User Feedback
Thank you for the step by step instructions. Looks very easy to follow. I'm off looking for CA now.
You're welcome :) please let me know if anything is unclear so I can update the article. This is a pretty niche subject I think, but I'm hoping there are others who might suggest improvements :)
Thanks, very good and detailed description. I am also pure hobbyist game developer, and was interested in this step.
The price tag however makes it a no go for me unfortunately, as I give my games away for free.
Out of curiosity, do you intend to do something similar for web certificates (https)? As far as I could figure out, you wouldn't have to have a fully verified certificate in that case.
@Endurion, if you go to startssl.com you can register for free and get fully recognised server certificates and email certificates. I used these for years until I also needed to do code signing, which is where you start paying. You can find lots of tutorials on the Web about how to install that free certificate. Stay away from CAcert though, as their certificates, although free, are not properly recognised by Windows. Have fun!
Thanks! :)
Very nice article. Thanks for the hard work. :)
- Eck
Thank you. This was very informative and I learned a lot from it.
 | 2018-09-20 10:37:34 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.21002891659736633, "perplexity": 2677.0869589883114}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-39/segments/1537267156460.64/warc/CC-MAIN-20180920101233-20180920121633-00268.warc.gz"}
https://zbmath.org/?q=an:0918.46048 | # zbMATH — the first resource for mathematics
Derivations with a hereditary domain. II. (English) Zbl 0918.46048
Let $$A$$ be a complex Banach algebra, $$\text{Rad}_{\mathcal J}(A)$$ its Jacobson radical and $$\text{Rad}_{\mathcal B}(A)$$ its Baer radical, $$B$$ a subalgebra of $$A$$ and $$D: B\rightarrow A$$ a Jordan derivation. Assume $$\dim (\text{Rad}_{\mathcal J}(A)\cap \bigcap_{n=1}^{\infty}B^{n}) < \infty$$ and $$BAB\subset A.$$ The main theorem of this paper asserts that $$B(B\cap {\mathcal S}(D))B\subset \text{Rad}_{\mathcal B}(A),$$ where $${\mathcal S}(D)$$ denotes the separating subspace of $$D.$$ Applications of this result are given.
##### MSC:
46H40 Automatic continuity
Full Text: | 2021-12-05 22:29:39 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6805521845817566, "perplexity": 643.5050287449689}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-49/segments/1637964363226.68/warc/CC-MAIN-20211205221915-20211206011915-00463.warc.gz"} |
https://cstheory.stackexchange.com/tags/cr.crypto-security/hot | # Tag Info
## Hot answers tagged cr.crypto-security
Accepted
### Password hashing using NP complete problems
Unfortunately, this doesn't seem to work (see below for details), and it seems hard to find a way to make this kind of idea yield a provably secure scheme. The problem with your general idea You're ...
• 10.5k
### Complexity classes for proofs of knowledge
This is not an actual answer; I'm just sharing some results (which do not fit in one comment). Goldreich, Micali and Wigderson (J. ACM, 1991) proved that every language in NP has a zero-knowledge ...
• 16.4k
### PPAD and Quantum
Two answers that I learnt while writing a blog post about this question No: In black-box variants, quantum query/communication complexity offer the Grover quadratic speedup, but not more than that. ...
### Why does most cryptography depend on large prime number pairs, as opposed to other problems?
Boaz Barak addressed this in a blog post My takeaway from his post (roughly speaking) is that we only know how to design cryptographic primitives using computational problems that have some amount of ...
• 4,228
Accepted
### State of research on SHA-1 Collision Attacks
SHA-1 was SHattered by Stevens et al. They demonstrated that collisions in SHA-1 are practical. They give the first instance of a collision for SHA-1. It is an identical-prefix collision attack that ...
• 206
### Is there a fast algorithm to quickly evaluate $a^{b^c}$ mod $n$?
There are essentially only two algorithms that I'm aware of: Use repeated-squaring, along the lines you mentioned. Factor $n$ using a state-of-the-art algorithm, then use the Chinese remainder ...
• 10.5k
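A quick sketch of how the second approach in this excerpt plays out (my own illustration, not part of the answer): if $\gcd(a,n)=1$ and $n$'s factorization is known, the tower's exponent can be reduced modulo the Carmichael function $\lambda(n)$ before a single modular exponentiation:

```python
def tower_mod(a, b, c, n, lam):
    """Evaluate a**(b**c) mod n, assuming gcd(a, n) == 1 and lam == λ(n).

    λ(n) is computable once n's factorization is known, and a**λ(n) ≡ 1 (mod n).
    """
    e = pow(b, c, lam)   # reduce the huge exponent b**c modulo λ(n)
    return pow(a, e, n)  # one ordinary modular exponentiation

# Example: n = 143 = 11 * 13, so λ(143) = lcm(10, 12) = 60.
assert tower_mod(2, 7, 5, 143, 60) == pow(2, 7**5, 143)
```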
### PPAD and Quantum
I will attempt to elaborate a bit on why CHKPRR shows that $\mathsf{PPAD}$ is plausibly hard for quantum computers. At a high level, CHKPRR builds a distribution over end-of-line instances where ...
### Knot Recognition as a Proof of Work
If there is an Arthur-Merlin protocol for knottedness similar to the [GMW85] and [GS86] Arthur-Merlin protocols for Graph Non Isomorphism, then I believe such a cryptocurrency proof-of-work could be ...
• 842
### Candidates for One-Way Function
Here is a "canned" answer that might be useful, but has no cryptographic depth (hopefully we get answers with depth as well). What makes for a good candidate OWF? The naive answer tends to boil down ...
• 7,090
Accepted
### Learning with (Signed) Errors
(wow! after three years of time passing, this is now easy to answer. funny how that goes! --Daniel) This "Learning with (Signed) Errors" (LWSE) problem, as invented-and-stated above by me (three ...
• 5,973
Accepted
### Can any computational challenge be transformed to proof-of-work?
(Note: Andreas Björklund suggested a solution in the comments that I believe is better than the one described below. See http://eprint.iacr.org/2017/203, by Ball, Rosen, Sabin, and Vasudevan. In ...
### Is it possible to encrypt a CNF?
The application you mention is called "proof of useful work" in the literature, see for instance this article. You can use a fully homomorphic encryption scheme (where the plaintext is the CNF ...
• 1,551
Accepted
### Quantum Hardness of Approximating Lattice Problems
The answer to your question is the same as with many other such assumptions in cryptography: despite a lot of effort no one has found any substantially faster quantum algorithms for lattice problems. ...
• 4,493
Accepted
### Is it possible to encrypt quantum states under reasonable assumptions?
One can encrypt an n-qubit state using a 2n-bit classical secret key. The idea is to use the key to select a random Pauli operator, and apply that operator to the secret as an encryption. (The inverse ...
• 878
Accepted
### Cryptographic systems that don't leak linear combinations of encrypted bits
Yes, if the encryption algorithm achieves IND-CPA security (semantic security), this implies that an adversary cannot predict any linear combination of encrypted bits better than random guessing. ...
• 10.5k
### Why does most cryptography depend on large prime number pairs, as opposed to other problems?
All of what I am going to say is well-known (all the links are to Wikipedia), but here it goes: The approach used in RSA using pairs of primes can also be applied in a more general framework of ...
• 13.2k
Accepted
### Is a "complete" cipher possible?
Yes, you can use Levin universal search to construct a "universal one-way function" (e.g., these lecture notes). From this one-way function you can then construct symmetric-key encryption primitives (...
• 2,789
Accepted
### Why is the security of lattice cryptosystems not provable from $P \neq NP$?
To expand somewhat on Sasho Nikolov's comment... LWE is at least as hard as finding approximate solutions to SVP, but the approximation factors for which the reduction from SVP to LWE works are ...
• 5,973
Accepted
### Can entropicly secure encryption algorithms be used on low-entropy messages by adding noise
Here is the problem: if $M$ has low entropy (for example, if the attacker has side information that narrows $M$ down to just two possible messages), then conditioned on $M+K$, the key $K$ also has low ...
• 878
Accepted
### Is it possible to encrypt a CNF?
Feigenbaum in, Encrypting Problem Instances, proposes a definition (Def. 1) of encryption function for NP-complete problems which satisfies your requirements. She proves that the NP-complete problem ...
### What is the state of the art in online voting?
This question is probably too broad to be answerable here, because the answer depends on what kinds of security requirements you have, what the threat model is, and what assumptions we're willing to ...
• 10.5k
### Information-theoretic Diffie-Hellman
No, there is no information-theoretic analog that is secure against computationally-unbounded adversaries. To form an analog, we'd need an injection $\varphi$ that maps $x$ in fine representation to \$...
• 10.5k
### Why SHA-224 and SHA-256 use different initial values?
This is called domain separation: it is needed when we use the same algorithm for different output sizes. Separation is necessary because if I found two messages whose hash values (SHA-256) differ only in the last ...
• 51
Accepted
### Is it possible to MAC a quantum state with a classical key under reasonable assumption?
Howard Barnum, Claude Crepeau, Daniel Gottesman, Adam Smith, Alain Tapp. "Authentication of Quantum Messages", FOCS 2002. http://www.cse.psu.edu/~ads22/pubs/PS-CSAIL/BCGST02-focs-final.pdf As with ...
• 878
### Is a theoretically secure key exchange possible?
I believe you are talking about the existence of information-theoretically (unconditionally) secure key agreement schemes. You can prove that such schemes cannot be achieved with only authenticated ...
Accepted
### If I know pretty well '(a,b)', I know pretty well 'a', or 'b', or 'a xor b'
There are 4 possibilities; name them e1–e4: e1: neither matches; e2: only a matches; e3: only b matches; e4: both match. Now I restate what you want to prove: Suppose: ...
• 7,090
Accepted
### Q: Trusting program output from an untrusted machine
It is possible in standard cryptographic assumptions (like, existence of cryptographic hash functions), and proofs can be made non-interactive in a random oracle model. Modern zero knowledge proofs ...
### Candidates for One-Way Function
As for your last question, the are several candidates for combinatorial one-way functions. This paper by Kojevnikov and Nikolenko lists three combinatorial complete one-way functions that are based ...
### Is there a candidate for a post-quantum one-way group action?
Yes, there is an old proposal for this due to Couveignes, which was independently rediscovered by Rostovtsev and Stolbunov. In both cases, the set of elliptic curves with some common endomorphism ...
• 156 | 2022-08-14 15:29:36 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.4328460097312927, "perplexity": 1801.9374807266977}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-33/segments/1659882572043.2/warc/CC-MAIN-20220814143522-20220814173522-00199.warc.gz"} |
https://www.albert.io/ie/ap-statistics/confidence-level-and-dollar1dollar-sided-significance-testing | Free Version
Difficult
# Confidence Level and $1$-sided Significance Testing
APSTAT-X4JEUW
A one-sample $t$-test was used to conduct a test of the following hypotheses:
$${ H }_{ 0 }:\mu =28$$
$${ H }_{ a }:\mu <28$$
The $p$-value was $0.006$. A confidence interval for $\mu$ is to be constructed.
Of the following, which is the largest level of confidence for which the confidence interval will NOT contain $28$?
A. $95\%$
B. $96\%$
C. $97\%$
D. $98\%$
E. $99\%$
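A worked solution (my own addition; the original item shows only the choices): the test is one-sided, so the matching two-sided $p$-value is

$$2 \times 0.006 = 0.012.$$

Assuming the standard duality between two-sided confidence intervals and two-sided tests, a level-$C$ interval excludes $\mu = 28$ exactly when $1 - C \ge 0.012$, i.e. $C \le 98.8\%$. The largest listed level satisfying this is $98\%$, choice D.

 | 2017-02-28 10:10:05 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.4857306182384491, "perplexity": 865.8607663001794}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501174157.36/warc/CC-MAIN-20170219104614-00249-ip-10-171-10-108.ec2.internal.warc.gz"}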