url (string, 14 to 2.42k chars) | text (string, 100 to 1.02M chars) | date (string, 19 chars) | metadata (string, 1.06k to 1.1k chars)
---|---|---|---|
https://gilkalai.wordpress.com/2019/02/16/attila-pors-universality-result-for-tverberg-partitions/
|
Attila Por’s Universality Result for Tverberg Partitions
In this post I would like to tell you about three papers and three theorems. I am thankful to Moshe White and Imre Barany for helpful discussions.
a) Universality of vector sequences and universality of Tverberg partitions, by Attila Por;
Theorem (Por’s universality result, 2018): Every long enough sequence of points in general position in $\mathbb R^d$ contains a subsequence of length $n$ whose Tverberg partitions are exactly the so-called rainbow partitions.
b) Classifying unavoidable Tverberg partitions, by Boris Bukh, Po-Shen Loh, and Gabriel Nivasch
Theorem (Bukh, Loh, and Nivasch, 2017): Let $H$ be a tree-like $r$-uniform simple hypergraph with $d+1$ edges and $n=(d+1)(r-1)+1$ vertices. It is possible to associate to the vertices of each such hypergraph $H$ a set $X$ of $n$ points in $\mathbb R^d$ so that the Tverberg partitions of $X$ correspond precisely to rainbow colorings of the hypergraph $H$. Moreover, the number of rainbow colorings is $(r-1)!^d$. (Here, we consider two colorings as the same if they differ by a permutation of the colors.)
c) On Tverberg partitions, by Moshe White
Theorem (White, 2015): For any partition $a_1,a_2,\dots ,a_r$ with $1 \le a_i\le d+1$ of $n$, there exists a set $X \subset \mathbb R^d$ of $n$ points, such that every Tverberg partition of $X$ induces the same partition of $n$, given by the parts $a_1,\dots,a_r$. Moreover, the number of Tverberg partitions of $X$ is $(r-1)!^d$.
See the original abstracts for the papers at the end of the post.
Radon’s and Tverberg’s theorems and Sierksma’s conjecture
Recall the beautiful theorem of Tverberg: (We devoted two posts (I, II) to its background and proof.)
Tverberg Theorem (1965): Let $x_1,x_2,\dots, x_m$ be points in $\mathbb R^d$, $m \ge (r-1)(d+1)+1$. Then there is a partition $S_1,S_2,\dots, S_r$ of $\{1,2,\dots,m\}$ such that $\cap_{j=1}^r \mathrm{conv}(x_i: i \in S_j) \ne \emptyset$.
The (much easier) case $r=2$ of Tverberg’s theorem is Radon’s theorem.
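Radon's theorem is also easy to see computationally: any $d+2$ points in $\mathbb R^d$ admit an affine dependence, and splitting its coefficients by sign gives the two parts of a Radon partition. Here is a minimal Python sketch (my own illustration, not from the post; the names are mine) that computes such a partition and the common point:

```python
import numpy as np

def radon_partition(points):
    """Radon partition of d+2 points in R^d (the r = 2 case of Tverberg).

    Finds coefficients lam with sum(lam) = 0 and sum(lam_i * x_i) = 0,
    and splits the points by the sign of lam; the two convex hulls meet
    at the Radon point."""
    pts = np.asarray(points, dtype=float)          # shape (d+2, d)
    A = np.vstack([pts.T, np.ones(len(pts))])      # coordinates plus a row of ones
    _, _, vh = np.linalg.svd(A)                    # A has a nontrivial null space
    lam = vh[-1]
    pos = lam > 0
    radon_point = pts[pos].T @ lam[pos] / lam[pos].sum()
    return pts[pos], pts[~pos], radon_point

# Example: four points in the plane; (1,1) lies in the triangle on the other side.
part1, part2, point = radon_partition([[0, 0], [4, 0], [0, 4], [1, 1]])
print(part1, part2, point)
```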
We devoted a post to seven open problems related to Tverberg’s theorem, and one of them was:
Sierksma Conjecture: The number of Tverberg’s $r$-partitions of a set of $(r-1)(d+1)+1$ points in $R^d$ is at least $((r-1)!)^d$.
Gerard Sierksma’s construction with $(r-1)!^d$ Tverberg partitions is obtained by taking $(r-1)$ copies of each vertex of a simplex containing the origin in its interior, and adding the origin itself. A configuration of $(r-1)(d+1)+1$ points in $R^d$ with precisely $((r-1)!)^d$ Tverberg partitions into $r$ parts is called a Sierksma configuration.
White’s Theorem
In 2015 Moshe White proved the following theorem which was an open problem for many years. White’s construction was surprisingly simple.
Theorem 1 (White, 2015): For any partition $a_1,a_2,\dots ,a_r$ with $1 \le a_i\le d+1$ of $n$, there exists a set $X \subset \mathbb R^d$ of $n$ points, such that every Tverberg partition of $X$ induces the same partition of $n$, given by the parts $a_1,\dots,a_r$. Moreover, the number of Tverberg partitions of $X$ is $(r-1)!^d$.
Bukh, Loh, and Nivasch’s examples via staircase convexity.
Five tree-like simple hypergraphs that correspond to configurations of 11 points in 4-dimensional space.
Start with a tree-like hypergraph H of d+1 blocks of size r like the five examples in the Figure above. The intersection of every two blocks has at most one element. The union of all blocks has n=(d+1)(r-1)+1 elements.
A rainbow coloring of an $r$-uniform hypergraph $H$ is a coloring of the vertices of $H$ with $r$ colors so that the vertices of every edge are colored by all $r$ colors.
Theorem 2 (Bukh, Loh, and Nivasch): It is possible to associate to the vertices of each such hypergraph $H$ a set $X$ of $n$ points in $\mathbb R^d$ so that the Tverberg partitions of $X$ correspond precisely to rainbow colorings of the hypergraph $H$. Moreover, the number of rainbow colorings is $(r-1)!^d$. (Here, we consider two colorings as the same if they differ by a permutation of the colors.)
For a star-like hypergraph where all blocks have a vertex in common we get the original Sierksma example. (Example (d) above.) White’s examples are obtained by considering such hypergraphs where there exists an edge $A$ such that all edges have nonempty intersection with $A$. (Examples (c), (d), and (e) above.)
Rainbow colorings of our five examples
Tverberg’s partitions for stretched points on the moment curve
It is natural to consider $n$ points on the moment curve $x(t)=(t,t^2,\dots, t^d)$. It turns out that the set of Tverberg partitions for points on the moment curve depends on the precise location of the points. By stretched points on the moment curve I mean that you take the points $x(t_1), x(t_2), \dots, x(t_n)$ where $t_1 \ll t_2 \ll \dots \ll t_n$, namely $t_2$ is much, much larger than $t_1$, and $t_3$ is much, much, much, much larger than $t_2$, and so on. In this case, the configuration corresponds to a path $H$: you let the vertices be $\{1,2,\dots,n\}$ and the edges be the sets of the form $\{(k-1)(r-1)+1, (k-1)(r-1)+2,\dots , k(r-1)+1\}$. A stretched configuration of points on the moment curve has the property that every subset is also a stretched configuration of points on the moment curve.
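For small $d$ and $r$ one can verify the rainbow-coloring count for this path hypergraph by brute force. A short Python sketch (my own illustration, not from the post):

```python
from itertools import product
from math import factorial

def path_hypergraph(d, r):
    """Edges of the path-like hypergraph for stretched points on the moment curve:
    the k-th edge is {(k-1)(r-1)+1, ..., k(r-1)+1}, for k = 1..d+1."""
    return [set(range((k - 1) * (r - 1) + 1, k * (r - 1) + 2)) for k in range(1, d + 2)]

def count_rainbow_colorings(edges, n, r):
    """Colorings of {1..n} with r colors in which every edge sees all r colors,
    counted up to permutation of the colors."""
    raw = 0
    for coloring in product(range(r), repeat=n):
        if all(len({coloring[v - 1] for v in e}) == r for e in edges):
            raw += 1
    return raw // factorial(r)   # every rainbow coloring uses all r colors

d, r = 2, 3
n = (d + 1) * (r - 1) + 1
print(count_rainbow_colorings(path_hypergraph(d, r), n, r))   # 4
print(factorial(r - 1) ** d)                                  # (r-1)!^d = 4
```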
The importance of Tverberg’s partitions for stretched points on the moment curve was realized by Barany and Por, by Bukh, Loh, and Nivasch, and by Perles and Sidron (See their paper Tverberg Partitions of Points on the Moment Curve), and perhaps by others as well.
Por’s universality result
Por’s universality theorem asserts that in terms of Tverberg partitions every large enough configuration of points in general position in $R^d$ contains a configuration whose Tverberg partitions are those of a stretched configuration of $n$ points on the moment curve! Por’s universality result was conjectured independently by Bukh, Loh, and Nivasch, (and they gave some partial results) and by Por himself.
Theorem 3 (Por’s universality result, 2018): Every long enough sequence of points in $\mathbb R^d$ in general position contains a subsequence of length $n$ whose Tverberg partitions are exactly the so-called rainbow partitions.
Por actually proved an apparently stronger statement: we can find a subsequence $Y$ so that the conclusion holds not only for $Y$ but also for every subsequence $Z$ of $Y$.
Staircase Convexity
The work of Bukh, Loh, and Nivasch relied on an important method of “staircase convexity”. An earlier 2001 application of the method (where it was introduced) was for lower bounds on weak epsilon nets by Bukh, Matousek, and Nivasch (here are links to the paper, and to slides from a talk by Boris Bukh; see also this post and this one on the weak epsilon net problem). Roughly the idea is this: consider a stretched grid where the sequences of coordinates grow very, very fast. When you choose configurations of points in such a grid, questions regarding their convex hulls translate to purely combinatorial problems.
Stairconvex sets explained by Boris Bukh
Erdos Szekeres in the plane and higher dimensions
The planar case
Let ES(n) be the smallest integer such that any set of ES(n) points in the plane in general position contains n points in convex position. In their seminal 1935 paper, Erdős and Szekeres showed that ES(n) is finite.
The finiteness of ES(n) can be stated as follows: Given a sequence of N points in general position in the plane $x_1,x_2, \dots , x_N$, there is a subsequence $x_{i_1},x_{i_2}, \dots , x_{i_n}$ such that for every four indices $j<k<l<m$ taken from the subsequence, the line segments $[x_{i_j},x_{i_l}]$ and $[x_{i_k},x_{i_m}]$ intersect. With this statement, the Erdős–Szekeres theorem can be seen as identifying a universal set of points in terms of its Radon partitions (or equivalently in terms of its order type).
In high dimensions
In higher dimensions we can define $ES_d(n)$ and replace “in convex position” by “in cyclic position”. The finiteness of $ES_d(n)$ (with terrible bounds) follows easily from various Ramsey results. In a series of papers very good lower and upper bounds were obtained: Imre Barany, Jiri Matousek, Attila Por: Curves in R^d intersecting every hyperplane at most d+1 times; Marek Eliáš, Jiří Matoušek, Edgardo Roldán-Pensado, Zuzana Safernová: Lower bounds on geometric Ramsey functions; Marek Elias, Jiri Matousek: Higher-order Erdos–Szekeres theorems.
Por’s result
Por’s result can be seen as a far-reaching strengthening of the finiteness of $ES_d(n)$.
Further Discussion:
Higher order order types?
Can you base a higher-order notion of “order types” on Tverberg partitions?
The order type of a sequence of $n$ points affinely spanning $R^d$ is described by the vector of signs (0, 1, or -1) of the volumes of the simplices described by subsequences of length $d+1$. Equivalently, the order type can be described by the minimal Radon partitions of the points.
1. We can first ask if we can base a notion of higher order types on Tverberg’s partitions to $r$ parts where $r>2$.
2. Next we can ask for an associated notion of “higher order oriented matroids.” (Oriented matroids in the usual sense are abstract order types which coincide with Euclidean order types for every subsequence of $d+3$ points.)
3. A natural question regarding these “higher order types” is: if a sequence of points in strong general position is Tverberg-equivalent to stretched points on the moment curve, does the same apply to all of its subsequences?
Another way to consider “higher” order types is to enlarge the family: start with a family of points, add to it all Radon points of minimal Radon partitions, and consider the order type of the new configuration. (This operation can be repeated $r$ times.) See this paper of Michael Kallay on point sets which contain their Radon points.
Staircase convexity order types
Understanding order types of configurations of points on the stretched grids of Bukh et al. is a very interesting problem. It is interesting to understand such configurations that are not in general position as well. (In particular, which matroids are supported on the stretched grid?) Of course, the method may well have many more applications.
Fantastically strong forms of Sierksma’s conjecture
Is the following true: For every sequence $T$ of $n=(r-1)(d+1)+1$ points in $R^d$ there is a Sierksma’s configuration $S$ of $n$ points so that every Tverberg’s partition of $S$ is a Tverberg’s partition of $T$?
An even stronger version is:
Is it true that for every sequence $T$ of $(r-1)(d+1)+1$ points in $R^d$ there is a tree-like simple hypergraph $H$ so that all the rainbow colorings of $H$ correspond to Tverberg partitions of the sequence? If true, this would be a fantastically strong version of Sierksma’s conjecture.
Is the Erdős-Szekeres’ conjecture outrageous?
Erdős and Szekeres proved in 1935 that $ES(n)\le {{2n-4}\choose{n-2}}+1=4^{n-o(n)}$, and in 1960 they showed that $ES(n) \ge 2^{n-2}+1$ and conjectured this to be optimal. Despite the efforts of many researchers, for 81 years no improvement in the order of magnitude of the upper bound was made. A recent breakthrough result by Andrew Suk (here are links to the paper, and to our post discussing the result) asserts that $ES(n)=2^{n+o(n)}$. Some time ago I asked on MO a question on outrageous mathematical conjectures, and perhaps the conjecture that $ES(n) = 2^{n-2}+1$ on the nose is an example.
Original Abstracts
Universality of vector sequences and universality of Tverberg partitions, by Attila Por;
Classifying unavoidable Tverberg partitions, by Boris Bukh, Po-Shen Loh, Gabriel Nivasch
On Tverberg partitions, by Moshe White
This entry was posted in Combinatorics, Convexity. Bookmark the permalink.
6 Responses to Attila Por’s Universality Result for Tverberg Partitions
1. eppstein says:
(Reposted from a comment I left on Gil’s G+ post)
Staircase convexity itself may have been introduced in the Bukh et al paper that you mention, but the correspondence between the “lines” defining it (the boundaries of lower-left and lower-right quadrants) and straight lines can already be found in Middendorf, Matthias, and Pfeiffer, Frank. 1992. The max clique problem in classes of string-graphs. Discrete Math., 108(1–3), 365–372.
There’s also a chapter about the same staircase geometry in my recent book Forbidden Configurations in Discrete Geometry, which includes a polynomial time algorithm for recognizing (general position) order types of configurations of points on stretched grids.
2. dsp says:
I don’t understand precisely why the Erdos-Szekeres result is equivalent to the statement in terms of intersections of line segments between points in a general position set. Could you tell me more about this?
• Gil Kalai says:
To say that $m$ points in general position are in convex position is equivalent to the assertion that all Radon partitions of every four of the points are of type (2,2), namely the line segment spanned by two of the points crosses the line segment spanned by the other two.
If we order the points cyclically in their order as vertices of the convex hull, then the (2,2) Radon partitions are described by interlacing indices.
3. Gil Kalai says:
Let me mention that while the examples of Bukh, Loh, and Nivasch give more general cases of configurations meeting Sierksma’s bound, it is not clear if they actually extend White’s examples. (And it is not clear how to ask the question; this seems to require some higher-order-type definition.)
Another remark is that perhaps the paper by Perles and Sidron, which deals with higher notions of “general position”, can be the basis for higher notions of order types.
|
2020-03-29 07:11:58
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 91, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.812670886516571, "perplexity": 698.0989335153638}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-16/segments/1585370493818.32/warc/CC-MAIN-20200329045008-20200329075008-00064.warc.gz"}
|
http://medalplus.com/?p=1855
|
# AIO2015 Solution(English)
## Wet Chairs
As luck would have it, it has rained on the morning of the concert. To make matters worse, the staff did a very rushed job drying the seats! Now it is up to you to decide how to seat everyone.
The seats are arranged in a single long line in front of the stage. In particular there are chairs in the line, and each seat is either wet or dry.
However, all is not lost. Out of the N friends you are bringing to the concert K of them are happy to sit on a wet chair. The other N-K of your friends insist on sitting on a dry chair.
Since this concert is best enjoyed with friends, you would also like your group to be seated as close together as possible so that the distance between the leftmost person and rightmost person is as small as possible. Output the smallest distance possible between the leftmost and rightmost friend at the concert.
Your task is to write a program that outputs this smallest possible distance.
### Solution
We use binary search on the length. Then the question becomes:
Given a length L, and the number of wet seats we can allow, K, how do we check whether a window of that length works?
Let $S[i]$ = prefix sum of the chairs (counting $0$ for wet and $1$ for dry).
We enumerate $i$ as the start of the window of chairs; then we can use $S[i]$ to check whether that window contains enough dry chairs.
So this is an $O(n\log n)$ algorithm.
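A minimal Python sketch of this approach (the input format, function name, and distance convention are my assumptions, not taken from the official problem statement):

```python
def smallest_distance(chairs, N, K):
    """chairs: list of 0/1 flags (1 = dry); N friends, K of whom accept wet chairs.
    Returns the smallest possible distance between the leftmost and rightmost friend,
    or -1 if the group cannot be seated at all."""
    C = len(chairs)
    S = [0] * (C + 1)
    for i, c in enumerate(chairs):
        S[i + 1] = S[i] + c                          # prefix sums of dry chairs
    need_dry = max(N - K, 0)

    def ok(W):
        # Is there a window of W consecutive chairs with at least `need_dry` dry ones?
        if W < N or W > C:
            return False
        return any(S[i + W] - S[i] >= need_dry for i in range(C - W + 1))

    lo, hi, best = N, C, -1
    while lo <= hi:                                  # binary search on the window length
        mid = (lo + hi) // 2
        if ok(mid):
            best, hi = mid, mid - 1
        else:
            lo = mid + 1
    return best - 1 if best != -1 else -1

print(smallest_distance([1, 0, 1, 1, 0, 1], N=3, K=1))   # -> 2
```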
## Ruckus League
It's that time of year again! The Annual Ruckus World Championships is fast approaching and you are absolutely determined to see Australia victorious. As the name implies, the goal of the competition is to be as loud and obnoxious as possible. Players compete in teams of at least M people (there is no upper limit on team sizes). To make sure teams don't mix, team members must hold hands with each other.
Of course, you've found that there is nothing more obnoxious than spoilt kindergarteners, so you've rounded up N of the loudest ones you could find. You were hoping to send as many teams as possible, but some of the children have already started holding hands with their friends. Being as stubborn as they are, you are finding it rather difficult to convince them to let go of each other, or even to hold hands with anyone else.
Now for any other person, the story would end here; you'd have to send whatever teams their 'friendships' have forged. However, being the social engineering master you are, you've brought K lollipops to use to bribe the children. You can pass a lollipop to the hand of a child, which will make them let go of the hand they are holding.
Using your K lollipops, what is the largest number of teams with at least M children you can form?
### Solution
From the problem description, the hand-holding graph clearly consists only of rings (cycles) and chains (paths).
So we can use a greedy algorithm.
First, we can view the configuration as a collection of directed graphs.
Second, we can record $in[i]$, the number of edges pointing to vertex $i$, which gives us the starting points of the rings and chains.
Third, we use DFS to compute the size of each component.
Clearly, it is better to handle the chains before the rings.
Before solving a ring, we need to spend one lollipop to turn it into a chain. So the question becomes: for a chain with $S$ vertices, what is the maximum contribution we can get? Definitely $\lfloor S/M\rfloor-1$.
Note also that it is necessary to process components from large size to small size.
In the end, the algorithm is $O(n\log n)$.
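A small Python sketch of the greedy accounting described above (it assumes the hand-holding graph has already been decomposed into chains and cycles; that step, and all names, are mine):

```python
def max_teams(chain_sizes, cycle_sizes, M, K):
    """Greedy count of teams of size >= M obtainable with K lollipops."""
    free = sum(1 for s in chain_sizes if s >= M)             # a whole chain already forms a team
    extra = sum(s // M - 1 for s in chain_sizes if s >= M)   # each further split costs 1 lollipop
    extra += sum(s // M for s in cycle_sizes)                # a cycle costs 1 lollipop per team
    return free + min(K, extra)

# One chain of 7 children and one ring of 6, M = 3, K = 2 -> 1 free + 2 bought = 3 teams
print(max_teams([7], [6], M=3, K=2))
```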
## Snap Dragons III: Binary Snap
Have you ever heard of Melodramia, my friend? It is a land of forbidden forests and boundless swamps, of sprinting heroes and dashing heroines. And it is home to two dragons, Rose and Scarlet, who, despite their competitive streak, are the best of friends.
Rose and Scarlet love playing Binary Snap, a game for two players. The game is played with a deck of cards, each with a numeric label from 1 to N. There are two cards with each possible label, making 2N cards in total. The game goes as follows:
• Rose shuffles the cards and places them face down in front of Scarlet.
• Scarlet then chooses either the top card, or the second-from-top card from the deck and reveals it.
• Scarlet continues to do this until the deck is empty. If at any point the card she reveals has the same label as the previous card she revealed, the cards are a Dragon Pair, and whichever dragon shouts `Snap!' first gains a point.
After many millenia of playing, the dragons noticed that having more possible Dragon Pairs would often lead to a more exciting game. It is for this reason they have summoned you, the village computermancer, to write a program that reads in the order of cards in the shuffled deck and outputs the maximum number of Dragon Pairs that the dragons can find.
### Solution
We use $dp[i]$ to store the answer for the suffix of the sequence starting at position $i$.
Credit to Sam Zhang, who gave me this solution.
$dp[i]=\max\{\,dp[i+1],\ dp[nex[i]-1]+1+c(i+1,nex[i]-1)\,\}$
Here $c(i,j)$ is the contribution of the interval $[i,j]$. How do we get it? Use a segment tree.
If we divide $[i,j]$ into pieces, we just need to calculate $P(l)$, $P(r)$, and $P(last_{l}\text{ and } last_{r})$.
A segment tree can handle this in $O(n\log n)$.
So the whole algorithm is $O(n\log n)$.
## 1 comment on “AIO2015 Solution (English)”
1. Pingback: Backlog plan overview | A noob's code repository
|
2018-01-23 21:40:20
|
{"extraction_info": {"found_math": true, "script_math_tex": 19, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 19, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.270252525806427, "perplexity": 1284.2934296493718}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-05/segments/1516084892699.72/warc/CC-MAIN-20180123211127-20180123231127-00759.warc.gz"}
|
https://stats.stackexchange.com/questions/60521/shares-on-total-as-dependent-variable
|
# Shares on total as dependent variable
I would like to ask for your help with the following issue. I am trying to estimate the model where dependent variable is a share of total (e.g. share of the US economy in terms of GDP on total world GDP). I use the panel data for 27 cross sections (countries) and 8 time dimensions (years). By definition, for each year the values of dependent variables across countries sum up to one. The distribution of the dependent variable is highly positively skewed with lots of observations being close to zero (or even zeros).
Usually, authors propose a logistic transformation of the dependent variable, $$y^*=\ln(y/(1-y))$$ for $y$ being the original dependent variable. However, even after such a transformation the dependent variable remains highly positively skewed. The pooled OLS regression or random-effects panel regression delivers residuals that are not normally distributed and are again highly positively skewed. A possible option, I guess, is to go for a beta distribution for the dependent variable, but I am not sure whether that is appropriate for panel data and what type of model to use to estimate the regression (I would rather have a random-effects panel regression, as by the nature of the independent variables the fixed-effects model is not the preferred one).
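For reference, a minimal Python sketch of that transformation (the clipping constant is my own addition, since the question notes that some shares are exactly zero, where the raw transform is undefined):

```python
import numpy as np

def logit(y, eps=1e-6):
    """y* = ln(y / (1 - y)), with shares clipped away from 0 and 1."""
    y = np.clip(np.asarray(y, dtype=float), eps, 1 - eps)
    return np.log(y / (1 - y))

shares = np.array([0.0, 0.003, 0.02, 0.24])   # toy GDP shares for one year
print(logit(shares))
```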
I don't know about the panel aspect of it, but there's a generalization of the Beta distribution called the Dirichlet distribution where you could have all 27 values sum to one.
You could use it as your response variable in a random effect regression if you used the right software (e.g. it should be possible in either BUGS or STAN).
The other option, of course, is to model GDP directly rather than share of GDP, and calculating the percentages after-the-fact.
Good luck!
|
2020-09-19 06:51:06
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 1, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5571334362030029, "perplexity": 568.3432851228337}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-40/segments/1600400190270.10/warc/CC-MAIN-20200919044311-20200919074311-00704.warc.gz"}
|
http://mathoverflow.net/feeds/question/114744
|
2D visualization of sum of divisors using Cantor pairing - MathOverflow (most recent 30 from http://mathoverflow.net), http://mathoverflow.net/questions/114744/2d-visualization-of-sum-of-divisors-using-cantor-pairing

Question (joro, 2012-11-28): Related to Gerhard's question about ascii plots (http://mathoverflow.net/questions/77794/ascii-prime-plots-and-prime-rich-quadratic-polynomials). On the SeqFan mailing list it was suggested (http://list.seqfan.eu/pipermail/seqfan/2012-November/010494.html) to plot an integer sequence this way:

Let $F(x,y)= (x+y)(x+y+1)/2+y$ be the Cantor pairing function (https://en.wikipedia.org/wiki/Cantor_pairing_function). To plot an integer sequence $a(n)$, for a point $(x,y)$ compute $a(F(x,y))$ and assign a color to the integer, e.g. in grayscale smaller is darker; for RGB/HSV there are other choices for mapping to colors.

When $a(n)=\sigma_0(n)$, where $\sigma_0(n)$ is the number of divisors of $n$, the 2D plot shows some structure (hopefully not caused by visual artifacts).

Is there an explanation for the structure in the plot?

[Color plot of $\sigma_0(F(x,y))$, smaller is darker (grayscale is quite similar): http://s16.postimage.org/9enmfsbyt/cantorpairing_sigma_0.png]

When examining the integer values there are some large diagonals indeed.

Answer (Aaron Meyerowitz, 2012-11-29): Have you tried to find an explanation?

The diagonals correspond to numbers $F(x,x+j)$. Every eighth one has $F(x,x+8k)=2x(x+1)+4(8k^2+4xk+3k)$. Since these are all multiples of $4$, that is already a boost.

$F(x,x+1)=2(x+1)^2$. This is the case $q=0$ of $F(x,x-(q^2-1))=2(x-\binom{q}{2}+1-q)(x-\binom{q}{2}+1)$. So these are all pretty composite, and every $q$th member is a multiple of $2q^2$. Probably you can prove that there are no other diagonals which factor algebraically.

You will find the horizontal lines $F(\binom{j}{2}-1,y)$ worth examining.

Your image also shows possible anti-diagonals $F(x,k-x)$, but I will leave that for someone else to examine (I did not immediately see anything).

A few later comments: Along any line (the ones easily seen are horizontal, vertical and slope $\pm 1$) the values are periodic $\bmod p$. Certain dark lines can be explained by verifying that no member can be divisible by a small prime. I seem to recall that the lines $F(x,x-(q^2-3))$ contain no multiples of $2,3,5$ and in some cases no multiples of any prime under $30$. Some of this shows in the graphic and some not as much.
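For readers who want to reproduce the picture, here is a small Python sketch (my own, not from the thread); $\sigma_0$ is computed by trial division and the cell at the origin is skipped since $\sigma_0(0)$ is undefined:

```python
import numpy as np
import matplotlib.pyplot as plt

def divisor_count(n):
    """sigma_0(n): number of divisors of n, by trial division."""
    cnt, d = 0, 1
    while d * d <= n:
        if n % d == 0:
            cnt += 2 if d * d != n else 1
        d += 1
    return cnt

def cantor_pair(x, y):
    return (x + y) * (x + y + 1) // 2 + y

size = 200
img = np.zeros((size, size))
for y in range(size):
    for x in range(size):
        n = cantor_pair(x, y)
        img[y, x] = divisor_count(n) if n > 0 else 0   # skip sigma_0(0)

plt.imshow(img, cmap="gray", origin="lower")   # smaller values appear darker
plt.show()
```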
|
2013-05-22 02:49:49
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8235478401184082, "perplexity": 1383.7252286386063}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368701153213/warc/CC-MAIN-20130516104553-00085-ip-10-60-113-184.ec2.internal.warc.gz"}
|
https://iccl.inf.tu-dresden.de/web/LATPub535/en
|
# Positive Subsumption in Fuzzy EL with General t-norms
##### Stefan Borgwardt, Rafael Peñaloza
Positive Subsumption in Fuzzy EL with General t-norms
In Francesca Rossi, eds., Proceedings of the 23rd International Joint Conference on Artificial Intelligence (IJCAI'13), 789-795, 2013. AAAI Press
• Abstract
The Description Logic EL is used to formulate several large biomedical ontologies. Fuzzy extensions of EL can express the vagueness inherent in many biomedical concepts. We study the reasoning problem of deciding positive subsumption in fuzzy EL with semantics based on general t-norms. We show that the complexity of this problem depends on the specific t-norm chosen. More precisely, if the t-norm has zero divisors, then the problem is co-NP-hard; otherwise, it can be decided in polynomial time. We also show that the best subsumption degree cannot be computed in polynomial time if the t-norm contains the Łukasiewicz t-norm.
• Research Group: Automatentheorie (Automata Theory)
@inproceedings{ BoPe-IJCAI13,
title = {Positive Subsumption in Fuzzy $\mathcal{EL}$ with General t-norms},
|
2019-08-18 18:03:43
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7341594099998474, "perplexity": 3065.483786315237}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-35/segments/1566027313987.32/warc/CC-MAIN-20190818165510-20190818191510-00039.warc.gz"}
|
https://www.groundai.com/project/adabelief-optimizer-adapting-stepsizes-by-the-belief-in-observed-gradients/
|
## Abstract
## 1 Introduction
To solve the problems above, we propose “AdaBelief”, which can be easily modified from Adam. Denote the observed gradient at step $t$ as $g_t$ and its exponential moving average (EMA) as $m_t$. Denote the EMA of $g_t^2$ and $(g_t-m_t)^2$ as $v_t$ and $s_t$, respectively. $m_t$ is divided by $\sqrt{v_t}$ in Adam, while it is divided by $\sqrt{s_t}$ in AdaBelief. Intuitively, $1/\sqrt{s_t}$ is the “belief” in the observation: viewing $m_t$ as the prediction of the gradient, if $g_t$ deviates much from $m_t$, we have weak belief in $g_t$ and take a small step; if $g_t$ is close to the prediction $m_t$, we have a strong belief in $g_t$ and take a large step. We validate the performance of AdaBelief with extensive experiments. Our contributions can be summarized as:
• We propose AdaBelief, which can be easily modified from Adam without extra parameters. AdaBelief has three properties: (1) fast convergence as in adaptive gradient methods, (2) good generalization as in the SGD family, and (3) training stability in complex settings such as GAN.
• We theoretically analyze the convergence property of AdaBelief in both convex optimization and non-convex stochastic optimization.
• We validate the performance of AdaBelief with extensive experiments: AdaBelief achieves fast convergence as Adam and good generalization as SGD in image classification tasks on CIFAR and ImageNet; AdaBelief outperforms other methods in language modeling; in the training of a W-GAN arjovsky2017wasserstein, compared to a well-tuned Adam optimizer, AdaBelief significantly improves the quality of generated images, while several recent adaptive optimizers fail the training.
## 2 Methods
### 2.1 Details of AdaBelief Optimizer
Notations By the convention in kingma2014adam, we use the following notations:
• $f(\theta)\in\mathbb{R}$, $\theta\in\mathbb{R}^n$: $f$ is the loss function to minimize, $\theta$ is the parameter in $\mathbb{R}^n$
• $\prod_{\mathcal{F}}(y)$: projection of $y$ onto a convex feasible set $\mathcal{F}$
• $g_t$: the gradient at step $t$
• $m_t$: exponential moving average (EMA) of $g_t$
• $v_t$, $s_t$: $v_t$ is the EMA of $g_t^2$, $s_t$ is the EMA of $(g_t-m_t)^2$
• $\alpha$, $\epsilon$: $\alpha$ is the learning rate, default is $10^{-3}$; $\epsilon$ is a small number, typically set as $10^{-8}$
• $\beta_1$, $\beta_2$: smoothing parameters, typical values are $\beta_1=0.9$, $\beta_2=0.999$
• $\beta_{1t}$, $\beta_{2t}$ are the momentum for $m_t$ and $v_t$ (or $s_t$) respectively at step $t$, and are typically set as constant (e.g. $\beta_{1t}=\beta_1$, $\beta_{2t}=\beta_2$)
Algorithm 1 (Adam Optimizer): Initialize $\theta_0$, $m_0\leftarrow 0$, $v_0\leftarrow 0$, $t\leftarrow 0$. While not converged: $t\leftarrow t+1$; $g_t\leftarrow\nabla_\theta f_t(\theta_{t-1})$; $m_t\leftarrow\beta_1 m_{t-1}+(1-\beta_1)g_t$; $v_t\leftarrow\beta_2 v_{t-1}+(1-\beta_2)g_t^2$; Update $\theta_t\leftarrow\prod_{\mathcal{F},\sqrt{v_t}}\big(\theta_{t-1}-\alpha\, m_t/(\sqrt{v_t}+\epsilon)\big)$.
Algorithm 2 (AdaBelief Optimizer): Initialize $\theta_0$, $m_0\leftarrow 0$, $s_0\leftarrow 0$, $t\leftarrow 0$. While not converged: $t\leftarrow t+1$; $g_t\leftarrow\nabla_\theta f_t(\theta_{t-1})$; $m_t\leftarrow\beta_1 m_{t-1}+(1-\beta_1)g_t$; $s_t\leftarrow\beta_2 s_{t-1}+(1-\beta_2)(g_t-m_t)^2+\epsilon$; Update $\theta_t\leftarrow\prod_{\mathcal{F},\sqrt{s_t}}\big(\theta_{t-1}-\alpha\, m_t/(\sqrt{s_t}+\epsilon)\big)$.
Comparison with Adam Adam and AdaBelief are summarized in Algo. 1 and Algo. 2, where all operations are element-wise, with differences marked in blue. Note that no extra parameters are introduced in AdaBelief. For simplicity, we omit the bias correction step. A detailed version of AdaBelief is in Appendix A. Specifically, in Adam, the update direction is $m_t/\sqrt{v_t}$, where $v_t$ is the EMA of $g_t^2$; in AdaBelief, the update direction is $m_t/\sqrt{s_t}$, where $s_t$ is the EMA of $(g_t-m_t)^2$. Intuitively, viewing $m_t$ as the prediction of $g_t$, AdaBelief takes a large step when the observation $g_t$ is close to the prediction $m_t$, and a small step when the observation greatly deviates from the prediction.
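A minimal NumPy sketch of one update step of each rule, with bias correction as in the detailed version in Appendix A (an illustration only, not the authors' reference implementation):

```python
import numpy as np

def adam_step(theta, g, m, v, t, lr=1e-3, b1=0.9, b2=0.999, eps=1e-8):
    m = b1 * m + (1 - b1) * g                      # EMA of gradients
    v = b2 * v + (1 - b2) * g**2                   # EMA of squared gradients
    m_hat, v_hat = m / (1 - b1**t), v / (1 - b2**t)
    return theta - lr * m_hat / (np.sqrt(v_hat) + eps), m, v

def adabelief_step(theta, g, m, s, t, lr=1e-3, b1=0.9, b2=0.999, eps=1e-8):
    m = b1 * m + (1 - b1) * g                      # EMA of gradients (the "prediction")
    s = b2 * s + (1 - b2) * (g - m)**2 + eps       # EMA of squared deviation (the "belief")
    m_hat, s_hat = m / (1 - b1**t), s / (1 - b2**t)
    return theta - lr * m_hat / (np.sqrt(s_hat) + eps), m, s

theta, m, s = np.zeros(2), np.zeros(2), np.zeros(2)
theta, m, s = adabelief_step(theta, np.array([1.0, -1.0]), m, s, t=1)
```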
### 2.2 Intuitive explanation for benefits of AdaBelief
Note that we call $\alpha$ the “learning rate” and the magnitude of the per-coordinate update the “stepsize” for the $i$th parameter. With a 1D example in Fig. 1, we demonstrate that AdaBelief uses the curvature of loss functions to improve training as summarized in Table 1, with a detailed description below:
(1) In region 1 in Fig. 1, the loss function is flat, hence the gradient is close to 0. In this case, an ideal optimizer should take a large stepsize. The stepsize of SGD is proportional to the EMA of the gradient, hence is small in this case; while both Adam and AdaBelief take a large stepsize, because the denominator ($\sqrt{v_t}$ and $\sqrt{s_t}$) is a small value.
(2) In region 2, the algorithm oscillates in a “steep and narrow” valley, hence both the gradient $g_t$ and its deviation from the prediction $m_t$ are large. An ideal optimizer should decrease its stepsize, while SGD takes a large step (proportional to $m_t$). Adam and AdaBelief take a small step because the denominator ($\sqrt{v_t}$ and $\sqrt{s_t}$) is large.
(3) In region 3, we demonstrate AdaBelief’s advantage over Adam in the “large gradient, small curvature” case. In this case, $|g_t|$ and $v_t$ are large, but $|g_t-g_{t-1}|$ and $s_t$ are small; this could happen because of a small learning rate $\alpha$. In this case, an ideal optimizer should increase its stepsize. SGD uses a large stepsize; in Adam, the denominator $\sqrt{v_t}$ is large, hence the stepsize is small; in AdaBelief, the denominator $\sqrt{s_t}$ is small, hence the stepsize is large, as in an ideal optimizer.
To sum up, AdaBelief scales the update direction by the change in gradient, which is related to the Hessian. Therefore, AdaBelief considers curvature information and performs better than Adam.
AdaBelief considers the sign of gradient in denominator We show the advantages of AdaBelief with a 2D example in this section, which gives us more intuition for high dimensional cases. In Fig. 2, we consider the loss function $f(x,y)=|x|+|y|$. Note that in this simple problem, the gradient in each axis can only take the values $\pm 1$. Suppose the start point is near the $x$-axis, e.g. $y_0\approx 0$ and $x_0>0$. Optimizers will oscillate in the $y$ direction, and keep moving forward in the $x$ direction.
Suppose the algorithm runs for a long time ($t$ is large), so the bias of the EMA (the gap between the EMA and its expectation) is small:
$$m_t =\mathrm{EMA}(g_0,g_1,\dots,g_t)\approx \mathbb{E}(g_t),\qquad m_{t,x}\approx \mathbb{E}g_{t,x}=1,\qquad m_{t,y}\approx \mathbb{E}g_{t,y}=0 \qquad (2)$$
$$v_t =\mathrm{EMA}(g_0^2,g_1^2,\dots,g_t^2)\approx \mathbb{E}(g_t^2),\qquad v_{t,x}\approx \mathbb{E}g_{t,x}^2=1,\qquad v_{t,y}\approx \mathbb{E}g_{t,y}^2=1. \qquad (3)$$
| Step | 1 | 2 | 3 | 4 | 5 |
|---|---|---|---|---|---|
| $g_{t,x}$ | 1 | 1 | 1 | 1 | 1 |
| $g_{t,y}$ | -1 | 1 | -1 | 1 | -1 |
| Adam: $v_{t,x}$ | 1 | 1 | 1 | 1 | 1 |
| Adam: $v_{t,y}$ | 1 | 1 | 1 | 1 | 1 |
| AdaBelief: $s_{t,x}$ | 0 | 0 | 0 | 0 | 0 |
| AdaBelief: $s_{t,y}$ | 1 | 1 | 1 | 1 | 1 |
In practice, the bias correction step will further reduce the error between the EMA and its expectation if $g_t$ is a stationary process kingma2014adam. Note that:
$$s_t=\mathrm{EMA}\big((g_0-m_0)^2,\dots,(g_t-m_t)^2\big)\approx \mathbb{E}\big[(g_t-\mathbb{E}g_t)^2\big]=\mathrm{Var}\,g_t,\qquad s_{t,x}\approx 0,\qquad s_{t,y}\approx 1 \qquad (4)$$
An example of the analysis above is summarized in Fig. 2. From Eq. 3 and Eq. 4, note that in Adam, $v_{t,x}=v_{t,y}$; this is because the update of $v_t$ only uses the amplitude of $g_t$ and ignores its sign, hence the stepsize for the $x$ and the $y$ direction is the same. AdaBelief considers both the magnitude and the sign of $g_t$, and $s_{t,x}\approx 0$ while $s_{t,y}\approx 1$, hence it takes a large step in the $x$ direction and a small step in the $y$ direction, which matches the behaviour of an ideal optimizer.
Update direction in Adam is close to “sign descent” in low-variance case In this section, we demonstrate that when the gradient has low variance, the update direction in Adam is close to “sign descent”, hence deviates from the gradient. This is also mentioned in balles2017dissecting.
Under the following assumptions: (1) assume $g_t$ is drawn from a stationary distribution, hence after bias correction, $\mathbb{E}v_t=(\mathbb{E}g_t)^2+\mathrm{Var}\,g_t$. (2) low-noise assumption: assume $(\mathbb{E}g_t)^2\gg \mathrm{Var}\,g_t$, hence we have $\mathbb{E}g_t/\sqrt{\mathbb{E}v_t}\approx \mathrm{sign}(\mathbb{E}g_t)$. (3) low-bias assumption: assume $\beta_1^t$ ($\beta_1$ to the power of $t$) is small, hence $m_t$ as an estimator of $\mathbb{E}g_t$ has a small bias. Then
In this case, Adam behaves like a “sign descent”; in 2D cases the update direction is at $45^\circ$ to the axes, hence it deviates from the true gradient direction. The “sign update” effect might cause the generalization gap between adaptive methods and SGD (e.g. on ImageNet) bernstein2018signsgd; wilson2017marginal. For AdaBelief, when the variance of $g_t$ is the same for all coordinates, the update direction matches the gradient direction; when the variance is not uniform, AdaBelief takes a small (large) step when the variance is large (small).
Numerical experiments In this section, we validate intuitions in Sec. 2.2. Examples are shown in Fig. 3, and we refer readers to more examples in the supplementary videos for better visualization. In all examples, compared with SGD with momentum and Adam, AdaBelief reaches the optimal point at the fastest speed. Learning rate is for all optimizers. For all examples except Fig. 3(d), we set the parameters of AdaBelief to be the same as the default in Adam kingma2014adam, , and set momentum as 0.9 for SGD. For Fig. 3(d), to match the assumption in Sec. 2.2, we set for both Adam and AdaBelief, and set momentum as for SGD. Videos are available at 1. We summarize these experiments as follows:
1. Consider the loss function and a starting point near the axis. This setting corresponds to Fig. 2. Under the same setting, AdaBelief takes a large step in the direction, and a small step in the direction, validating our analysis. More examples such as are in the supplementary videos.
2. For an inseparable loss, AdaBelief outperforms other methods under the same setting.
3. For an inseparable loss, AdaBelief outperforms other methods under the same setting.
4. We set for Adam and AdaBelief, and set momentum as in SGD. This corresponds to settings of Eq. 5. For the loss , is a constant for a large region, hence . As mentioned in kingma2014adam, , hence a smaller decreases faster to 0. Adam behaves like a sign descent ( to the axis), while AdaBelief and SGD update in the direction of the gradient.
5. Optimization trajectory under default setting for the Beale beale1955minimizing function in 2D and 3D.
6. Optimization trajectory under default setting for the Rosenbrock rosenbrock1960automatic function.
Above cases occur frequently in deep learning Although the above cases are simple, they give hints about the local behavior of optimizers in deep learning, and we expect them to occur frequently, hence we expect AdaBelief to outperform Adam in general cases. Other works in the literature reddi2019convergence; luo2019adaptive claim advantages over Adam, but are typically substantiated with carefully-constructed examples. Note that most deep networks use the ReLU activation glorot2011deep, which behaves like an absolute value function as in Fig. 3(a); considering the interaction between neurons, most networks behave like the case in Fig. 3(b), and typically are ill-conditioned (the weights of some parameters are far larger than others) as in the figure. Considering a smooth loss function such as cross entropy or a smooth activation, this case is similar to Fig. 3(c). The case with Fig. 3(d) requires , and this typically occurs at the late stages of training, where the learning rate is decayed to a small value, and the network reaches a stable region.
### 2.3 Convergence analysis in convex and non-convex optimization
Similar to reddi2019convergence; luo2019adaptive; chen2018convergence, for simplicity, we omit the de-biasing step (analysis applicable to de-biased version). Proof for convergence in convex and non-convex cases is in the appendix.
Optimization problem For deterministic problems, the problem to be optimized is $\min_{\theta} f(\theta)$; for online optimization, the problem is $\min_{\theta}\sum_{t=1}^T f_t(\theta)$, where $f_t$ can be interpreted as the loss of the model with the chosen parameters in the $t$-th step.
###### Theorem 2.1.
(Convergence in convex optimization) Let and be the sequence obtained by AdaBelief, let , , . Let , where is a convex feasible set with bounded diameter . Assume is a convex function and (hence ) and . Denote the optimal point as . For generated with AdaBelief, we have the following bound on the regret:
###### Corollary 2.1.1.
Suppose in Theorem (2.1), then we have:
$$\sum_{t=1}^T\big[f_t(\theta_t)-f_t(\theta^*)\big]\le \frac{D_\infty^2\sqrt{T}}{2\alpha(1-\beta_1)}\sum_{i=1}^d s_{T,i}^{1/2}+\frac{(1+\beta_1)\alpha\sqrt{1+\log T}}{2\sqrt{c}\,(1-\beta_1)^3}\sum_{i=1}^d\big\|g_{1:T,i}^2\big\|_2+\frac{D_\infty^2\beta_1 G_\infty}{2(1-\beta_1)(1-\lambda)^2\alpha}$$
For the convex case, Theorem 2.1 implies the regret of AdaBelief is upper bounded by . Conditions for Corollary 2.1.1 can be relaxed to as in reddi2019convergence, which still generates regret. Similar to Theorem 4.1 in kingma2014adam and corollary 1 in reddi2019convergence, where the term exists, we have . Without further assumption, since as assumed in Theorem 2.1, and is constant. The literature kingma2014adam; reddi2019convergence; duchi2011adaptive exerts a stronger assumption that . Our assumption could be similar or weaker, because , then we get better regret than .
###### Theorem 2.2.
(Convergence for non-convex stochastic optimization) Under the assumptions:
• $f$ is differentiable; $\|\nabla f(x)-\nabla f(y)\|\le L\|x-y\|,\ \forall x,y$; $f$ is also lower bounded.
• The noisy gradient is unbiased, and has independent noise: $g_t=\nabla f(\theta_t)+\zeta_t$, $\mathbb{E}\zeta_t=0$, $\zeta_t\perp\zeta_j$ for $t\neq j$.
• At step $t$, the algorithm can access a bounded noisy gradient, and the true gradient is also bounded: $\|\nabla f(\theta_t)\|\le H$, $\|g_t\|\le H$, $\forall t$.
Assume $\min_{t,i} s_{t,i}\ge c>0$, and that the noise in the gradient has bounded variance, $\mathrm{Var}(g_t)\le\sigma^2$; then the proposed algorithm satisfies:
where, as in chen2018convergence, $C_1, C_2, C_3$ are constants independent of $d$ and $T$, and $C_4$ is a constant independent of $T$.
###### Corollary 2.2.1.
If and assumptions for Theorem 2.2 are satisfied, we have:
$$\frac{1}{T}\sum_{t=1}^T\mathbb{E}\big[\alpha_t^2\|\nabla f(\theta_t)\|^2\big]\le \frac{1}{T}\frac{1}{\frac{1}{H}-\frac{C_1}{c}}\Big[\frac{C_1\alpha^2\sigma^2}{c}(1+\log T)+C_2\frac{d\alpha}{\sqrt{c}}+C_3\frac{d\alpha^2}{c}+C_4\Big]$$
Theorem 2.2 implies the convergence rate for AdaBelief in the non-convex case is $O(\log T/\sqrt{T})$, which is similar to Adam-type optimizers reddi2019convergence; chen2018convergence. Note that regret bounds are derived in the worst possible case, while empirically AdaBelief outperforms Adam mainly because the cases in Sec. 2.2 occur more frequently. It is possible that the above bounds are loose; we will try to derive a tighter bound in the future.
## 3 Experiments
We performed extensive comparisons with other optimizers, including SGD sutskever2013importance, AdaBound luo2019adaptive, Yogi zaheer2018adaptive, Adam kingma2014adam, MSVAG balles2017dissecting, RAdam liu2019variance, Fromage bernstein2020distance and AdamW loshchilov2017decoupled. The experiments include: (a) image classification on Cifar dataset krizhevsky2009learning with VGG simonyan2014very, ResNet he2016deep and DenseNet huang2017densely, and image recognition with ResNet on ImageNet deng2009imagenet; (b) language modeling with LSTM ma2015long on Penn TreeBank dataset marcus1993building; (c) wasserstein-GAN (WGAN) arjovsky2017wasserstein on Cifar10 dataset. We emphasize (c) because prior work focuses on convergence and accuracy, yet neglects training stability.
Hyperparameter tuning We performed a careful hyperparameter tuning in experiments. On image classification and language modeling we use the following:
SGD, Fromage: We set the momentum as , which is the default for many networks such as ResNet he2016deep and DenseNethuang2017densely. We search learning rate among .
Adam, Yogi, RAdam, MSVAG, AdaBound: We search for optimal among , search for as in SGD, and set other parameters as their own default values in the literature.
AdamW: We use the same parameter searching scheme as Adam. For other optimizers, we set the weight decay as ; for AdamW, since the optimal weight decay is typically larger loshchilov2017decoupled, we search weight decay among .
For the training of a GAN, we set for AdaBelief; for other methods, we search for among , and search for among . We set learning rate as for all methods. Note that the recommended parameters for Adam radford2015unsupervised and for RMSProp salimans2016improved are within the search range.
CNNs on image classification We experiment with VGG11, ResNet34 and DenseNet121 on Cifar10 and Cifar100 dataset. We use the official implementation of AdaBound, hence achieved an exact replication of luo2019adaptive. For each optimizer, we search for the optimal hyperparameters, and report the mean and standard deviation of test-set accuracy (under optimal hyperparameters) for 3 runs with random initialization. As Fig. 4 shows, AdaBelief achieves fast convergence as in adaptive methods such as Adam while achieving better accuracy than SGD and other methods.
We then train a ResNet18 on ImageNet, and report the accuracy on the validation set in Table. 2. Due to the heavy computational burden, we could not perform an extensive hyperparameter search; instead, we report the result of AdaBelief with the default parameters of Adam () and decoupled weight decay as in liu2019variance; loshchilov2017decoupled; for other optimizers, we report the best result in the literature. AdaBelief outperforms other adaptive methods and achieves comparable accuracy to SGD (70.08 v.s. 70.23), which closes the generalization gap between adaptive methods and SGD. Experiments validate the fast convergence and good generalization performance of AdaBelief.
LSTM on language modeling We experiment with LSTM on the Penn TreeBank dataset marcus1993building, and report the perplexity (lower is better) on the test set in Fig. 5. We report the mean and standard deviation across 3 runs. For both 2-layer and 3-layer LSTM models, AdaBelief achieves the lowest perplexity, validating its fast convergence as in adaptive methods and good accuracy. For the 1-layer model, the performance of AdaBelief is close to other optimizers.
Generative adversarial networks Stability of optimizers is important in practice such as training of GANs, yet recently proposed optimizers often lack experimental validations. The training of a GAN alternates between generator and discriminator in a mini-max game, and is typically unstable goodfellow2014generative; SGD often generates mode collapse, and adaptive methods such as Adam and RMSProp are recommended in practice goodfellow2016nips; salimans2016improved; gulrajani2017improved. Therefore, training of GANs is a good test for the stability of optimizers.
We experiment with one of the most widely used models, the Wasserstein-GAN (WGAN) arjovsky2017wasserstein and the improved version with gradient penalty (WGAN-GP) salimans2016improved. Using each optimizer, we train the model for 100 epochs, generate 64,000 fake images from noise, and compute the Frechet Inception Distance (FID) heusel2017gans between the fake images and real dataset (60,000 real images). FID score captures both the quality and diversity of generated images and is widely used to assess generative models (lower FID is better). For each optimizer, under its optimal hyperparameter settings, we perform 5 runs of experiments, and report the results in Fig. 6 and Fig. 7. AdaBelief significantly outperforms other optimizers, and achieves the lowest FID score.
## 4 Related works
Besides first-order methods, second-order methods (e.g. Newton’s method boyd2004convex, Quasi-Newton method and Gauss-Newton method wedderburn1974quasi; schraudolph2002fast; wedderburn1974quasi, L-BFGS nocedal1980updating, Natural-Gradient amari1998natural; pascanu2013revisiting, Conjugate-Gradient hestenes1952methods) are widely used in conventional optimization. Hessian-free optimization (HFO) martens2010deep uses second-order methods to train neural networks. Second-order methods typically use curvature information and are invariant to scaling battiti1992first but have heavy computational burden, and hence are not widely used in deep learning.
## 5 Conclusion
We propose the AdaBelief optimizer, which adaptively scales the stepsize by the difference between predicted gradient and observed gradient. To our knowledge, AdaBelief is the first optimizer to achieve three goals simultaneously: fast convergence as in adaptive methods, good generalization as in SGD, and training stability in complex settings such as GANs. Furthermore, Adabelief has the same parameters as Adam, hence is easy to tune. We validate the benefits of AdaBelief with intuitive examples, theoretical convergence analysis in both convex and non-convex cases, and extensive experiments on real-world datasets.
Optimization is at the core of modern machine learning, and numerous efforts have been put into it. To our knowledge, AdaBelief is the first optimizer to achieve fast speed, good generalization and training stability. AdaBelief can be used for the training of all models whose parameter gradients can be numerically estimated, hence it can boost the development and application of deep learning models; yet this work mainly focuses on the theory part, and the social impact is mainly determined by each application rather than by the optimizer.
## A. Detailed Algorithm of AdaBelief
Notations By the convention in kingma2014adam, we use the following notations:
• $f(\theta)\in\mathbb{R}$, $\theta\in\mathbb{R}^n$: $f$ is the loss function to minimize, $\theta$ is the parameter in $\mathbb{R}^n$
• $g_t$: the gradient at step $t$
• $\alpha$, $\epsilon$: $\alpha$ is the learning rate, default is $10^{-3}$; $\epsilon$ is a small number, typically set as $10^{-8}$
• $\beta_1$, $\beta_2$: smoothing parameters, typical values are $\beta_1=0.9$, $\beta_2=0.999$
• $m_t$: exponential moving average (EMA) of $g_t$
• $v_t$, $s_t$: $v_t$ is the EMA of $g_t^2$, $s_t$ is the EMA of $(g_t-m_t)^2$
## B. Convergence analysis in convex online learning case (Theorem 2.1 in main paper)
For ease of notation, we absorb $\epsilon$ into $s_t$. Equivalently, $s_t\ge\epsilon>0$ for all $t$. For simplicity, we omit the debiasing step in the theoretical analysis as in reddi2019convergence. Our analysis can be applied to the de-biased version as well.
###### Lemma .1.
mcmahan2010adaptive For any $Q\in S_+^d$ and convex feasible set $\mathcal{F}\subset\mathbb{R}^d$, suppose $u_1=\min_{x\in\mathcal{F}}\|Q^{1/2}(x-z_1)\|$ and $u_2=\min_{x\in\mathcal{F}}\|Q^{1/2}(x-z_2)\|$, then we have $\|Q^{1/2}(u_1-u_2)\|\le\|Q^{1/2}(z_1-z_2)\|$.
###### Theorem .2.
Let and be the sequence obtained by the proposed algorithm, let , , . Let , where is a convex feasible set with bounded diameter . Assume is a convex function and (hence ) and . Denote the optimal point as . For generated with Algorithm 3, we have the following bound on the regret:
$$\begin{aligned}\sum_{t=1}^T f_t(\theta_t)-f_t(\theta^*) &\le \frac{D_\infty^2\sqrt{T}}{2\alpha(1-\beta_1)}\sum_{i=1}^d s_{T,i}^{1/2}+\frac{(1+\beta_1)\alpha\sqrt{1+\log T}}{2\sqrt{c}\,(1-\beta_1)^3}\sum_{i=1}^d\big\|g_{1:T,i}^2\big\|_2\\ &\quad+\frac{D_\infty^2}{2(1-\beta_1)}\sum_{t=1}^T\sum_{i=1}^d\frac{\beta_{1t}\,s_{t,i}^{1/2}}{\alpha_t}\end{aligned}$$
Proof:
$$\theta_{t+1}=\prod_{\mathcal{F},\sqrt{s_t}}\big(\theta_t-\alpha_t s_t^{-1/2}m_t\big)=\operatorname*{arg\,min}_{\theta\in\mathcal{F}}\big\|s_t^{1/4}\big[\theta-\big(\theta_t-\alpha_t s_t^{-1/2}m_t\big)\big]\big\|$$
Note that $\prod_{\mathcal{F},\sqrt{s_t}}(\theta^*)=\theta^*$ since $\theta^*\in\mathcal{F}$. Use $\theta_{t,i}$ and $\theta^*_{i}$ to denote the $i$th dimension of $\theta_t$ and $\theta^*$ respectively. From Lemma (.1), using $u_1=\theta_{t+1}$ and $u_2=\theta^*$, we have:
$$\begin{aligned}\big\|s_t^{1/4}(\theta_{t+1}-\theta^*)\big\|^2 &\le \big\|s_t^{1/4}\big(\theta_t-\alpha_t s_t^{-1/2}m_t-\theta^*\big)\big\|^2\\ &=\big\|s_t^{1/4}(\theta_t-\theta^*)\big\|^2+\alpha_t^2\big\|s_t^{-1/4}m_t\big\|^2-2\alpha_t\langle m_t,\theta_t-\theta^*\rangle\\ &=\big\|s_t^{1/4}(\theta_t-\theta^*)\big\|^2+\alpha_t^2\big\|s_t^{-1/4}m_t\big\|^2-2\alpha_t\big\langle \beta_{1t}m_{t-1}+(1-\beta_{1t})g_t,\ \theta_t-\theta^*\big\rangle \qquad (1)\end{aligned}$$
Note that $0\le\beta_{1t}<1$; rearranging inequality (1), we have:
$$\begin{aligned}\langle g_t,\theta_t-\theta^*\rangle &\le \frac{1}{2\alpha_t(1-\beta_{1t})}\Big[\big\|s_t^{1/4}(\theta_t-\theta^*)\big\|^2-\big\|s_t^{1/4}(\theta_{t+1}-\theta^*)\big\|^2\Big]+\frac{\alpha_t}{2(1-\beta_{1t})}\big\|s_t^{-1/4}m_t\big\|^2-\frac{\beta_{1t}}{1-\beta_{1t}}\langle m_{t-1},\theta_t-\theta^*\rangle\\ &\le \frac{1}{2\alpha_t(1-\beta_{1t})}\Big[\big\|s_t^{1/4}(\theta_t-\theta^*)\big\|^2-\big\|s_t^{1/4}(\theta_{t+1}-\theta^*)\big\|^2\Big]+\frac{\alpha_t}{2(1-\beta_{1t})}\big\|s_t^{-1/4}m_t\big\|^2\\ &\quad+\frac{\beta_{1t}}{2(1-\beta_{1t})}\alpha_t\big\|s_t^{-1/4}m_{t-1}\big\|^2+\frac{\beta_{1t}}{2\alpha_t(1-\beta_{1t})}\big\|s_t^{1/4}(\theta_t-\theta^*)\big\|^2\\ &\qquad\text{(Cauchy–Schwarz and Young's inequality: } ab\le \tfrac{\epsilon a^2}{2}+\tfrac{b^2}{2\epsilon},\ \forall \epsilon>0\text{)}\qquad (2)\end{aligned}$$
By convexity of $f_t$, we have:
$$\begin{aligned}\sum_{t=1}^T f_t(\theta_t)-f_t(\theta^*) &\le \sum_{t=1}^T\Big\{\langle g_t,\theta_t-\theta^*\rangle+\frac{\beta_{1t}}{2\alpha_t(1-\beta_{1t})}\big\|s_t^{1/4}(\theta_t-\theta^*)\big\|^2\Big\}\qquad\text{(by formula (2))}\\ &\le \frac{1}{2(1-\beta_1)}\frac{\big\|s_1^{1/4}(\theta_1-\theta^*)\big\|^2}{\alpha_1}\qquad (0\le s_{t-1}\le s_t,\ 0\le \alpha_t\le\alpha_{t-1})\end{aligned}$$
|
2020-10-26 01:04:02
|
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8746516108512878, "perplexity": 1467.0275045579433}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-45/segments/1603107890108.60/warc/CC-MAIN-20201026002022-20201026032022-00197.warc.gz"}
|
https://learn.careers360.com/engineering/question-help-me-please-magnetic-effects-of-current-and-magnetism-neet/
|
# Two similar coils of radius r are lying concentrically with their planes at right angles to each other. The currents flowing in them are $I$ and $2I$, respectively. The resultant magnetic field induction at the centre will be: Option 1) $\frac{\sqrt{5}\mu_{o}I}{2R}$ Option 2) $\frac{{3}\mu_{o}I}{2R}$ Option 3) $\frac{\mu_{o}I}{2R}$ Option 4) $\frac{\mu_{o}I}{R}$
Magnetic field due to a circular current-carrying arc -
$B=\frac{\mu_{o}}{4\pi}\:\frac{2\pi i}{r}=\frac{\mu_{o}i}{2r}$
- wherein
$B_{1}=\frac{\mu_{o}I}{2r},\qquad B_{2}=\frac{\mu_{o}\cdot 2I}{2r},\qquad B=\sqrt{B_{1}^{2}+B_{2}^{2}}=\sqrt{5}\,\frac{\mu_{o}I}{2r}$
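A quick symbolic check of the computation above (a SymPy sketch of mine, not part of the original solution):

```python
from sympy import symbols, sqrt, simplify

mu0, I, r = symbols("mu_0 I r", positive=True)
B1 = mu0 * I / (2 * r)          # field at the centre of the coil carrying I
B2 = mu0 * (2 * I) / (2 * r)    # field at the centre of the coil carrying 2I
B = sqrt(B1**2 + B2**2)         # the two fields are perpendicular
print(simplify(B))              # -> sqrt(5)*I*mu_0/(2*r)
```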
Option 1)
$\frac{\sqrt{5}\mu_{o}I}{2R}$
This option is correct
Option 2)
$\frac{{3}\mu_{o}I}{2R}$
This option is incorrect
Option 3)
$\frac{\mu_{o}I}{2R}$
This option is incorrect
Option 4)
$\frac{\mu_{o}I}{R}$
This option is incorrect
|
2020-06-05 10:15:02
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 11, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9420980215072632, "perplexity": 5449.385198469425}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-24/segments/1590348496026.74/warc/CC-MAIN-20200605080742-20200605110742-00056.warc.gz"}
|
http://intarch.ac.uk/journal/issue8/huggett/jhproc1.html
|
## 4.1 Automated structures from basic primitives
The timber palisade and wall-walk surrounding the bailey and motte lend themselves to automated construction since they consist of a repetitive sequence of timber uprights (see Figure ). Information about the topography is held in an XYZ co-ordinate file and used to locate the palisade posts correctly in relation to the ground surface. The timbers between the main uprights are then filled in by 'walking' from post to post, filling in the spaces between. Basic box and cylinder primitives are used for the structural elements, with the only parameters other than the XYZ co-ordinates being the physical dimensions of the timbers.
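A minimal sketch of the 'walking' infill step (my own Python/NumPy illustration; the article gives no code, and the spacing value and names are assumptions):

```python
import numpy as np

def infill_posts(uprights, spacing=0.3):
    """Walk from upright to upright and return XYZ positions for the timbers in between.

    `uprights` is a sequence of (x, y, z) coordinates for the main palisade posts,
    taken from the site's coordinate file; `spacing` is an assumed centre-to-centre
    distance for the smaller infill timbers."""
    posts = []
    for a, b in zip(uprights[:-1], uprights[1:]):
        a, b = np.asarray(a, float), np.asarray(b, float)
        n_fill = max(int(np.linalg.norm(b - a) // spacing) - 1, 0)
        for k in range(1, n_fill + 1):
            posts.append(a + (b - a) * k / (n_fill + 1))   # evenly spaced along the run
    return np.array(posts)
```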
A variant of this approach was used in the construction of the bridge (Figure ). Here, a profile was generated between the two endpoints of the bridge which, together with a direction vector, was used to extrude the profile across the ditch. A series of direction vectors are similarly used to construct the bridge supports.
# combinations with repeated elements
Two combinations with repetition are considered identical if they have the same elements repeated the same number of times, regardless of their order. Formally, a k-combination with repeated elements chosen from the set $X=\{x_1,x_2,\ldots,x_n\}$ is a multiset of cardinality k having X as the underlying set; the definition is based on the multiset concept and generalizes the concept of combination with distinct elements. Here n is the total number of elements in the set and k is the number of elements selected. As with ordinary combinations, order does not matter; as with permutations with repetition, the same element may be selected multiple times.

The combinations with repetition of n elements taken k at a time are the different groups of k elements that can be formed from these n elements, allowing elements to repeat themselves, and considering that two groups differ only if they have different elements (that is to say, the order does not matter). Their number is

$$\displaystyle CR_{n,k}=\binom{n+k-1}{k}=\frac{(n+k-1)!}{(n-1)!\,k!}$$

Example. Let $A=\{a,b,c,d,e\}$, so $n=5$.

- Combinations with repetition taken one at a time: $a$, $b$, $c$, $d$ and $e$.
- Taken two at a time: the ten ordinary pairs $ab, ac, ad, ae, bc, bd, be, cd, ce, de$, and now also the groups with repeated elements $aa, bb, cc, dd, ee$: fifteen in total, matching $CR_{5,2}=\binom{6}{2}=15$.
- Taken three at a time: the ten ordinary triples $abc, abd, \ldots, cde$ plus all the groups containing a repeated element ($aab, aac, \ldots, dde$), thirty-five in total:

$$\displaystyle CR_{5,3}=\binom{5+3-1}{3}=\frac{7!}{4!\,3!}=35$$

Theorem. For $n,k\in\{0,1,2,\ldots\}$ with $n\ge 1$, the number $C'_{n,k}$ of k-combinations with repeated elements drawn from n distinct elements is

$$C'_{n,k}=\binom{n+k-1}{k}$$

The proof is given by finite induction on k. It is trivial for $k=1$, since no repetitions can occur and the number of 1-combinations is $n=\binom{n}{1}$. Assume the formula holds for k. The (k+1)-combinations can be partitioned into n subsets: those that include $x_1$ at least once; those that do not include $x_1$ but include $x_2$ at least once; and so on, down to those that include only $x_n$. Applying the inductive hypothesis to each subset and summing, repeated application of Pascal's rule for the binomial coefficient gives $\binom{n+k}{k+1}$, which completes the induction.

Related counts:

- With permutations we care about the order of the elements, whereas with combinations we do not. A permutation with repetition is an arrangement of objects in which some objects are repeated a prescribed number of times; when some of the objects are identical, the number of distinguishable arrangements is a multinomial coefficient. For example, a row of $n=8$ flags made up of $p=2$ red, $q=2$ blue and $r=4$ green flags can be displayed in $\frac{8!}{2!\,2!\,4!}=420$ distinguishable orders; likewise, an 11-letter word containing four occurrences of the letter i, four of the letter s and two of the letter p has $\frac{11!}{4!\,4!\,2!}=34650$ distinguishable arrangements.
- The number of k-combinations without repetition, summed over all k, is the number of subsets of a set of n elements, namely $2^n$.
- A typical use of combinations with repetition: you walk into a candy store with enough money for 6 pieces of candy; the number of distinct purchases you can make, choosing freely among the available varieties with repeats allowed, is a combination with repetition.

In Python, itertools.combinations(iterable, r) returns the r-length subsequences of elements from the input iterable (without repetition). Finding combinations from a set that itself contains repeated elements works the same way once the input is sorted, so that duplicate selections become adjacent and can be skipped. A short sketch follows.
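The sketch below (added for illustration) shows both the enumeration and the count: `itertools.combinations_with_replacement` lists the combinations with repetition, and the binomial formula above predicts how many there are; the last lines show the sort-then-deduplicate trick for a multiset input.

```python
from itertools import combinations, combinations_with_replacement
from math import comb

A = ['a', 'b', 'c', 'd', 'e']       # n = 5
k = 3

groups = list(combinations_with_replacement(A, k))
print(len(groups))                  # 35 combinations with repetition
print(comb(len(A) + k - 1, k))      # CR(5,3) = C(7,3) = 35, matching the formula

# Combinations *without* repetition drawn from a multiset: sort, then deduplicate.
multiset = sorted(['a', 'b', 'c', 'a'])
print(sorted(set(combinations(multiset, 2))))
# [('a', 'a'), ('a', 'b'), ('a', 'c'), ('b', 'c')]
```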
Semi-implication algebra. (English) Zbl 0856.08004
The author generalizes the concept of an implication algebra to that of a semi-implication algebra $$(A,\cdot)$$. $$A$$ naturally induces a $$q$$-semilattice. Define $$(a, b)\in R$$ for $$a, b\in A$$ iff $$(ab)b= 1b$$, and let $$B_p= \{a\in A\mid (p, a)\in R\}$$ for $$p\in A$$. Then every $$B_p$$ is a $$q$$-algebra. Further results concern the nilpotent shift of the variety of semilattices and implication algebras.
Reviewer: G.Kalmbach (Ulm)
##### MSC:
- 08A62 Finitary algebras
- 06A12 Semilattices
- 08B99 Varieties
# Rate of Convergence and Error Bounds for LSTD($\lambda$)
MAIA - Autonomous intelligent machine, Inria Nancy - Grand Est, LORIA - AIS - Department of Complex Systems, Artificial Intelligence & Robotics
Abstract : We consider LSTD($\lambda$), the least-squares temporal-difference algorithm with eligibility traces algorithm proposed by Boyan (2002). It computes a linear approximation of the value function of a fixed policy in a large Markov Decision Process. Under a $\beta$-mixing assumption, we derive, for any value of $\lambda \in (0,1)$, a high-probability estimate of the rate of convergence of this algorithm to its limit. We deduce a high-probability bound on the error of this algorithm, that extends (and slightly improves) that derived by Lazaric et al. (2012) in the specific case where $\lambda=0$. In particular, our analysis sheds some light on the choice of $\lambda$ with respect to the quality of the chosen linear space and the number of samples, that complies with simulations.
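The report itself is not reproduced here; as a concrete reference point, below is a minimal sketch of the LSTD($\lambda$) estimator the abstract describes, written from the standard formulation of the algorithm. The feature map, discount factor, regularization and trajectory format are assumptions of this example, not details taken from the paper.

```python
import numpy as np

def lstd_lambda(trajectory, phi, gamma=0.95, lam=0.5, reg=1e-6):
    """LSTD(lambda) on one trajectory of (state, reward, next_state, done) tuples.

    phi: callable mapping a state to a feature vector of length d.
    Returns theta such that V(s) ~ phi(s) . theta for the policy that generated the data.
    """
    d = len(phi(trajectory[0][0]))
    A = np.zeros((d, d))
    b = np.zeros(d)
    z = np.zeros(d)                              # eligibility trace
    for s, r, s_next, done in trajectory:
        f, f_next = phi(s), phi(s_next)
        z = gamma * lam * z + f                  # accumulate the trace
        A += np.outer(z, f - (0.0 if done else gamma) * f_next)
        b += z * r
        if done:
            z = np.zeros(d)                      # reset the trace at episode ends
    return np.linalg.solve(A + reg * np.eye(d), b)
```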
### Identifiers
• HAL Id : hal-00990525, version 1
• ARXIV : 1405.3229
### Citation
Manel Tagorti, Bruno Scherrer. Rate of Convergence and Error Bounds for LSTD($\lambda$). [Research Report] 2014. 〈hal-00990525〉
# Can the previous weather be computed from the current situation?
If one applies today's state-of-the-art weather forecast computations "backwards", i.e. computing how the system was X days before the current situation based on knowledge of today's situation, at which point do the "predictions" deviate significantly from the (known) situations?
In other words, are those computations reversible, and even possible? And if so, is the error level comparable (i.e. time-symmetric) or not?
Edit: Imagine one obtained the data from an Earth-like planet through observation (satellite etc.) at a given moment; would current weather models allow one to compute how the weather was before that time?
• I've never heard that anyone would do this. Any sources implying that someone does? I would guess that most models can't do this. – Communisty Jul 9 '18 at 9:32
• The problem I'd think is A definitely causes B, but the inverse, B can often have many causes of A, C, D, etc. I'd think that's a problem in most mathematical modeling of time? – JeopardyTempest Jul 9 '18 at 10:25
• @JeopardyTempest Sure, but some mathematical algorithms are reversible, some within limits and some simply aren’t. I don’t know into which category the ones used for weather predictions fall - hence me asking. – BmyGuest Jul 9 '18 at 10:27
• Validation is done by hindcasting, not by reversing the time axis, which seems to be what you're asking about. Reversing the time axis would involve water falling up to the clouds and cyclones moving equatorward and turning into potential vorticity. It seems involved and I'm not sure what benefit would be. – gerrit Jul 9 '18 at 11:58
• @BmyGuest Gerrit's is the right answer. – gansub Jul 9 '18 at 12:05
The underlying equations for fluid-dynamic models are hyperbolic partial differential equations. They can generally be written in the form $$\frac{\partial}{\partial t} u(t) = D(u(t))$$ where $D$ in some way evaluates the current state of the system and its spatial derivatives.
A numerical simulation then integrates this differential equation, to extrapolate from a start state $u(t_0) = u_0$ the time-dependent $u(t)|_{t>t_0}$.
Well, if we can do that, then surely we could also solve in the inverse time direction, by considering the equation $$\frac{\partial}{\partial t} u(-t) = -D(u(t))$$ and running the integrator with $\tilde t = -t$, $\tilde D = -D$?
Actually, you quickly run into problems when you try that. The operator $D$ can be characterised by its Jacobian, which basically tells you how perturbations in the state influence the derivative. Specifically, the complex eigenvalues of the Jacobian can tell you whether a small deviation will a) amplify over time (positive real part), or b) decay (negative real part), or c) just oscillate (purely imaginary).
For physical systems the eigenvalues tend to be mostly c) or b): you get a lot of wave-like solutions which propagate / oscillate over the system, and tend to decay over time. a) however is more tricky: if you start with a small deviation from the start state, the system will over time deviate ever more and more. Now, this kind of thing is by no means unheard of, especially in meteorology; it's the essence of a chaotic system. Storms can emerge and grow stronger over time, but only by scooping up energy that's already stored in the system. At some point they'll stop.
OTOH, you always have a lot of consistently negative real-part eigenvalues. These correspond to dissipative effects: small-scale perturbations generally are smoothed out to zero by the physical effects, e.g. winds have friction, mixing of air of different temperature averages out the differences, etc. If you now run the simulation backwards, you turn those negative real parts into positive real parts, and that means the system is suddenly massively chaotic on all length scales. Small perturbations arise out of numerical uncertainties, and grow over all bounds. You would not only end up with states different from the actual weather a week ago, but with states that are completely unlike anything the weather has ever been like: huge, erratic temperature fluctuations and small vortices with crazy wind speeds.
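(Not part of the answer: a tiny numerical illustration of the dissipative-eigenvalue point. Integrating a 1-D heat equation forward in time smooths the state, while stepping the same discretization with a negative time step amplifies round-off-scale, grid-scale wiggles until they dominate.)

```python
import numpy as np

n, dx, nu = 200, 1.0, 1.0
x = np.arange(n) * dx
u0 = np.exp(-0.5 * ((x - n * dx / 2) / 10) ** 2)   # smooth initial bump

def step(u, dt):
    # explicit finite-difference step of du/dt = nu * d2u/dx2 (periodic boundaries)
    lap = (np.roll(u, 1) - 2 * u + np.roll(u, -1)) / dx ** 2
    return u + dt * nu * lap

def run(u, dt, steps=200):
    for _ in range(steps):
        u = step(u, dt)
    return u

forward = run(u0.copy(), dt=+0.2)    # stable: perturbations decay
backward = run(u0.copy(), dt=-0.2)   # "reversed time": grid-scale noise is amplified each step
print(np.max(np.abs(forward)))       # stays of order 1
print(np.max(np.abs(backward)))      # astronomically large after a few hundred steps
```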
• If I want to put it into less-mathematical more intuitive phrasing, can one simply say: "Many (slightly) different situations would evolve to the same/similar weather situation (and the maths "smooths" over this) - so reverting doesn't get you to one of them (and the maths exaggerates the error)" ? – BmyGuest Jul 9 '18 at 15:55
• @Mark it'll depend on the level of detail you're trying to resolve. When you just run the dynamics backwards naïvely, I'd estimate you can do about 20-100 simulation time-steps before the state is dominated by unbounded oscillations, depending on the solver. Via the CFL condition, you can calculate what time span that is; it scales linearly with the size of the smallest feature you're resolving. – leftaroundabout Jul 9 '18 at 20:32
• TLDR: No, because entropy. – workoverflow Jul 10 '18 at 11:50
• @gansub yes. For most meteorological phenomena, Navier-Stokes behave mostly like the Euler equations, which are one of the classical examples of hyperbolic PDE. Only in the high-viscosity limit do Navier-Stokes turn into the parabolic diffusion equation, but that's AFAIK not relevant for weather prediction. – leftaroundabout Jul 10 '18 at 18:21
• @leftaroundabout - Here you go - scicomp.stackexchange.com/questions/11830/… – gansub Jul 11 '18 at 13:36
Validation is done by hindcasting, not by reversing the time axis, which seems to be what you're asking about. In hindcasting, we take the state at some time in the past, apply our weather models to that state (and the state before it), run the forecast model, and compare that to the reference state¹ ahead of that point in time.
There is a concept called backtracking, which is (for example) used to calculate where particulates have been first emitted (so we measure some plume, and calculate this originated 18 hours ago at a particular source). But this assumes knowledge of present and past winds, and is therefore different from what you ask.
Reversing the time axis would involve water falling up to the clouds and cyclones moving equatorward and turning into potential vorticity. It seems involved and I'm not sure what the benefit would be. I don't think this can be done with existing models, and it would be a lot of effort to make it work.
¹ It's not as simple as that, because the full "actual state" also involves modelling to "fill in the gaps" in time and space, between all the times and places where we have measurements. This is known as re-analysis.
• Maybe "will be done" instead of "can be done". It seems to me that, given enough research effort, it could be done. But the benefit is questionable, so it likely won't be done. – Ian MacDonald Jul 9 '18 at 13:32
• @IanMacDonald Right. I just meant that it's not supported by current models, but of course, it can theoretically be developed given enough time and expertise. Edited for clarity. – gerrit Jul 9 '18 at 13:36
A completely different approach from the one leftaroundabout pointed out is to use recurrent neural networks. These were made to predict the future development of a time series by first learning the hidden model itself and then using it to guesstimate the future values. The advantage of this method is that not even the slightest knowledge about meteorology is needed; all the modelling of the weather system is done by the algorithm training the neural network.
In fact there was a Kaggle competition with the task to predict rainfall from past data: http://simaaron.github.io/Estimating-rainfall-from-weather-radar-readings-using-recurrent-neural-networks/ . The winner used recurrent neural networks.
In the case of predicting past values from current values the same architecture can be used, as it can learn the backward model as directly as a forward model.
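(Not part of the answer: a minimal sketch of the kind of sequence model described, using PyTorch as an assumed framework; the feature count, hidden size and single-output head are illustrative choices. Training the same network on time-reversed sequences is what would make it a "backward" model.)

```python
import torch
import torch.nn as nn

class RainfallRNN(nn.Module):
    """Predict the next (or, on reversed sequences, the previous) value of a time series."""
    def __init__(self, n_features, hidden=64):
        super().__init__()
        self.lstm = nn.LSTM(n_features, hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)

    def forward(self, x):                 # x: (batch, time, n_features)
        out, _ = self.lstm(x)
        return self.head(out[:, -1])      # regression from the last hidden state
```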
• And can those be used to reverse the time axis? I imagine that a purely statistical approach could, but you're not actually addressing the core question in this answer. – gerrit Jul 12 '18 at 7:07
In a nutshell, no.
The reason why is because of the Butterfly Effect. In a system where the current state depends on the previous state in an iterative way, you can get chaotic effects. Chaotic effects can magnify extremely tiny inputs to gigantic changes over time.
This was first noted by the excellent mathematician and meteorologist Edward Lorenz. This is a decent explanation of how he came to notice that the equations predicting the weather are extremely sensitive to current conditions. You simply can't build a computer with enough sensitivity to do a good job.
Since tiny fluctuations can cause huge effects over time, you have to ask yourself - how much information can your simulation encompass? Lorenz showed that tiny things can change the entire landscape over time. To be accurate, a simulation would have to take into account every source of small changes - sunspot activity, the wobble of the moon, the gravitational tug from Pluto...the list is endless.
So unfortunately for a chaotic system like weather, you can't predict with any accuracy previous or future states.
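(Added for illustration, not from the original answer: two Lorenz-63 trajectories started a distance of only 1e-9 apart separate to differences comparable to the size of the attractor within a few dozen time units; a crude Euler integration is enough to see the effect.)

```python
import numpy as np

def lorenz_step(state, dt=0.01, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    x, y, z = state
    dx = sigma * (y - x)
    dy = x * (rho - z) - y
    dz = x * y - beta * z
    return state + dt * np.array([dx, dy, dz])

a = np.array([1.0, 1.0, 1.0])
b = a + np.array([1e-9, 0.0, 0.0])      # tiny perturbation in x
for _ in range(4000):                   # 40 time units with dt = 0.01
    a, b = lorenz_step(a), lorenz_step(b)
print(np.linalg.norm(a - b))            # no longer tiny: the trajectories have diverged
```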
• While Chaos Theory is indeed valid (leftaroundabout goes into great summarization of the mathematics of how it really works)... we can still make very good predictions at a certain length. Your answer makes it sound like all forecasting is hopeless. – JeopardyTempest Jul 10 '18 at 0:54
• In particular, "you can't predict with any accuracy previous or future states"... for example, NHC 48 hour tropical cyclone track forecasts are down to about 75 miles average error (whereas persistence+climatology is around 225 miles)... and continues to show improvement by on the order of 25% per decade. In the grand scheme, the window of predictability is pretty limited... but you certainly CAN with some accuracy predict some future states. – JeopardyTempest Jul 10 '18 at 0:56
• @JeopardyTempest, you are correct. Some prediction is of course possible, I did somewhat overstate the case when I said that any accuracy for all states is impossible. Mathematically that is true, but in practice it is not. You can predict some future states with the caveat that you understand that it is an approximation and not an exact representation, and with weather being a chaotic system it will diverge from your expectations fairly quickly. – BoredBsee Jul 10 '18 at 14:25
# A parallel plate capacitor with air between the plates has a capacitance of $9 \;pF$. The separation between its plates is d. The space between the plates is now filled with two dielectrics. One of the dielectrics has dielectric constant $K_1 = 3$ and thickness $\large\frac{d}{3}$ while the other one has dielectric constant $K_2 = 6$ and thickness $\large\frac{2d}{3}$ . Capacitance of the capacitor is now
Answer: $(C)\ 40.5\ pF$
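A short worked check (added here, not part of the original page): the two dielectric layers act as capacitors in series, with $C_0=\varepsilon_0 A/d = 9\ pF$ for the air-filled capacitor:

$$\frac{1}{C}=\frac{d/3}{K_1\,\varepsilon_0 A}+\frac{2d/3}{K_2\,\varepsilon_0 A}=\frac{d}{\varepsilon_0 A}\left(\frac{1}{9}+\frac{1}{9}\right)=\frac{2}{9}\,\frac{d}{\varepsilon_0 A}$$

$$\Rightarrow\quad C=\frac{9}{2}\,\frac{\varepsilon_0 A}{d}=4.5\times 9\ pF=40.5\ pF,$$

which is option (C).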
## dakotajason27: Find the exact value without a calculator: $\log_{\sqrt 3} 3$
1. asnaseer: Use the log rule that if $\log_a(b)=c$ then $b=a^c$.
2. asnaseer: In this case $a=3^{\frac{1}{2}}$, $b=3$, and you need to find $c$.
3. dakotajason27: so c is 2? since the 1/2 and the 2 will make it 3^1 which is 3
4. asnaseer: correct :)
5. dakotajason27: thxs, just need a refresher on this stuff, got some more for yuh haha
6. asnaseer: yw :)
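(A one-line numerical sanity check, added here and not from the thread, using the change-of-base rule $\log_{\sqrt 3}3=\ln 3/\ln\sqrt 3$:)

```python
import math
print(math.log(3, math.sqrt(3)))   # 2.0, up to floating-point rounding
```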
# Challenging the Chebychev function / prime number theorem?
The prime number theorem is equivalent to the following statement about the first Chebychev function:
$$\lim_{x\rightarrow\infty}\frac{\vartheta(x)}{x}=1 \qquad (1)$$
According to Muñoz García, E. and Pérez Marco, R. "The Product Over All Primes is $4\pi^2$":
$$\prod_{p\in \Bbb P}p=4\pi^2 \qquad (2)$$ hence
$$\sum_{p\in \Bbb P}\log p=\log (4\pi^2)=2\log(2\pi) \qquad (3)$$
Recall
$$\vartheta(x)=\sum_{\substack{p\le x \\ p\in \Bbb P}} \log p \qquad (4)$$
How can both $(1)$ and $(3)$ be true without contradicting each other?
Note: the second Chebychev function $\psi(x)$ could be similarly challenged.
• Note that clearly $2$ and $3$ diverge, these are regularized products/sums, they don't represent the value of the sum/product in the principle sense. The zeta function has numerous representations valid for different domains, as a similar example we know that $$\frac{1}{1-x}=1+x+x^2+x^3..., \text{ for |x|<1}$$ And it would make sense to say that $$-1=\frac{1}{1-2}$$ But not to say $$-1=1+2+2^2+2^3+2^4...$$ But to be honest, I know very little about this kind of thing, though I think some keywords would be "Analytic continuation" and "Zeta function regularization". – Ethan Jun 14 '13 at 20:20
There is nothing contradictory about the two facts. Perhaps at face value it would seem like
$$\sum \log p=\log(4\pi^2)\implies \vartheta\to \log(4\pi^2),$$
which contradicts $\vartheta\sim x\to\infty$ (PNT), but the product-of-primes formula does not actually say that the sum $\sum \log p$ converges to $\log(4\pi^2)$. If you interpreted it this way, the first thing that would seem weird, before even comparing it to PNT, is that $\log p\to\infty$ so the sum is obviously divergent, contradicting the supposed fact that the sum converges to $\log(4\pi^2)$ (a finite value).
One needs the correct way to interpret the identity "$\prod p=4\pi^2$," since it clearly is not true in the usual calculus sense of limits. In general, there are methods of reinterpreting divergent sums so that they become actual finite values - the keywords here are "regularization" & "summability."
Regularization applies to many types of expressions where infinity or divergent values appear, for example in functional integrals in quantum physics. Often it is applied to sums, and it is done using a phenomena called analytic continuation.
The general set-up is that an expression defines a function but only in a certain region of (say) the complex plane - for example $1+z+z^2+\cdots$ only converges when $|z|<1$ - and identify another function defined on a larger domain which agrees with every value of the original function on the original domain - in our example, that would be the rational function $1/(1-z)$, which is defined for all $z\in{\bf C}\setminus\{1\}$ and equals the infinite geometric series wherever the latter is defined.
One cool family of functions whose analytic continuations are interesting are given by Dirichlet series, ones of the form $L(s)=\sum_{n\ge1}a_nn^{-s}$, and in particular the Riemann zeta function $\zeta(s)$ given by $\sum_{n\ge1}n^{-s}$ for ${\rm Re}(s)>1$ (notice when $s=1$ it is the Harmonic series, which diverges), which has functional equation $\pi^{-(1-s)/2}\Gamma(\frac{1-s}{2})\zeta(1-s)=\pi^{-s/2}\Gamma(\frac{s}{2})\zeta(s)$ allowing $\zeta$ to be defined for all values $s\in{\bf C}\setminus\{1\}$, with a simple pole at $s=1$ with residue $1$.
(The $\pi^{-s/2}\Gamma(\frac{s}{2})$ factor is somewhat mysterious at first - it can be seen as the Euler factor at the so-called prime "$p=\infty$"; this derives from an elegant adelic formulation given by Tate's thesis.)
The actual details of the mathemagical regularization process to obtain $\prod p=4\pi^2$ is given in the cited article. To summarize, Möbius inversion yields
$$\zeta(s)=\sum_{n=1}^\infty\frac{1}{n^s}\iff \sum_{p\rm\,prime}\frac{1}{p^s}=\sum_{n=1}^\infty \frac{\mu(n)}{n}\log\zeta(ns)$$
Differentiate both sides of the second equality and "evaluate at $s=0$" to obtain
$$\sum_{p\rm\,prime}\frac{-\log p}{p^0} \quad"="\quad \underbrace{\sum_{n=1}^\infty \mu(n)}_{\displaystyle\left.\frac{1}{\zeta(s)}\right|_{s=0}}\frac{\zeta'(0)}{\zeta(0)} \quad"="\quad \frac{\zeta'(0)}{\zeta(0)^2}=-2\log(2\pi)$$
and hence $\prod p=4\pi^2$ upon exponentiating both sides of the "equality."
Ultimately, the identity is not true in the sense of limits, it is true in the sense of regularized sums and products. The regularization process is to identify the divergent expression as a more general expression evaluated at a point outside its domain, analytically continue that general expression instead and then evaluate at the given point so an actual value is obtained. In general, the value will depend on the choice of "more general expression."
As it happens, slick shortcuts in the regularization process can be obtained by maneuvering in ways that are both technically invalid but conceptually careful and clever, as above. Leonhard Euler is known for these sorts of maneuvers in obtaining finite values for divergent sums. As it happens, in some contexts (where the topology and therefore convergence is very different), the manipulations are actually perfectly technically valid - for example $1-2+4-8+\cdots=1/3$ in the $2$-adics.
The fact that the phenomena of analytic continuation exists and can be harnessed to regularize divergent sums and products seems almost mystical. In fact, contributing even further to the mystery, regularization provides empirically correct numerical values in physics sometimes (for instance, see the Casimir effect). And, according to my understanding, in theoretical physics the mathematical justification for why (zeta-)regularization works out "correctly" is still unknown.
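(Added as a numerical aside, not part of the answer: the final identity $\zeta'(0)/\zeta(0)^2=-2\log(2\pi)$ can be checked directly, assuming mpmath's zeta supports the derivative keyword for derivatives in $s$.)

```python
# Numerical check of zeta'(0) / zeta(0)^2 = -2 * log(2*pi).
from mpmath import mp, zeta, log, pi

mp.dps = 30
lhs = zeta(0, derivative=1) / zeta(0) ** 2
rhs = -2 * log(2 * pi)
print(lhs)   # approximately -3.67575413...
print(rhs)   # same value
```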
• thanks aon. great description. there remains however a sense of mystery and what the intuition behind "regularised product of primes" is. – al-Hwarizmi Jun 15 '13 at 8:15
# Count the number of parallelograms in the given figure.
$(A) 8 \\ (B) 11 \\(C)12 \\ (D)15$
## 20 is what percent of 160? = 12.5
#### Solution for 20 is what percent of 160:
$$20:160\cdot 100 = (20\cdot 100):160 = 2000:160 = 12.5$$
Now we have: 20 is what percent of 160 = 12.5.
Question: 20 is what percent of 160?
Percentage solution with steps:
Step 1: We make the assumption that 160 is 100% since it is our output value.
Step 2: We next represent the value we seek with $x$.
Step 3: From step 1, it follows that $100\% = 160$.
Step 4: In the same vein, $x\% = 20$.
Step 5: This gives us a pair of simple equations:
$$100\% = 160 \qquad (1)$$
$$x\% = 20 \qquad (2)$$
Step 6: Dividing equation (1) by equation (2), and noting that the left-hand sides of both equations carry the same unit (%), we have
$$\frac{100\%}{x\%}=\frac{160}{20}$$
Step 7: Taking the inverse (or reciprocal) of both sides yields
$$\frac{x\%}{100\%}=\frac{20}{160}\;\Rightarrow\; x = 12.5\%$$
Therefore, 20 is 12.5% of 160.
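(The same computation as a tiny sketch, added for illustration:)

```python
# "part is what percent of whole"
def percent_of(part, whole):
    return part / whole * 100

print(percent_of(20, 160))   # 12.5
```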
## Extra Information About 20 is what percent of 160 That You May Find Interested
If the information we provide above is not enough, you may find more below here.
### 20 is what percent of 160? = 12.5 – Percentage Calculator
• Author: percentagecal.com
• Rating: 3⭐ (246772 rating)
• Highest Rate: 5⭐
• Lowest Rate: 1⭐
• Sumary: 20:160*100 =
• Matching Result: Therefore, $20$20 is $12.5\%$12.5% of $160$160 …
• Intro: 20 is what percent of 160? = 12.5 Solution for 20 is what percent of 160: 20:160*100 = ( 20*100):160 = 2000:160 = 12.5Now we have: 20 is what percent of 160 = 12.5Question: 20 is what percent of 160?Percentage solution with steps:Step 1: We make the assumption that 160…
Xem Thêm: Top 10 how to become a obgyn nurse That Will Change Your Life
### 20 is what percent of 160 – Answers by Everydaycalculation.com
• Rating: 3⭐ (246772 rating)
• Highest Rate: 5⭐
• Lowest Rate: 1⭐
• Sumary: What percent is 20 of 160? The answer is 12.5%. Get stepwise instructions to work out “20 is what percent of 160?”
• Matching Result: 20 of 160 can be written as: 20/160 · To find percentage, we need to find an equivalent fraction with denominator 100. Multiply both numerator & denominator by …
• Intro: 20 is what percent of 160? or, What percent is 20 of 160?20 of 160 is 12.5%Steps to solve “what percent is 20 of 160?”20 of 160 can be written as:20/160To find percentage, we need to find an equivalent fraction with denominator 100. Multiply both numerator & denominator by 10020/160…
### 20 is what percent of 160
• Author: percentage-off-calculator.com
• Rating: 3⭐ (246772 rating)
• Highest Rate: 5⭐
• Lowest Rate: 1⭐
• Sumary: Solution: 20 is what percent of 160 is equal to (20 / 160) x 100 = 12.50%. So if you buy an item at $160 with$20 discounts, you will pay $140 and get 12.50% discount cashback… • Matching Result:$20 out of 160 is what percent ; The question $20 out of 160 is 12.50%, which is the same as 20/160 as a percent. This can be solved using this calculator above. • Intro: 20 is what percent of 160 | what percent of 160 is 20 what percent of 160 is 20 ? Solution: 20 is what percent of 160 is equal to (20 / 160) x 100 = 12.50%. So if you buy an item at$160 with \$20 discounts, you will…
### What is 20 of 160 as percentage – Aspose Products
• Author: products.aspose.app
• Rating: 3⭐ (246772 rating)
• Highest Rate: 5⭐
• Lowest Rate: 1⭐
• Sumary: Here you can see how the percentage of 20 out of 160 is calculated, as well as what your score will be according to your grading scale if you answered 20 questions out of 160 correctly. Easily find out the test percentage score and…
• Matching Result: First, you need to calculate your grade in percentages. The total answers count 160 – it’s 100%, so we to get a 1% value, divide 160 by 100 to get 1.60. Next, …
• Intro: What is 20 of 160 as percentage – Aspose percentage calculator Aspose.OMR Aspose Grade Calculator helps you to calculate your percentage grade for {0} of {1}. In addition, you can modify these values and choose different grading systems to get your letter grade. Aspose Grade Percentage Calculator is a free…
Xem Thêm: Top 10 how much does a senior partner lawyer make That Will Change Your Life
### What is 20 percent of 160 – percentagecalculator.guru
• Author: percentagecalculator.guru
• Rating: 3⭐ (246772 rating)
• Highest Rate: 5⭐
• Lowest Rate: 1⭐
• Sumary: What is 20 percent of 160? How much is 20% of 160?
• Matching Result: 20 percent of 160 is 32. 3. How to calculate 20 percent of 160? Multiply 20/100 with 160 = (20/100)*160 = (20*160)/ …
• Intro: Percentage Calculator: What is 20 percent of 160 – percentagecalculator.guru20 percent *160= (20/100)*160= (20*160)/100= 3200/100 = 32Now we have: 20 percent of 160 = 32Question: What is 20 percent of 160?We need to determine 20% of 160 now and the procedure explaining it as suchStep 1: In the given case…
### 20 Out of 160 is What Percent? – Online Calculator
• Author: online-calculator.org
• Rating: 3⭐ (246772 rating)
• Highest Rate: 5⭐
• Lowest Rate: 1⭐
• Sumary: 160 Out of 20 is What Percent?
• Matching Result: 20 Out of 160 is 12.50%. Follow the below steps to calculate what percent is 20 out of 160 or how to write 20/160 as a percentage. Step 1: Percentage Formula …
• Intro: 20 Out of 160 is What Percent? Online Calculators > Math Calculators What is 20 out of 160 as a percentage? – 12.50% is how to write 20/160 as a percent. Step by step instruction on how to calculate and find out 20 out of 160 is what percent. 20…
### 20 is What Percent of 160? – Online Calculator
• Author: online-calculator.org
• Rating: 3⭐ (246772 rating)
• Highest Rate: 5⭐
• Lowest Rate: 1⭐
• Sumary: 20 is What Percent of 160? – 20 is 12.5% of 160 Follow the below instructions on how to calculate 20 is what percent of 160.
• Matching Result: x is what percent of y Formula = x / y * 100. Step 2. Plugin the percentage formula above, and we get x / y * 100 = 20 / 160 x 100 = 0.125 x 100
• Intro: 20 is What Percent of 160?20 is What Percent of 160? – 20 is 12.5% of 160. Follow the below instructions on how to calculate 20 is what percent of 160. 20 is What Percent of 160? is what percent of Answer: 12.5% 20 is what percent of 10 =…
### 20 is what percent of 160? (What percent of 160 is 20?) – Adding
• Summary: 20 is what percent of 160? Here you will learn how to calculate: What percent of 160 is 20? Step-by-step instructions with solution from a math teacher.
• Matching Result: This math word problem is asking “what percent” so we know that the answer should be a percent. Specifically, what percent (x) of 160 equals 20.
• Intro: 20 is what percent of 160? (What percent of 160 is 20?) The question here is “20 is what percent of 160?” which is the same as “What percent of 160 is 20?” This math word problem is asking “what percent” so we know that the answer should be a…
### 20 is what percent of 160 – step by step solution
• Author: geteasysolution.com
• Summary: Simple and best practice solution for 20 is what percent of 160. Check how easy it is, and learn it for the future. Our solution is simple and easy to understand, so don't hesitate to use it…
• Matching Result: Simple and best practice solution for 20 is what percent of 160. Check how easy it is, and learn it for the future. Our solution is simple, …
• Intro: 20 is what percent of 160. If it's not what you are looking for, type your own values into the calculator fields, and you will get the solution. To get the solution we are looking for, we need to point out what we know. 1. We assume that the number…
If you have questions that need to be answered about the topic 20 is what percent of 160, then this section may help you solve it.
### There are 160 answers in total.
The total answers count is 160, which is 100%, so divide 160 by 100 to get the 1% value (1.60). Then calculate the percentage of 20: divide 20 by the 1% value (1.60), and you'll get 12.50%, which is your percentage grade.
### 20% of 160 is 32
20% of 160 equals 32, since (20/100) × 160 = 3200/100 = 32.
### When utilizing a calculator,…
If you’re using a calculator, just type in 100 160 100 to get the result, which is 62.5.
# Thread: Another "mixed partials" question
1. ## Another "mixed partials" question
The question and my work are both attached as PDF files.
Basically, in my work I found the mixed partial derivatives to be different and therefore did not proceed, but apparently this is wrong. Could someone help me figure out what I am doing wrong?
Any help would be greatly appreciated!
2. ## Re: Another "mixed partials" question
This time, your $\partial N/\partial x$ should have been -3. You were asked to find an integrating factor in order to make it exact. It doesn't appear as though you did that. I'd suggest an integrating factor of the form $h=y^{n}.$ You might ask how I found that. I started by assuming an integrating factor of the form $h(x^{m}y^{n}),$ and then using the exactness condition to solve a differential equation for $h.$
3. ## Re: Another "mixed partials" question
Could you please elaborate on how I find an integrating factor to make equations exact? I am breaking my head over this and I am still confused by what you said (I've been rereading it and watching YouTube videos, etc). Does your way always work (assuming that an integrating factor does exist)?
Assuming I followed the procedure correctly, the method here (Integrating factors 1 - YouTube) did not work for me for this problem, by the way.
4. ## Re: Another "mixed partials" question
Here's the basic idea, and this will work for a wide variety of first-order ODE's, but certainly not all first-order ODE's. You multiply through the DE by $h(x^{m}y^{n}),$ and then assert the exactness condition. In your case, you have
$(y-3y^{4})\,dx=(y^{3}+3x)\,dy,$ or
$(y^{3}+3x)\,dy+(3y^{4}-y)\,dx=0,$ and hence
$h(x^{m}y^{n})(y^{3}+3x)\,dy+h(x^{m}y^{n})(3y^{4}-y)\,dx=0.$
Asserting the exactness condition yields that
$h'(x^{m}y^{n})(mx^{m-1}y^{n})(y^{3}+3x)+h(x^{m}y^{n})(3)$
$=h'(x^{m}y^{n})(nx^{m}y^{n-1})(3y^{4}-y)+h(x^{m}y^{n})(12y^{3}-1).$
So you whittle things down and simplify, etc., etc., etc. You will often, at some point, have an option to choose one of the exponents, $m$ or $n$, in order to simplify things greatly. That's a bit of an art. The three options I would look at first are n=0, m=0, or n=m. Your goal is to solve this first-order differential equation for $h.$ Then that's your integrating factor.
# Two-dimensional relativistic hydrogenic atoms: A complete set of constants of motion
Research paper by A. Poszwa, A. Rutkowski
Indexed on: 11 Sep '08. Published on: 11 Sep '08. Published in: Quantum Physics
#### Abstract
The complete set of operators commuting with the Dirac Hamiltonian and exact analytic solution of the Dirac equation for the two-dimensional Coulomb potential is presented. Beyond the eigenvalue $\mu$ of the operator $j_{z}$, two quantum numbers $\eta$ and $\kappa$ are introduced as eigenvalues of hermitian operators $P=\beta\sigma_{z}'$ and $K=\beta(\sigma_{z}'l_{z}+1/2)$, respectively. The classification of states according to the full set of constants of motion without referring to the non-relativistic limit is proposed. The linear Paschen-Back effect is analyzed using exact field-free wave-functions as a zero-order approximation.
# Precalculus Examples
Since is on the right side of the equation, switch the sides so it is on the left side of the equation.
To find the coordinate of the vertex, set the inside of the absolute value equal to . In this case, .
Since does not contain the variable to solve for, move it to the right side of the equation by subtracting from both sides.
Replace the variable with in the expression.
Simplify to get .
The absolute value is the distance between a number and zero. The distance between and is .
The absolute value vertex is .
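The expressions in this worked example did not survive extraction, so as a purely hypothetical illustration of the same steps, take $y = |x+2| - 3$: setting the inside of the absolute value equal to zero gives $x + 2 = 0$, so $x = -2$; substituting back gives $y = |-2+2| - 3 = 0 - 3 = -3$; so the vertex would be $(-2,-3)$.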
# Tag Info
25
Bitslicing is a technique where computation is: Reduced to elementary operations (called gates) with a single bit output (typically NOR, XOR, and similar, like OR, AND, NAND, NXOR, often with a further restriction to two inputs), rather than operations on words or integers spanning several bits. Executed in parallel, with as many simultaneous instances (on a ...
22
The basic idea of bitslicing, or SIMD within a register, involves two parts: expressing the cipher in terms of single-bit logical operations (AND, OR, XOR, NOT, etc.), as if you were implementing it in hardware, and carrying out those operations for multiple instances of the cipher in parallel, using bitwise operations on a CPU. That is, in a bitsliced ...
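As a toy illustration of the "SIMD within a register" idea (my own sketch, not from the answer): pack one bit of each of 64 independent cipher instances into one machine word, and a single bitwise operation then acts as 64 parallel one-bit gates.

```python
import random

# Bit i of a and b holds one input bit of instance i (64 instances per word).
a = random.getrandbits(64)
b = random.getrandbits(64)

# One CPU instruction evaluates the XOR gate for all 64 instances at once.
c = a ^ b
print(f"{c:064b}")
```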
16
It depends. Specifically, it depends on the type of cipher, and on the way it's used. For stream ciphers like RC4, and for block ciphers like AES in CTR and OFB modes, decryption is effectively identical to encryption, and thus takes the exact same time. (Minor exception: encryption may require generating a unique nonce / IV, which might take a small ...
15
Contrary to the other answer, I'll be assuming the hash function is of the password-oriented kind; and my answer will be: input size has almost no influence on speed in good practice, even for much longer input than in the question. Password-oriented (or entropy-stretching, key-stretching) hash functions are, for example, suitable to transform a (password, ...
15
There are two important differences between AES-128 and AES-256: AES-128 has 10 rounds, AES-256 has 14 The key expansion process (that is, how they generate subkeys) is different If your AES-128 encryption hardware just takes a plaintext block and a 128 bit key, and produces a ciphertext block, well, no, there's not much you can do. In this case, the ...
13
ECDSA should in general create signatures faster than RSA for the same cryptographic strength if you just look at the mathematics. In the end the modular exponentiation is performed for smaller numbers. However, ECDSA depends on a random number generator, so ECDSA speeds may be slower if the random number generator blocks for any reason (and not using a good ...
13
The security level of an elliptic curve group is approximately $\log_2{0.886\sqrt{2^n}}$. You can use this to approximate the security level of an $n$-bit key, e.g.: $\log_2{0.886\sqrt{2^{571}}} = 285.32537860389294$ The real computation (at least for curves over a finite field defined by a prime $p$) is $\log_2{\sqrt{\pi/4}\sqrt{ℓ}}$, where $ℓ$ is the ...
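A quick way to reproduce the figure quoted in that answer (a sketch; plain floating point is accurate enough here):

```python
import math

def ec_security_bits(n: int) -> float:
    # log2(0.886 * sqrt(2^n)) = log2(0.886) + n/2
    return math.log2(0.886) + n / 2

print(ec_security_bits(571))   # ~285.33, matching the value in the answer
```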
13
The fastest block cipher is identity, which leaves input blocks completely unchanged. This is infinitely fast on all platforms; however, it is not secure. So maybe you want the fastest block cipher that still offers some given non-trivial level of security? Then it depends a lot on what you want to implement the block cipher on. With recent PC, you would ...
11
Are you sure? Float operations are very hard to reproduce in diverse environments. Do you round towards positive, negative or zero? Do you handle denormals or just treat them as zero? What about dividing by zero? I'm sure we would love to have that problem with every cipher we implement. Looking for a CSPRNG that's very fast on GPUs, and would be hard ...
11
As cypherfox had correctly pointed out during our chat, two rounds is not enough to reliably diffuse a single changed bit throughout the entire output. My question appears to have been answered directly by Bernstein's page on diffusion for Salsa20 (note that ChaCha has better diffusion). Quoting https://cr.yp.to/snuffle/diffusion.html The following pictures ...
10
To perform a good check, please use a large number of iterations (i.e. a for loop) of calls to the cryptographic primitive with your specific data size. Then average these out. To avoid compiler optimizations, be sure to print out the result after the clock has been stopped. For instance, you could XOR all the outputs together and print that out. Otherwise ...
10
I second Richie Frame's observation that AES is an excellent choice. I'd use AES-128 in CTR mode, which has the advantage that decryption is the same as encryption (thus is as fast, contrary to some other modes). Update: SPECK, considered in this other answer, is good if compactness or speed per encryption for narrow block size are the choice criteria. ...
9
Bitslicing is a technique that allows multiple instructions/Data points to be encoded into a single register. The idea is that you encode several bitwise operations within a single register. So, instead of 32 bitwise OR operations in sequence, you could reduce the total number of operations by cramming the data into SIMD registers and executing in ...
9
If you are using the full HKDF each time, you could possibly save time by only using the Extract portion once and Expand once per derived key. That could even halve the total time taken, if you had a worst case situation. Another speedup possibility within HKDF is to use another hash. Either a faster hash or one that matches the required key length better. ...
9
It depends how the “AES-128 encryption hardware units” you mention are actually defined. I've already encountered processors that allow to independently compute AES operations such as $\texttt{SubBytes}$ and $\texttt{MixColumns}$ – which are the same regardless the key size involved (128 or 256 bits). In that case: yes, it can speed up the calculation for ...
9
SPECK was actually designed with 8-bit CPUs in mind. I use Simon and Speck extensively, and there's example source code and comparisons out there, as well as a good paper. The references are good and will lead you the the original sources. AES is generally faster but takes more resources, which you may or may not have. I do not use AES on a MCU because ...
8
Is Rijndael the fastest block cipher in the world? No. On an Intel 64 Sandy Bridge without AES-NI, AES (a subset of Rijndael) is outperformed by ChaCha20 (and also likely by Threefish 512, which has about 6-7 cpb cost on an older Intel Core 2 Duo with 64-bit ASM (link: original Skein paper PDF)) as opposed to AES' 11 cpb. (7.59 cpb on an Intel Core 2) What ...
8
Poly1305 is not based on AES, it was used together with AES in Bernstein's first description http://cr.yp.to/mac/poly1305-20050329.pdf. For pseudocode of the Poly1305 algorithm see e.g. https://tools.ietf.org/html/rfc7539#section-2.5.1. GHASH is the 'hash function' in AES/GCM. So if Poly1305 is faster than GHASH on some hardware this is no contradiction. ...
7
Given the choice, it is preferable to use the block encryption operation of AES, since it often faster than block decryption (never slower AFAIK). For this reason, AES-CTR is defined to use the block encryption operation of AES exclusively; that's both for AES-CTR encryption and AES-CTR decryption, which are the same operation except for IV generation/input. ...
7
In RSA encryption as practiced (that is, to encipher a message which is a short symmetric key), the message size after padding is fixed and equal to the modulus size. Thus the size of the message has no impact on performance. Calculating a modular inverse is performed only during key generation, that is seldom. Also, it has low cost compared to generating ...
7
Yes, AES-128 is intended to be the standard block cipher for building a secure and efficient symmetric cryptosystem using some block cipher operating mode, like CTR for encryption or GCM for authenticated encryption; efficiency can be particularly good when there is hardware support for AES and GCM. There might be better choices in the case at hand, like ...
7
Dedicated stream ciphers typically are, or at least can be, somewhat faster than constructions based on block ciphers. (If they weren't, there would be no point in using them, since a block cipher can do everything a dedicated stream cipher can.) What you gain in speed (and possibly code size), however, you lose in versatility: A block cipher (in CTR / ...
7
The previous answer has the correct formula for estimating the security level of prime field elliptic curves. However, the table seems to just list the closest Koblitz curve sizes used, as Richie Frame points out. If you computed the actual security strength of the curves in question, you would not end up with exactly the values in the left column. For ...
7
All hashes I know of are block oriented. The time required to calculate the hash scales with the number of blocks to be hashed. There is a small constant overhead dealing with the IV and, possibly, a finalization function.
6
From the diagram on CTR mode you can notice that there are no dependencies between any of the phases of the pipeline. If you have more than one block-size worth of data, you can process each block-size chunk completely independently of the others by calculating $\mathrm{ciphertext}_i = E(\mathrm{key}, \mathrm{nonce} \, || \, \mathrm{counter}_i) \oplus \mathrm{plaintext}_i$ ...
6
Perhaps not of relevance if the question is meant in a purely theoretical (i.e. asymptotical) sense, but the CBC encryption mode is inherently sequential, while decryption can easily be performed in parallel.
6
Rather than using a form of encryption which is slow in one direction, you could use a proof-of-work function instead, as Ricky Demer pointed out in the comments. This allows you to freely tune the slowdown while still using normal, widely accepted encryption and authentication algorithms. For example, you could make the sender look for a partial preimage ...
6
Yes, it is possible that AES is slower than DES. There is no limit to the slowness of un-optimized software! Some factors that I have witnessed slowing a software AES implementation: not using tables for computation of the S-box(es) (an extreme example is the implementation there, when compiled with BACK_TO_TABLES undefined; another possibility is using a ...
6
Criteria for evaluation of Cryptography Algorithms: Having public specification (the only secret is the key). Patent status. What it aims at: block cipher (DES, AES..), cipher, message digest, MAC, proof of origin, signature, key establishment, TRNG, PRNG. Requirements for and limitations of the algorithm itself. Randomness requirements; Requirements to ...
5
Both curves have similar form and primes close to powers of two ($2^{192}-2^{64}-1$ and $2^{224} - 2^{96} + 1$), so you wouldn't expect large differences in performance – all things equal, P-224 might be anywhere from 30% to 60% slower due to the computational scaling of curve operations. However, in practice different implementations will have different ...
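Referring back to the CTR-mode answer above, here is a toy sketch of why the keystream blocks parallelize; SHA-256 stands in for the block function $E$ purely for illustration (a real implementation would use AES through a vetted library):

```python
import hashlib

key, nonce = b"demo key", b"demo nonce"

def keystream_block(i: int) -> bytes:
    # Depends only on (key, nonce, i), so blocks can be computed in any order,
    # or on as many cores as you have.
    return hashlib.sha256(key + nonce + i.to_bytes(8, "big")).digest()

plaintext = b"sixty-four bytes of example plaintext padded out to this length"
blocks = [plaintext[j:j + 32] for j in range(0, len(plaintext), 32)]
ciphertext = b"".join(
    bytes(p ^ k for p, k in zip(block, keystream_block(i)))
    for i, block in enumerate(blocks)
)
print(ciphertext.hex())
```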
Only top voted, non community-wiki answers of a minimum length are eligible
## $\stackrel{\sim }{H}\text{-modules}$
Last update: 13 November 2012
### 2.1. Weights
In view of the results in Section 1.18 we shall (for the remainder of this paper, except Sections 5–7 where we use the data in (1.1)) assume that $L=P$ in the definition of the group $X$ and $\stackrel{\sim }{H},$ see (1.2), (1.9), and (1.14). The Weyl group acts on
$T = \mathrm{Hom}(X,\mathbb{C}^*) = \left\{ \text{group homomorphisms } t : X \to \mathbb{C}^* \right\} \qquad\text{by}\qquad (wt)(X^{\lambda}) = t\left(X^{w^{-1}\lambda}\right).$
Let $M$ be a finite dimensional $\stackrel{\sim }{H}\text{-module}$ and let $t\in T\text{.}$ The $t\text{-weight}$ space and the generalized t-weight space of $M$ are
$M_t = \left\{ m\in M \mid X^{\lambda} m = t(X^{\lambda})\,m \text{ for all } X^{\lambda}\in X \right\} \qquad\text{and}\qquad M_t^{\text{gen}} = \left\{ m\in M \mid \text{for each } X^{\lambda}\in X,\ \left( X^{\lambda}-t(X^{\lambda}) \right)^{k} m = 0 \text{ for some } k\in\mathbb{Z}_{>0} \right\},$
respectively. Then
$M = \bigoplus_{t\in T} M_t^{\text{gen}} \qquad (2.2)$
is a decomposition of $M$ into Jordan blocks for the action of $ℂ\left[X\right],$ and we say that $t$ is a weight of $M$ if ${M}_{t}^{\text{gen}}\ne 0\text{.}$ Note that ${M}_{t}^{\text{gen}}\ne 0$ if and only if ${M}_{t}\ne 0\text{.}$ A finite-dimensional $\stackrel{\sim }{H}\text{-module}$
$M \text{ is calibrated if } M_t^{\text{gen}} = M_t, \text{ for all } t\in T.$
Remark. The term tame is sometimes used in place of the term calibrated particularly in the context of representations of Yangians, see [NTa1998]. The word calibrated is preferable since tame also has many other meanings in different parts of mathematics.
Let $M$ be a simple $\stackrel{\sim }{H}\text{-module.}$ As an $X\left(T\right)\text{-module,}$ $M$ contains a simple submodule and this submodule must be one-dimensional since all irreducible representations of a commutative algebra are one dimensional. Thus, a simple module always has ${M}_{t}\ne 0$ for some $t\in T\text{.}$
### 2.3. Central characters
The Pittie–Steinberg theorem, Theorem 1.17, shows that, as vector spaces,
$\stackrel{\sim}{H} = H\otimes \mathbb{C}[X] = H\otimes \mathbb{C}[X]^{W}\otimes \mathcal{K}, \qquad\text{where } \mathcal{K} = \mathbb{C}\text{-span}\left\{ X^{\lambda_w} \mid w\in W \right\},$
and $H$ is the Iwahori–Hecke algebra defined in (1.8). Thus $\stackrel{\sim }{H}$ is a free module over $Z\left(\stackrel{\sim }{H}\right)=ℂ{\left[X\right]}^{W}$ of rank $\text{dim}\left(H\right)·\text{dim}\left(𝒦\right)={\mid W\mid }^{2}\text{.}$ By Dixmier’s version of Schur’s lemma (see [44, Lemma 0.5.1]), $Z\left(\stackrel{\sim }{H}\right)$ acts on a simple $\stackrel{\sim }{H}\text{-module}$ by scalars and so it follows that every simple $\stackrel{\sim }{H}\text{-module}$ is finite dimensional of dimension $\le {\mid W\mid }^{2}\text{.}$ Theorem 2.12(d) below will show that, in fact, the dimension of a simple module is $\le \mid W\mid \text{.}$
Let $M$ be a simple $\stackrel{\sim }{H}\text{-module.}$ The central character of $M$ is an element $t\in T$ such that
$p\,m = t(p)\,m, \qquad\text{for all } m\in M,\ p\in \mathbb{C}[X]^{W} = Z(\stackrel{\sim}{H}).$
The element $t$ is only determined up to the action of $W$ since $t\left(p\right)=wt\left(p\right)$ for all $w\in W\text{.}$ Because of this, any element of the orbit $Wt$ is referred to as the central character of $M\text{.}$
Because $P=L$ in the construction of $X,$ a theorem of Steinberg [Ste1968-2, 3.15, 4.2, 5.3] tells us that the stabilizer ${W}_{t}$ of a point $t\in T$ under the action of $W$ is the reflection group
$W_t = \left\langle s_{\alpha} \mid \alpha \in Z(t) \right\rangle, \qquad\text{where } Z(t) = \left\{ \alpha\in R^{+} \mid t(X^{\alpha}) = 1 \right\}.$
Thus the orbit $Wt$ can be viewed in several different ways via the bijections
$Wt \leftrightarrow W/W_t \leftrightarrow \left\{ w\in W \mid R(w)\cap Z(t) = \varnothing \right\} \leftrightarrow \left\{ \text{chambers on the positive side of } H_{\alpha} \text{ for } \alpha\in Z(t) \right\}, \qquad (2.4)$
where the last bijection is the restriction of the map in (1.6). If the root system $Z\left(t\right)$ is generated by the simple roots ${\alpha }_{i}$ that it contains then $Wt$ is a parabolic subgroup of $W$ and $\left\{w\in W\phantom{\rule{0.2em}{0ex}}\mid \phantom{\rule{0.2em}{0ex}}R\left(w\right)\cap Z\left(t\right)\right\}$ is the set of minimal length coset representatives of the cosets in $W/{W}_{t}\text{.}$
### 2.5. Principal series modules
For $t\in T$ let $ℂ{v}_{t}$ be the one-dimensional $ℂ\left[X\right]\text{-module}$ given by
$X^{\lambda} v_t = t(X^{\lambda})\,v_t, \qquad\text{for } X^{\lambda}\in X.$
The principal series representation $M\left(t\right)$ is the $\stackrel{\sim }{H}\text{-module}$ defined by
$M(t) = \stackrel{\sim}{H} \otimes_{\mathbb{C}[X]} \mathbb{C}v_t = \mathrm{Ind}_{\mathbb{C}[X]}^{\stackrel{\sim}{H}}(\mathbb{C}v_t). \qquad (2.6)$
The module $M\left(t\right)$ has basis $\left\{{T}_{w}\otimes {v}_{t}\phantom{\rule{0.2em}{0ex}}\mid \phantom{\rule{0.2em}{0ex}}w\in W\right\}$ with $H$ acting by left multiplication.
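For instance (a rank-one sketch, not taken from the paper): in type $A_1$ one has $W=\{1,s_1\}$, so $M(t)$ is two-dimensional with basis $\{T_1\otimes v_t,\ T_{s_1}\otimes v_t\}$, and for regular $t$ the decomposition (2.8) below reads $M(t) = M(t)_t^{\text{gen}} \oplus M(t)_{s_1 t}^{\text{gen}}$ with each summand one-dimensional.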
If $w\in W$ and ${X}^{\lambda }\in X$ then the defining relation (1.10) for $\stackrel{\sim }{H}$ implies that
$X^{\lambda}(T_w\otimes v_t) = t(X^{w\lambda})\,(T_w\otimes v_t) + \sum_{u<w} a_u\,(T_u\otimes v_t), \qquad (2.7)$
where the sum is over $u<w$ in the Bruhat–Chevalley order and $a_u\in\mathbb{C}$. Let $W_t = \text{Stab}(t)$ be the stabilizer of $t$ under the $W$-action. It follows from (2.7) that the eigenvalues of $X$ on $M(t)$ are of the form $wt$, $w\in W$, and by counting the multiplicity of each eigenvalue we have
$M(t) = \bigoplus_{wt\in Wt} M(t)_{wt}^{\text{gen}} \qquad\text{where}\quad \dim\left(M(t)_{wt}^{\text{gen}}\right) = |W_t|, \text{ for all } w\in W. \qquad (2.8)$
In particular, if $t$ is regular (i.e., when ${W}_{t}$ is trivial), there is a unique basis $\left\{{v}_{wt}\phantom{\rule{0.2em}{0ex}}\mid \phantom{\rule{0.2em}{0ex}}w\in W\right\}$ of $M\left(t\right)$ determined by
$X^{\lambda}v_{wt} = (wt)(X^{\lambda})\,v_{wt}, \ \text{for all } w\in W \text{ and } \lambda\in P, \qquad v_{wt} = T_w\otimes v_t + \sum_{u<w} a_u\,(T_u\otimes v_t). \qquad (2.9)$
Let $t\in T\text{.}$ The spherical vector in $M\left(t\right)$ is
$1_t = \sum_{w\in W} q^{\ell(w)}\,T_w \otimes v_t. \qquad (2.10)$
Up to multiplication by constants this is the unique vector in $M\left(t\right)$ such that ${T}_{w}{1}_{t}={q}^{\ell \left(w\right)}{1}_{t}$ for all $w\in W\text{.}$ The following is due to Kato [Kat1981, Proposition 1.20 and Lemma 2.3],
Proposition 2.11. Let $t\in T$ and let ${W}_{t}$ be the stabilizer of $t$ under the $W\text{-action.}$
1. If $W_t = \{1\}$ and $v_{wt}$, $w\in W$ is the basis of $M(t)$ defined in (2.9) then $1_t = \sum_{z\in W} t(c_z)\,v_{zt}, \quad\text{where } c_z = \prod_{\alpha\in R(w_0 z)} \frac{q-q^{-1}X^{\alpha}}{1-X^{\alpha}}.$
2. The spherical vector ${1}_{t}$ generates $M\left(t\right)$ if and only if $t\left(\prod _{\alpha \in {R}^{+}}\left({q}^{-1}-q{X}^{\alpha }\right)\right)\ne 0\text{.}$
3. The module $M\left(t\right)$ is irreducible if and only if ${1}_{wt}$ generates $M\left(wt\right)$ for all $w\in W\text{.}$
Proof. The proof is accomplished in exactly the same way as done for the graded Hecke algebra in [KRa2002, Proposition 2.8]. The only changes which need to be made to [KRa2002] are Use ${T}_{i}\left(\sum _{w\in W}{q}^{\ell \left(w\right)}{T}_{w}\right)=q\left(\sum _{w\in W}{q}^{\ell \left(w\right)}{T}_{w}\right)$ and ${1}_{t}=\left(\sum _{w\in W}{q}^{\ell \left(w\right)}{T}_{w}\right){v}_{t}$ and the $\tau \text{-operators}$ defined in Proposition 2.14 for the proof of (a). (We have included this result in this section since it is really a result about the structure of principal series modules. Though the proof uses the $\tau \text{-operators,}$ which we will define in the next section, there is no logical gap here.) For the proof of (b) use the Steinberg basis $\left\{{X}^{{\lambda }_{y}}\phantom{\rule{0.2em}{0ex}}\mid \phantom{\rule{0.2em}{0ex}}y\in W\right\}$ and the determinant $\text{det}\left({X}^{{z}^{-1}{\lambda }_{y}}\right)$ from Theorem 1.17(b) in place of the basis $\left\{{b}_{y}\phantom{\rule{0.2em}{0ex}}\mid \phantom{\rule{0.2em}{0ex}}w\in W\right\}$ and the determinant used in [KRa2002]. $\square$
Part (b) of the following theorem is due to Rogawski [Rog1985, Proposition 2.3] and part (c) is due to Kato [Kat1981, Theorem 2.1]. Parts (a) and (d) are classical.
Theorem 2.12. Let $t\in T$ and $w\in W$ and define $P\left(t\right)=\left\{\alpha \in {R}^{+}\phantom{\rule{0.2em}{0ex}}\mid \phantom{\rule{0.2em}{0ex}}t\left({X}^{\alpha }\right)={q}^{±2}\right\}\text{.}$
1. If ${W}_{t}=\left\{1\right\}$ then $M\left(t\right)$ is calibrated.
2. $M\left(t\right)$ and $M\left(wt\right)$ have the same composition factors.
3. $M\left(t\right)$ is irreducible if and only if $P\left(t\right)=\varnothing \text{.}$
4. If $M$ is a simple $\stackrel{\sim }{H}\text{-module}$ with ${M}_{t}\ne 0$ then $M$ is a quotient of $M\left(t\right)\text{.}$
Proof. (a) follows from (2.8) and the definition of calibrated. Part (b) accomplished exactly as in [KRa2002, Proposition 2.8] and (c) is a direct consequence of the definition of $P\left(t\right)$ and Proposition 2.11. (d) Let ${m}_{t}$ be a nonzero vector in ${M}_{t}\text{.}$ If ${v}_{t}$ is as in the construction of $M\left(t\right)$ in (2.6) then, as $ℂ\left[X\right]\text{-modules,}$ $ℂ{m}_{t}\cong ℂ{v}_{t}\text{.}$ Thus, since induction is the adjoint functor to restriction there is a unique $\stackrel{\sim }{H}\text{-module}$ homomorphism given by $ϕ: M(t) ⟶ M, vt ⟼ mt.$ This map is surjective since $M$ is irreducible and so $M$ is a quotient of $M\left(t\right)\text{.}$ $\square$
### 2.13. The $\tau$ operators
The following proposition defines maps ${\tau }_{i}:\phantom{\rule{0.2em}{0ex}}{M}_{t}^{\text{gen}}\to {M}_{{s}_{i}t}^{\text{gen}}$ on generalized weight spaces of finite-dimensional $\stackrel{\sim }{H}\text{-modules}$ $M\text{.}$ These are “local operators” and are only defined on weight spaces ${M}_{t}^{\text{gen}}$ such that $t\left({X}^{{\alpha }_{i}}\right)\ne 1\text{.}$ In general, ${\tau }_{i}$ does not extend to an operator on all of $M\text{.}$
Proposition 2.14. Fix $i,$ let $t\in T$ be such that $t\left({X}^{{\alpha }_{i}}\right)\ne 1$ and let $M$ be a finite-dimensional $\stackrel{\sim }{H}\text{-module.}$ Define
$\tau_i:\ M_t^{\text{gen}} \longrightarrow M_{s_i t}^{\text{gen}}, \qquad m \longmapsto \left( T_i - \frac{q-q^{-1}}{1-X^{-\alpha_i}} \right) m.$
1. The map ${\tau }_{i}:\phantom{\rule{0.2em}{0ex}}{M}_{t}^{\text{gen}}\to {M}_{{s}_{i}t}^{\text{gen}}$ is well defined.
2. As operators on ${M}_{t}^{\text{gen}},$ ${X}^{\lambda }{\tau }_{i}={\tau }_{i}{X}^{{s}_{i}\lambda },$ for all ${X}^{\lambda }\in X\text{.}$
3. As operators on ${M}_{t}^{\text{gen}},$ ${\tau }_{i}{\tau }_{i}=\left(q-{q}^{-1}{X}^{{\alpha }_{i}}\right)\left(q-{q}^{-1}{X}^{-{\alpha }_{i}}\right)/\left(\left(1-{X}^{{\alpha }_{i}}\right)\left(1-{X}^{-{\alpha }_{i}}\right)\right)\text{.}$
4. Both maps $\tau_i: M_t^{\text{gen}}\to M_{s_i t}^{\text{gen}}$ and $\tau_i: M_{s_i t}^{\text{gen}}\to M_t^{\text{gen}}$ are invertible if and only if $t(X^{\alpha_i})\ne q^{\pm 2}$.
5. Let $1\le i\ne j\le n$ and let $m_{ij}$ be as in (1.7). Then $\underbrace{\tau_i\tau_j\tau_i\cdots}_{m_{ij}\ \text{factors}} = \underbrace{\tau_j\tau_i\tau_j\cdots}_{m_{ij}\ \text{factors}},$ whenever both sides are well defined operators on $M_t^{\text{gen}}$.
Proof. (a) The element $X^{\alpha_i}$ acts on $M_t^{\text{gen}}$ by $t(X^{\alpha_i})$ times a unipotent transformation. As an operator on $M_t^{\text{gen}}$, $1-X^{-\alpha_i}$ is invertible since it has determinant $(1-t(X^{-\alpha_i}))^{d}$ where $d = \dim(M_t^{\text{gen}})$. Since this determinant is nonzero, $(q-q^{-1})/(1-X^{-\alpha_i}) = (q-q^{-1})\times (1-X^{-\alpha_i})^{-1}$ is a well defined operator on $M_t^{\text{gen}}$. Thus the definition of $\tau_i$ makes sense. Since $(q-q^{-1})/(1-X^{-\alpha_i})$ is not an element of $\stackrel{\sim}{H}$ or $\mathbb{C}[X]$ it should be viewed only as an operator on $M_t^{\text{gen}}$ in calculations. With this in mind it is straightforward to use the defining relation (1.10) to check that $X^{\lambda}\tau_i m = X^{\lambda}\left( T_i - \frac{q-q^{-1}}{1-X^{-\alpha_i}} \right) m = \left( T_i - \frac{q-q^{-1}}{1-X^{-\alpha_i}} \right) X^{s_i\lambda} m = \tau_i X^{s_i\lambda} m$ and $\tau_i\tau_i m = \left( T_i - \frac{q-q^{-1}}{1-X^{-\alpha_i}} \right)\left( T_i - \frac{q-q^{-1}}{1-X^{-\alpha_i}} \right) m = \frac{(q-q^{-1}X^{\alpha_i})(q-q^{-1}X^{-\alpha_i})}{(1-X^{\alpha_i})(1-X^{-\alpha_i})}\, m,$ for all $m\in M_t^{\text{gen}}$ and $X^{\lambda}\in X$. This proves (a)-(c). (d) The operator $X^{\alpha_i}$ acts on $M_t^{\text{gen}}$ as $t(X^{\alpha_i})$ times a unipotent transformation. Similarly for $X^{-\alpha_i}$. Thus, as an operator on $M_t^{\text{gen}}$, $\det\left((q-q^{-1}X^{\alpha_i})(q-q^{-1}X^{-\alpha_i})\right) = 0$ if and only if $t(X^{\alpha_i}) = q^{\pm 2}$. Thus part (c) implies that $\tau_i\tau_i$, and each factor in this composition, is invertible if and only if $t(X^{\alpha_i})\ne q^{\pm 2}$. (e) Let $t\in T$ be regular. By part (a), the definition of the $\tau_i$, and the uniqueness in (2.9), the basis $\{v_{wt}\}_{w\in W}$ of $M(t)$ in (2.9) is given by $v_{wt} = \tau_w v_t, \qquad (2.15)$ where $\tau_w = \tau_{i_1}\dots \tau_{i_p}$ for a reduced word $w = s_{i_1}\dots s_{i_p}$ of $w$. Use the defining relation (1.10) for $\stackrel{\sim}{H}$ to expand the product of the $\tau_i$ and compute $v_{w_0 t} = \underbrace{\cdots\tau_i\tau_j\tau_i}_{m_{ij}\ \text{factors}} v_t = \underbrace{\cdots T_iT_jT_i}_{m_{ij}\ \text{factors}} v_t + \sum_{w<w_0} P_w\,(T_w\otimes v_t) = \underbrace{\cdots T_jT_iT_j}_{m_{ij}\ \text{factors}} v_t + \sum_{w<w_0} Q_w\,(T_w\otimes v_t),$ where $P_w$ and $Q_w$ are rational functions in the $X^{\lambda}$. By the uniqueness in (2.9), $t(P_w) = a_{w_0 w}(t) = t(Q_w)$ for all $w\in W$, $w\ne w_0$. Since the values of $P_w$ and $Q_w$ coincide on all generic points $t\in T$ it follows that $P_w = Q_w \text{ for all } w\in W,\ w \ne w_0. \qquad (2.16)$ Thus, $\underbrace{\cdots\tau_i\tau_j\tau_i}_{m_{ij}\ \text{factors}} = T_{w_0} + \sum_{w<w_0} P_w\,T_w = T_{w_0} + \sum_{w<w_0} Q_w\,T_w = \underbrace{\cdots\tau_j\tau_i\tau_j}_{m_{ij}\ \text{factors}},$ whenever both sides are well defined operators on $M_t^{\text{gen}}$. $\square$
Let $t\in T$ and recall that
$Z(t) = \left\{ \alpha\in R^{+} \mid t(X^{\alpha}) = 1 \right\} \qquad\text{and}\qquad P(t) = \left\{ \alpha\in R^{+} \mid t(X^{\alpha}) = q^{\pm 2} \right\}. \qquad (2.17)$
If $J\subseteq P\left(t\right)$ define
$\mathcal{F}^{(t,J)} = \left\{ w\in W \mid R(w)\cap Z(t) = \varnothing,\ R(w)\cap P(t) = J \right\}. \qquad (2.18)$
We say that the pair $\left(t,J\right)$ is a local region if ${ℱ}^{\left(t,J\right)}\ne \varnothing \text{.}$ Under the bijection (2.4) the set ${ℱ}^{\left(t,J\right)}$ maps to the set of chambers whose union is the set of points $x\in {𝔥}_{ℝ}^{*}$ which are
1. on the positive side of the hyperplanes ${H}_{\alpha }$ for $\alpha \in Z\left(t\right),$
2. on the positive side of the hyperplanes ${H}_{\alpha }$ for $\alpha \in P\left(t\right)\setminus J,$
3. on the negative side of the hyperplanes ${H}_{\alpha }$ for $\alpha \in J\text{.}$
See the picture in Example 4.11(d). In this way the local region $\left(t,J\right)$ really does correspond to a region in ${𝔥}_{ℝ}^{*}\text{.}$ This is a connected convex region in ${𝔥}_{ℝ}^{*}$ since it is cut out by half spaces in ${𝔥}_{ℝ}^{*}\cong {ℝ}^{n}\text{.}$ The elements $w\in {ℱ}^{\left(t,J\right)}$ index the chambers ${w}^{-1}C$ in the local region and, as $J$ runs over the subsets of $P\left(t\right),$ the sets ${ℱ}^{\left(t,J\right)}$ form a partition of the set $\left\{w\in W\phantom{\rule{0.2em}{0ex}}\mid \phantom{\rule{0.2em}{0ex}}R\left(w\right)\cap Z\left(t\right)=\varnothing \right\}$ (which, by (2.4), indexes the cosets in $W/{W}_{t}\text{).}$
Corollary 2.19. Let $M$ be a finite dimensional $\stackrel{\sim }{H}\text{-module.}$ Let $t\in T$ and let $J\subseteq P\left(t\right)\text{.}$ Then
$\dim\left(M_{wt}^{\text{gen}}\right) = \dim\left(M_{w't}^{\text{gen}}\right), \qquad\text{for } w,w'\in \mathcal{F}^{(t,J)}.$
Proof. Suppose $w, s_i w\in \mathcal{F}^{(t,J)}$. We may assume that $s_i w > w$. Then $\alpha = w^{-1}\alpha_i > 0$, $\alpha \notin R(w)$ and $\alpha \in R(s_i w)$. Now, $R(w)\cap Z(t) = R(s_i w)\cap Z(t)$ implies $t(X^{\alpha})\ne 1$, and $R(w)\cap P(t) = R(s_i w)\cap P(t)$ implies $t(X^{\alpha})\ne q^{\pm 2}$. Since $wt(X^{\alpha_i}) = t(X^{w^{-1}\alpha_i}) = t(X^{\alpha})\ne 1$ and $wt(X^{\alpha_i})\ne q^{\pm 2}$, it follows from Proposition 2.14(d) that the map $\tau_i: M_{wt}^{\text{gen}}\to M_{s_i wt}^{\text{gen}}$ is well defined and invertible. It remains to note that if $w, w'\in \mathcal{F}^{(t,J)}$, then $w' = s_{i_1}\dots s_{i_\ell}w$ where $s_{i_k}\dots s_{i_\ell}w\in \mathcal{F}^{(t,J)}$ for all $1\le k\le \ell$. This follows from the fact that $\mathcal{F}^{(t,J)}$ corresponds to a connected convex region in $\mathfrak{h}_{\mathbb{R}}^{*}$. $\square$
## Notes and References
This is an excerpt of the paper entitled Affine Hecke algebras and generalized standard Young tableaux written by Arun Ram in 2002, published in the Academic Press Journal of Algebra 260 (2003) 367-415. The paper was dedicated to Robert Steinberg.
Research supported in part by the National Science Foundation (DMS-0097977) and the National Security Agency (MDA904-01-1-0032).
# Is the Source Coding Theorem straightforward for uniformly distributed random variables?
Shannon's source coding theorem states the following:
$n$ i.i.d. random variables $X_1,\dots,X_n$ each with entropy $H(X)$ can be compressed into more than $n H(X)$ bits with negligible risk of information loss, as $n\to\infty$; conversely, if they are compressed into fewer than $n H(X)$ bits it is virtually certain that information will be lost.
I was thinking about the easy case when the $X_1,\dots,X_n$ are uniformly distributed on the integers $[1,n]$. Now suppose I want to transmit a value of $(X_1,\dots,X_n)$. Clearly, each $X_i$ has entropy $\log n$ and thus, if we were to send $o(n\log n)$ bits, we would lose information about some of the $X_i$, with high probability $1 - o(1)$.
The way to argue the Theorem for this case is that $\log n$ bits suffice to reconstruct any $X_i$, and conversely, if you receive, for example, only $(1/4)n\log n$ bits, then you would need to correctly guess $(3/4)n\log n$ bits, which happens with probability of only $2^{-(3/4)n\log n}$. Is my reasoning correct?
## 1 Answer
The theorem you quote is not stated formally, so it is in fact impossible to prove it. That said, the idea behind the source coding theorem is that a variable of entropy $H(X)$ behaves (roughly) as if it was uniformly distributed on $2^{H(X)}$ values. When the distribution of $X$ is uniform, the theorem becomes trivial, once stated formally. So it's a good idea for you to look up a formal statement of the theorem, and then verify that it indeed holds for uniformly distributed random variables.
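To make the asker's counting argument explicit for the uniform case (a sketch, assuming exactly $(1/4)n\log n$ bits are transmitted): there are $2^{n\log n}$ equally likely messages but at most $2^{(1/4)n\log n}$ distinct codewords, so any decoder recovers at most a fraction $2^{(1/4)n\log n}/2^{n\log n} = 2^{-(3/4)n\log n}$ of the messages correctly, which tends to $0$ as $n\to\infty$.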
Geometrization of radial particles in non-empty space complies with tests of General Relativity
Булыженков И.Э. (Bulyzhenkov I.E.) Geometrization of radial particles in non-empty space complies with tests of General Relativity // Journal of Modern Physics. 2012. 3(10): 1465. doi: 10.4236/jmp.2012.310181
Categories: Research, Author index
### Abstract
Curved space-time 4-interval of any probe particle does not contradict flat non-empty 3-space, which, in turn, assumes the global material overlap of elementary continuous particles or the nonlocal Universe with universal Euclidean geometry. Relativistic particle time is a chain function of the particle's speed and this time differs from the proper time of a motionless local observer. Equal passive and active relativistic energy-charges are employed to match the universal free fall and the Principle of Equivalence in non-empty (material) space, where continuous radial densities of elementary energy-charges obey local superpositions and mutual penetrations. The known planetary perihelion precession, the radar echo delay, and the gravitational light bending can be explained quantitatively by the singularity-free metric without departure from Euclidean spatial geometry. The flatspace precession of non-point orbiting gyroscopes is non-Newtonian due to the Einstein dilation of local time within the Earth's radial energy-charge rather than due to unphysical warping of Euclidean space.
Keywords: Euclidean Material Space; Metric Four-Potentials; Radial Masses; Energy-To-Energy Gravitation; Nonlocal Universe
###### Related reports:
## Tuesday, October 8, 2013
J. Lim, R. Salakhutdinov and A. Torralba, Transfer Learning by Borrowing Examples for Multiclass Object Detection, NIPS, 2011.
and optionally:
K. Saenko, B. Kulis, M. Fritz and T. Darrell, Adapting Visual Category Models to New Domains, ECCV, 2010.
Ian Endres, Vivek Srikumar, Ming-wei Chang, and Derek Hoiem, Learning Shared Body Plans. CVPR, 2012.
1. The goal of this paper is to compensate for the lack of training data in some classes using data from other classes that have similar examples. They do this transfer on an example by example basis (reason about examples individually) but regularize at both the individual and class level. I think this group lasso regularization is helping avoid picking outlier examples that happened to occur on the right side of the decision boundary during the optimization. The paper extends this to include transformed examples and examples from other datasets. I'll post the pros and cons in separate posts to help start threads for more discussion.
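As a rough illustration of what a group-lasso-style regularizer of this kind looks like (a minimal NumPy sketch with made-up parameter names; it is not the paper's exact objective):

```python
import numpy as np

def borrowing_penalty(w, class_groups, lam_example=0.1, lam_class=0.1):
    """w[j] ~ how much example j is borrowed; class_groups collects examples by class.

    The l1 term pushes individual borrowing weights toward zero, while the
    per-class l2 (group) term encourages borrowing or dropping whole classes.
    """
    l1 = lam_example * np.abs(w).sum()
    group = lam_class * sum(np.linalg.norm(w[idx]) for idx in class_groups)
    return l1 + group

w = np.array([0.9, 0.8, 0.0, 0.1, 0.0, 0.0])
class_groups = [np.array([0, 1]), np.array([2, 3]), np.array([4, 5])]
print(borrowing_penalty(w, class_groups))
```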
2. Pros 1. They make minimal assumptions about the type of classifier or image features used. They only require that the approach can be written in an empirical risk minimization form. The details need to be reworked as far as the transformations are concerned (they use the latent update of the DPM to pick the transformation parameters from a discrete set).
3. Pros 2. They are able to leverage multiple datasets in a way which beats naive concatenation of data. I think that this is really cool as it helps decentralize data collection. It gives further strength to their approach as it helps them argue that their method is of use even if people started collecting more data for all the categories. In other words, people should use their method when adding more data to a dataset rather than claim that adding data alone solves the problem.
1. I disagree that borrowing is the "right" way to aggregate datasets, but only because it's difficult to say what we really want from these detectors. The authors' experiment (Table 4) demonstrates that borrowing from another dataset improves testing performance on the original dataset. This is distinct from improving testing performance on both datasets, as it was mentioned multiple times that PASCAL and SUN09 have very different distributions.
This capability is definitely valuable, but I think it is a form of overfitting. I'm not saying that overfitting is bad, though. If you're training a detector for a tall robot, is there really a need for it to be able to recognize objects from low angles?
2. I think overfitting might look more like this in robots: the robot perfectly recognizes the 20 objects in the room from any angle, but is completely clueless if a very similar object is shown to it in a different room, or if we try to get it to learn a 21st object.
Coming back to the question of whether the algorithm is overfitting: if it is able to learn by borrowing examples from a dataset with a very different distribution and perform better on the original dataset, isn't that really an indication of its ability to generalize well?
3. I think they mentioned somewhere that they specifically want to keep the bias of a dataset, so the mechanism is not really like merging datasets, but more like extending some dataset with similar images of what it already has, that is, keeping the dataset peculiarities.
4. I agree with Srivatsan that the algorithm has the ability to generalize well and it doesn't look like it is overfitting. The authors are adding data in a very structured way that makes more intuitive sense than just throwing in hundreds of images into the dataset and hoping that it will improve the performance.
5. I believe an algorithm generalizes well when you can train it on a dataset and expect it to perform well on other datasets. This seems redundant considering we are using disjoint test and training sets, but the key point is that both sets have similar underlying distributions. The experiment shows that training on dataset A with borrowed examples from B gives a boost when evaluating on A. This doesn't mean that it will boost performance on B.
6. I think that the object detectors themselves are more general because they train on the borrowed images, which might not have the same peculiarities as the images in the original dataset. For the bookshelf example that the paper gives, it borrows from the shelf class which doesnt have the peculiarity of having books (and therefore a lot of vertical gradients). Because they are merging data from another dataset, I think the borrowed images are going to help generalize the object detector.
7. The authors mention "we also note that borrowing similar examples tends to introduce some confusions between related object categories," and that "setting corresponding regularization parameters $\lambda_1$ and $\lambda_2$ to high enough values ... would amount to borrowing all examples, which would result in learning a 'generic' object detector." Summing this up, it seems like they are just purposefully getting confused between categories, but they do a good job of mixing together the right categories. I guess this is a little like when adults tell kids, "Zebra, it's like a horse, but with stripes."
For some reason though, this borrowing method does not sit well with me. Maybe it's because I'm thinking back to the scene classification paper where they transformed all images to be as similar to the test image as possible, and then picked the 1NN. In that paper, their transformations were more complex than simple translation and affine, and many extra objects were hallucinated into the various scenes to get the images to match. My feeling is that this algorithm might start morphing images like crazy, and then the borrowed images would look like the original class, but slightly different. The "slightly different" part might end up hurting performance in the end.
However, my worries are probably unnecessary, since the authors prove that they have high performance. I wonder what would happen if they allowed the extra transformations that the other paper I mentioned above used. I also wonder how much boost in performance this method offers when even more data is added in. My feeling is that this method, in general, will only offer a small boost in performance. Here, they're adding ~2%, and that seems small to me.
4. Cons 1. They have cited past work which has tried to share data from entire classes (not on a per example basis but an entire class at once). They have not compared with these approaches in their experiments. I think this comparison is important because of the massive increase in the number of parameters when dealing with individual examples. They will terribly overfit to data without heavy regularization.
Looks like there is more to it. They run only 1 iteration of their alternate minimization strategy to prevent adding too many examples. They also initialize the optimization in favor of not sharing. Are these tricks that save them from the perils of such a large parameter space? Is per example the right way to go or should we reason about groups of examples at a time. Will the per example reasoning scale to large datasets with 100,000 categories.
1. I wonder why they limited their test to the top 100 classes of SUN09 by number of examples. This technique seems most useful in improving performance in the long tail of the dataset yet if you look at Figure 4b and 4c, they stop just as things are getting interesting. At what point does this plot get worse as the number of examples in a class continues to decrease? Is this a complexity issue, an overfitting issue on too few examples, what?
2. That also bugged me. Where are the results on the tail end? It's clear that there is still that classic 1/f curve there, and the less common classes still tend to have just a dozen or so examples. I can imagine they get nonsense at a certain point. I wonder where that point is.
-- Matt Klingensmith
5. Cons 2. This approach is adding entire examples into the train set for a given class. This is made worse by them thresholding W at the end. This will reduce the ability to discriminate between these sharing classes (at least in the case where no transformation is involved). Doesn't this defeat the purpose of hard negative mining? They test their approach and the baseline on a randomly sampled subset of negatives. This partially hides the loss in discrimination power, which might actually be significant.
The authors argue (without empirical results) that the confusion between similar looking classes is better than the existing confusion between totally unrelated objects. But we are solving one problem while aggravating another which we still don't know how to solve. Do we have to copy entire examples? Can't we solve both problems at the same time?
Figure 4(a) says that the problem is far from solved by example transfer alone. Is this the right way forward?
1. I would really have liked to see some statistics on the increase (and decrease) in confusion between classes. From the statistics collected on the pascal dataset (with dpm detectors) that we saw in class, it seems like confusion across visually similar looking classes is a more statistically significant problem than confusion between dissimilar classes. I think this point needed to be argued better than mentioned in passing. Without these statistics, I'm not convinced that borrowing is a good idea in general - rather than a short term hack we should use till we get bigger training sets for these problem classes.
2. I guess the approach was intended to be a short-term hack. Looks like they wanted a method to manage the few-training-images problem while doing their research and came up with this. And as it grew more mature, they added affine transformations and some ideas of where else it could be used :)
3. I think the approach is well formulated (not hacky). A natural dataset will always have few training images for most of the object classes.
But the broader question of example sharing vs parameter sharing or something else exists beyond the details of this paper. Are there reasons to favor example sharing over other approaches?
4. I think the authors' view is valid, in that confusion between similar looking classes is definitely tolerable (usually). Though it might be statistically more significant, I would expect that it isn't functionally important to distinguish between classes that are visually very close. In any case object recognition doesn't have to be solved in one stage - a second layer of fine grained classification can always be used if it is really necessary to distinguish between a large van and a small bus.
5. However, on the point of sharing of entire instances, I agree with Aravindh that it would be better to share mid-level representations. Sharing whole instances of object appearances doesn't seem to be really addressing the root of the problem and I believe it is somewhere in the middle levels that things start looking similar/dissimilar. Ideally if feature representations could be modeled into hierarchical layers (I have no clue how), maybe we might expect to see things like 'attributes' pop off at different levels. In that scenario, similar looking objects can share representations at the lower levels, but might start to diverge as we move up the hierarchy of feature representations.
6. Confusion between similar classes seems to be a failing of semantic categories. It really is absurd to attempt to train a computer to differentiate chairs from couches. First, people can't do this consistently. Second, what would be the use of such a system?
7. I think there is value in differentiating similar looking objects. If I confuse a wolf for a dog then things can go terribly wrong. If I confuse a snake for a rope then life is going to be much worse. I'm not arguing in favour of differentiating every noun from everything other noun but some discrimination is required for a real system.
8. @Aravindh - Getting confused between large dogs and wolves or suspicious looking ropes and snakes is something that happens even to humans sometimes. Our perception usually resolves this by being cautious and assuming the worst (not that snakes and wolves can hurt robots though), which translates into associating attributes like 'dangerous' or 'poisonous'. This isn't really a problem of discriminative learning - it can be partially solved by resolving context and partially by associating a higher bias towards objects that are associated with critical attributes such as the ones mentioned above.
9. Whether we want to discriminate between similar categories is really a question of task. The distinction between sofa and armchair might be really important to a furniture-selling robot, whereas the difference between car and truck might not be as important.
10. I have a feeling that their proposed system could be more useful if it is applied to the attribute learning in the paper we read previously. Transferring entire object images from similar categories seems less convincing to me, but if somehow we can transfer similar attributes from other categories, the attribute classifier might perform better and overall help the object recognition. I think this is more similar to the way humans infer things from their experiences.
11. I conjecture that humans are able to differentiate similar looking objects through logic and context (explicit reasoning in the prefrontal cortex in place of neurons firing in the ventral stream). But as they see more and more examples they integrate this differentiation capacity into the ventral stream itself. I think that discrimination is important but there are more ways to do it. The paper, I feel, is compromising on discrimination more than required though.
6. Cons 3. The authors have ignored, and therefore not compared with, approaches that use parameter sharing as a way to provide statistical strength for rare classes. The parameter sharing approaches often learn a mid-level representation that uses data from multiple classes - e.g. fully connected two-layer neural networks, deep neural networks, the attribute-based approach from Tuesday's class. This parameter sharing technique is apparently very different from the regularization-based technique more commonly used in the papers cited by the authors. Both have the benefit of reducing the sample complexity - while this paper copies examples, the mid-level features untangle the object manifold and make the feature space more separable, leading to a reduced sample complexity.
1. Yes, but you cannot use a transformed image of category A to train models for category B. Something similar to your idea is mentioned in this paper (http://pub.ist.ac.at/~chl/papers/tommasi-accv2012.pdf) where they try to learn a shared parameter space and a dataset-specific parameter space; though that is about transferring examples across datasets to handle dataset bias, not across different categories. But you can imagine doing something like that.
7. Comment 1. The authors fix w_i^c to be 1 for all examples i \in c. It will be interesting to see what the system does if these are allowed to change. The initialization can have these as 1 and the optimization could be allowed to change them. Can this discard bad training examples? or learn a fine grained concept which is a subset of the training data. Can this be used to sequentially learn mixture models?
1. Wouldn't this just give you a degenerate solution where all the positive examples have their weights set to 0? Similarly, if you let the background example weights get set by the optimization, the risk over an empty set of examples is exactly 0.
2. The group lasso constraint on w* is trying to force everything to share from everything. The risk term on the other hand is trying to not share from anything at all. If we set w to 0, then w* becomes 1 and the regularization penalty will become huge. That won't be the optimum after removing the non-degeneracy constraints.
8. In this paper, the authors present a novel technique of *borrowing* examples from one class and using them in another class for the purpose of detection. To do this, they learn a set of weights for each class on each other class which determines how well features from one class transfer to another. They do this using a simple regression technique. Then, borrowed examples are transformed into the "canonical" 2D poses of the class in question, and a classifier is trained on these images instead.
Their results show much improvement over simpler detectors, especially for rare classes. They even show that they can borrow images from one dataset and use them in another.
I think the biggest strength of this paper is the idea of *scoring* individual examples of a class based on how similar they are to another. I think this leads to a deeper understanding of the underlying visual structure of the classes, and how classes relate to one another. The fact that they get more useful classification rates is just a happy byproduct of this knowledge.
I have some doubts about their transformations. Is aspect ratio + scale really good enough to represent useful transformations? Perhaps the technique could be extended to include the "label transformations" we looked at in class earlier.
-- Matt Klingensmith
9. The authors present their methodology for transfer learning between object classes. Its main difference from previous methods is that they learn both which classes and examples to borrow from. They formulate this learning process as learning the individual weights corresponding to "how much" to borrow an example.
The intuition and prediction of their method are that information from other classes can help inform knowledge about the classes intended to be predicted. This doesn't strike me as particularly novel, and is more about compensating for the fact that object detectors take a very narrow view of the visual world. They show improved performance of object detectors using information from other classes.
It seems like this boils down to instead of training a "car" detector, they train e.g. a "hopefully car, maybe bus, maybe van" detector. They mix information from other classes, despite the fact that other classes are "names". I think a more principled approach of information sharing utilizes attributes or parts. Again, their approach is compensating for the fact that object detectors alone often don't quite make sense in the first place, as there is so much more inter-class reasoning and information sharing needed to perceive a scene with any modicum of success. The only compelling argument for sharing classes between object detectors (in a world with much larger CV datasets) is that from certain viewpoints, objects from different classes can look nearly identical. (side view of couch can look like a side view of a chair)
Their argument could've been supported if they included a matrix of weights across all classes. They write a verbal story that inevitably conforms to their own biased assumptions of what their method does, whereas a less biased full quantitative analysis would've been more convincing.
10. I tend to think the term "borrowing" in this paper could actually be called "semi-supervised learning". Regarding what I discussed for the last paper about different classes sharing visual similarity and the unbalanced inter-object-class similarity levels, this paper is one example that emphasizes and makes use of this point.
The strength of this paper is that it presents an object detection method that is able to draw power from other classes or even other datasets. In addition, they also introduce a deformation scheme for transformation to handle visual differences caused by viewpoint in an organized (parametric) way.
But my biggest concern about this paper is that it is likely to further aggravate the already existing confusion between similar object classes. This is exactly the point that makes this paper much less convincing, at least to me.
1. I agree with Zhiding's concern. This paper only gives the average precision of the performance but doesn't show recall or the confusion matrix. I suspect that the recall is actually quite bad, and if we looked at the confusion matrix we might find more confusion between similar classes like sofa and chair. However, for most applications in robotics this confusion is not that bad, because the objects serve the same function.
2. I second Zhiding. I think that this would destroy the distinction between animals such as cats and dogs. Also, how many similar objects really differ visually by something as simple as an affine transformation?
3. I think the main question to be answered here is what are we doing this for. Is it just for object detection in a functional sense, say for robots, in which case it might not be too important to discriminate between a sofa and a chair, or is it to actually understand the objects fully and name them? And as mentioned in previous comments, we can always have a second stage of classification for discrimination between similar classes. The proposed algorithm might be a good starting point for multi-class object detection by adding more data from similar classes. However, I do agree that the results section does not give all the details to trust this algorithm completely in terms of both performance and scalability.
4. I agree with Diva - why do we want a perfect detector if the data we have do not suffice to handle fine-grained classification? I believe the concern is important for some tasks, yet for other tasks, like finding a surface to sit on, why bother with that kind of disambiguation? When people are jaywalking, their only concern is whether a "moving object" is coming; who cares whether it is a Dodge Viper or a Toyota Corolla?
5. I would assume the transformation helps a bit there. But until we see the confusion matrix between classes with and without the transformation, it is hard to say.
11. I invite everyone to also take a look at their previous cvpr2011 paper "Learning to Share Visual Appearance for Multiclass Object Detection" (http://people.csail.mit.edu/torralba/publications/sharingCVPR2011.pdf). This is a somewhat related work where they introduced the idea of sharing across rigid classifier templates. More importantly, they learn a tree to organize hundreds of object categories. The tree structure defines how the sharing is carried out: the root node is global which is shared across all categories, the mid-level nodes are super-categories (animal, vehicle...) and the leaves are object categories. They also use a CRP (Chinese Restaurant Process) to learn a tree without having to specify the number of super-categories.
1. That is an example of sharing model parameters instead of sharing training examples across different classes. Actually there is another paper about sharing parameters by learning a discriminative basis over all model parameters with sparsity constraints:
http://www.cs.berkeley.edu/~rbg/papers/dsparselets.pdf
12. In this paper, the authors present a state-of-the-art algorithm based on a novel idea - "borrowing" examples - for multi-class object detection. The authors not only present learning with borrowed examples but also give a method for borrowing transformed examples. The transformation method, which I find really interesting, is not entirely clear to me, however.
The experiments look good. But I think they should also show and compare recall and the confusion matrix, because it is necessary to see how much this algorithm confuses similar classes. Also, when comparing borrowing examples from other classes, it would be better to compare the algorithm with and without transformation as well.
13. This paper is quite interesting, in the sense that it tries to borrow examples from other categories to boost the performance. It actually confirms the idea of sharing between object categories: the ontology (list of categories we can recognize) is not flat, the classes out there are not independent. The classes can have a lot to share with each other. For example if you want to distinguish between cat and dog, between tiger and dog, and between cat and tiger, the weight it learned should be very different. And intuitively the weights learned from separating cats from dogs can be quite similar (or useful) for separating tigers from dogs. This is because animals have a tree-like taxonomy. The other interesting stuff is about the functionality: armchairs and sofa look similar because they want to serve a similar function, they utilized this fact, too. Neat.
My opinion about this paper is that, at the end of the day, is it actually training to detect object classes any longer, or is it detecting something else? Like the armchair vs. sofa thing - isn't the detector detecting the physical property, "a flat surface people can sit on"? Plus the sharing across categories looks very familiar to me, like "attributes". In this case we do not have a very good name for the detector trained, but it is actually a good way to start an attribute discovery step given the object categories we already have. Maybe we can get a "furry" detector by starting with dog and borrowing examples from all the other animals with fur? How about starting with sheep and learning a "white and furry" detector?
1. This comment has been removed by the author.
2. I think the key difference between attributes and this method is that for identifying attributes, we need different sets of features (shape, color, texture, parts, etc.) whereas in this case, the focus is getting as many similar images as possible (with the same set of features). I agree that attributes (in some sense) try to generalize over multiple classes. But the attribute "red" need not correspond to similar looking objects (like the example of car and wine that Abhinav gave). And the focus of this paper is to try to get more examples from classes whose "shapes" are similar (since they use HOG features).
3. Well if it comes down to features, then I believe everything can be added to the classification problem here... Attributes can be also defined as shapes, like 'heart-shaped', 'round-shaped', 'star-shaped'.
14. When we mine for hard examples, we're looking for things that are misclassified (e.g. a chair wrongly called "couch") and then placing more weight on these examples and training again. We thought mining for hard examples was part of what worked in the DPM paper. This paper does what seems to be the opposite: when we misclassify a chair as "couch," we relabel that chair as a positive example of "couch" and train again. Can these two strategies be used in tandem? If they cannot be combined, it seems to say something disturbing: do X, improve performance; do NOT X, improve performance.
1. I think the approach proposed in this paper is sort of dual to hard mining of negative examples.
15. Object recognition has come a long way with the development of sophisticated features, but the features still don't seem to be good enough. I think the philosophy of this paper is more inclined towards accepting that your feature space might not allow a clean separation of highly similar classes. Given that, the best thing to do would be to recognize that borrowing examples from visually similar classes tends to make object detectors better, but in a principled way rather than just clubbing classes together. Mining for hard negatives is useful only when negative examples occur very close to the (prospective) decision boundary, but can still be separated from the positive instances. It is really not the algorithm's fault that an example labeled 'chair' appears bang in the middle of a cluster of couches in the feature space. I believe that really smart algorithms are the ones which have some ability to reinterpret the human-given labels in some way so as to improve performance on some counts.
16. This paper proposes an interesting method which aims at borrowing training examples from neighbour classes. Here are things I like about this paper:
1. I like the idea of borrowing training examples from other classes for multi-class object detection, as the authors point out, there are few examples for certain classes due to the long-tail distribution.
2. The formulation seems to be intuitive and captures the trade-off between sharing and discriminativeness.
My concerns are two-fold, one for the sharing method and one for the experiments:
For the method itself, even though the proposed formulation seems intuitive, why did the authors terminate the optimization procedure after one iteration? I would like to see the effect on performance of optimizing this criterion versus the number of iterations. Also, the step of post-pruning the sharing weights seems crucial to me; a comparison with the version without pruning would be interesting as well. In the experiments section, the authors say that they binarize all weights obtained from the learning procedure without explaining why, which confuses me a little - why not stick with the continuous weight values? From the perspective of experiments, it would be great to have some figures showing the confusion between the shared classes before and after sharing, which I think would give us more insight into this sharing mechanism.
1. I agree with you on the binarization part. The entire formulation uses soft indicator variables, and then all of a sudden we have a strong binarization above threshold 0.6
I would like to see the examples especially from the truck,van,car category (+9% mAP). Is it an artifact of the dataset, or is their approach really that good?
17. I believe the method presented in this paper is an intuitive and interesting way to make the learning process more directed toward a specific task, where this task is represented by the original dataset.
An interesting implication of the results in this paper is that there is a tradeoff between generalizability and performance, perhaps more obviously stated as generalizability vs. specificity. If our problem was truly well-defined, why would using more data be less effective than using a selected set of data? There are probably some effects due to bad examples, but is that enough to create the strong trend shown in the paper?
18. This paper proposes an interesting idea of augmenting existing data from similar data, which could be very useful for helping to deal with the inherent dataset bias of almost all datasets out there. Unfortunately, as many other comments mentioned, they only evaluate on the top 100 most well represented categories in the SUN dataset, and while they show some improvement, it would be nice if they could also show some improvement on some of the poorly represented classes as well.
On a more philosophical note, I feel that this paper is fixing a problem that really is an artifact of poor representation of images (namely assigning discrete language based labels to images for categorization). This is of course useful since it's hard for humans to interpret images without this type of labeling (but it still feels like its fixing an artifact of discrete labeling).
19. I think this paper has a really intuitive way to go about the problem of object detection. The algorithm seems to perform well on the subset of 100 classes. But the scalability is definitely questionable.
As a lot of people have stated above, this paper does seem to contradict the concept of hard mining. But is that the goal of the method? To be able to identify every single object in the real world? If the goal is to overfit the real world data, then all that we have to do is take all possible images from the world, keep adding more as you keep seeing new objects and just use k-NN. But if the goal is to get some level understanding about the real world with the limited data that we have, I think this is a very good method to try to choose what category you want to learn, what examples you want to choose and how much weight you should give to these examples. Since we don't have strong-enough features to discriminate everything in the real world, it is okay to borrow examples and get one level of classification done and this paper has given us a step forward towards this approach.
1. There was a previous paper on scene classification, where training scenes were transformed to look like the test image (some objects were hallucinated and whatnot), and then the authors used 1NN. I wonder how this algorithm would perform for object detection - morphing the images (including the bounding box), picking 1NN, and then getting a new estimate of the bounding box in the current image by looking at how the bounding box was morphed with the training image.
Side note: I think it's cool that the authors get the algorithm to pick which classes to borrow from, and that the classes it ends up borrowing from are similar to what we would intuit.
20. I like the idea of transfer learning for example transfer. I see that many people have concerns over transferring entire examples as opposed to more "useful parts of the example" like mid-level representations. While that certainly seems like a good and more extendible idea, I think that for many simple cases, the entire example transfer is much simpler and intuitive.
I particularly like Eqn(4). It shows that this approach tries to tighten all the class parameters by regularizing on an averaged model.
Things I would like to see:
- I had to flip the paper once more to make sure that this was correct. There are no baselines!!
- I would have liked to see an iterative borrowing approach. The authors could justify the choice of not using one, by showing that
Saturday, September 21, 2013
# How to extend the web publish process without modifying project contents
When automating web publishing for Visual Studio projects in many cases your first step will be to create a publish profile for the project in VS. There are many cases in which you cannot, or would not like to do this. In this post I’ll show you how you can take an existing project and use an MSBuild file to drive the publish process. In that I’ll also show how you can extend the publish process without modifying either the project or any of its contents.
Before we get too far into this, if you are not familiar with how to publish your VS web projects from the command line you can read our docs at http://www.asp.net/mvc/tutorials/deployment/visual-studio-web-deployment/command-line-deployment.
When you publish a project from the command line using a publish profile you typically use the syntax below.
msbuild .\MyProject.csproj /p:VisualStudioVersion=11.0 /p:DeployOnBuild=true /p:PublishProfile=<profile-name-or-path>
In this snippet we are passing in a handful of properties. VisualStudioVersion dictates which version of MSBuild targets are used during the build. See http://sedodream.com/2012/08/19/VisualStudioProjectCompatabilityAndVisualStudioVersion.aspx for more details on that. DeployOnBuild=true injects the publish process at the end of build. PublishProfile can either be the name of a publish profile which the project contains or it can be the full path to a publish profile. We will use PublishProfile with that second option, the full path.
So we need to pass in the full path to a publish profile, which typically is a .pubxml file. A publish profile is just an MSBuild file. When you pass in PublishProfile and DeployOnBuild=true, then the publish profile is Imported into the build/publish process. It will supply the publish properties needed to perform the publish.
Let’s see how that works. I have a sample project, MySite, which does not have any publish profiles created for it. I have created a publish profile, ToFileSys.pubxml, in another folder that will be used though. The contents of that file are below.
ToFileSys.pubxml
<Project ToolsVersion="4.0" xmlns="http://schemas.microsoft.com/developer/msbuild/2003">
<PropertyGroup>
<WebPublishMethod>FileSystem</WebPublishMethod>
<ExcludeApp_Data>False</ExcludeApp_Data>
<publishUrl>C:\temp\Publish\01</publishUrl>
<DeleteExistingFiles>False</DeleteExistingFiles>
</PropertyGroup>
</Project>
This publish profile will publish to a local folder. I just created this file in VS with a different project and then just copied it to the folder that I needed, and removed properties which are only used for the inside of VS experience. We can publish the MySite project using this profile with the command below.
msbuild MySite.csproj
/p:VisualStudioVersion=11.0
/p:DeployOnBuild=true
/p:PublishProfile=C:\data\my-code\publish-samples\publish-injection\build\ToFileSys.pubxml
When you execute this the file specified in PublishProfile will be included into the build process.
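Since a .pubxml is imported like any other MSBuild file, you could also drop a custom target straight into the profile itself. As a rough sketch (the target name and message are placeholders, not part of the sample project), something like this inside ToFileSys.pubxml would run after the publish finishes:

<Target Name="MyAfterPublish" AfterTargets="WebPublish">
  <!-- Runs once the WebPublish target completes; replace the Message with your own steps -->
  <Message Text="Publish to $(publishUrl) finished" Importance="high" />
</Target>

The rest of this post shows how to keep even that kind of customization out of the profile by driving everything from a separate MSBuild file.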
### Taking it up a level
Now let’s see how we can take this to the next level by having a single script that will be used to publish more than one project using this technique.
In the sample files (which you can find links for at the end of the post). I have a solution with two web projects, MySite and MyOtherSite. Neither of these projects have any publish profiles created. I have created a script which will build/publish these projects which you can find at build\publish.proj in the samples. The contents of the file are shown below.
publish.proj
<?xml version="1.0" encoding="utf-8"?>
<Project ToolsVersion="12.0" xmlns="http://schemas.microsoft.com/developer/msbuild/2003" DefaultTargets="BuildProjects">
<!--
This file is used in two ways.
1. Drive the build and publish process
2. It is used by the publish process during the build of MySite to configure/extend publish
Note: 1. Is kicked off by the use on the cmd line/build server. 2. Is invoked by this script itself.
This file is injected into the publish process via the PublishProfile property.
-->
<PropertyGroup>
<VisualStudioVersion Condition=" '$(VisualStudioVersion)'=='' ">11.0</VisualStudioVersion>
<Configuration Condition=" '$(Configuration)'=='' ">Release</Configuration>
<!-- Location for build output of the project -->
<OutputRoot Condition=" '$(OutputRoot)'=='' ">$(MSBuildThisFileDirectory)..\BuildOutput\</OutputRoot>
<!-- Root for the publish output -->
<PublishFolder Condition=" '$(PublishFolder)'==''">C:\temp\Publish\Output\</PublishFolder>
</PropertyGroup>

<ItemGroup>
  <ProjectsToBuild Include="$(MSBuildThisFileDirectory)..\MySite\MySite.csproj">
    <AdditionalProperties>
      VisualStudioVersion=$(VisualStudioVersion);
      Configuration=$(Configuration);
      OutputPath=$(OutputRoot);
      WebPublishMethod=FileSystem;
      publishUrl=$(PublishFolder)MySite\;
      DeployOnBuild=true;
      DeployTarget=WebPublish;
      PublishProfile=$(MSBuildThisFileFullPath)
    </AdditionalProperties>
  </ProjectsToBuild>
  <ProjectsToBuild Include="$(MSBuildThisFileDirectory)..\MyOtherSite\MyOtherSite.csproj">
    <AdditionalProperties>
      VisualStudioVersion=$(VisualStudioVersion);
      Configuration=$(Configuration);
      OutputPath=$(OutputRoot);
      WebPublishMethod=FileSystem;
      publishUrl=$(PublishFolder)MyOtherSite\;
      DeployOnBuild=true;
      DeployTarget=WebPublish;
      PublishProfile=$(MSBuildThisFileFullPath)
    </AdditionalProperties>
  </ProjectsToBuild>
</ItemGroup>

<Target Name="BuildProjects">
  <MSBuild Projects="@(ProjectsToBuild)" />
</Target>

<!-- ***************************************************************************************
     The targets below will be called during the publish process. These targets are injected
     into the publish process for each web project. These targets will not have access to any
     new values for properties/items from the targets above this.
     *************************************************************************************** -->
<Target Name="AfterWebPublish" AfterTargets="WebPublish">
  <Message Text="Inside AfterWebPublish" Importance="high"/>
</Target>
</Project>

This file is pretty simple, it declares some properties which will be used for the build/publish process. Then it declares the projects to be built with an item list named ProjectsToBuild. When declaring ProjectsToBuild I use the AdditionalProperties metadata to specify MSBuild properties to be used during the build process for each project. Let's take a closer look at those properties.

<AdditionalProperties>
  VisualStudioVersion=$(VisualStudioVersion);
  Configuration=$(Configuration);
  OutputPath=$(OutputRoot);
  WebPublishMethod=FileSystem;
  publishUrl=$(PublishFolder)MySite\;
  DeployOnBuild=true;
  DeployTarget=WebPublish;
  PublishProfile=$(MSBuildThisFileFullPath)
</AdditionalProperties>
I’ll explain all the properties now. VisualStudioVersion, Configuration and OutputPath are all used for the build process. The other properties are related to publishing. If you want to publish from the file system those properties (WebPublishMethod, publishUrl, DeployOnBuild, and DeployTarget) must be set. The most important property there is PublishProfile.
PublishProfile is set to $(MSBuildThisFileFullPath), which is the full path to publish.proj. This will instruct the build process of that project to import publish.proj when its build/publish process is started. It's important to note that a "new instance" of the file will be imported. What that means is that the imported version of publish.proj won't have access to any dynamic properties/items created in publish.proj. The reason why PublishProfile is specified there is so that we can extend the publish process from within publish.proj itself. publish.proj has a target, AfterWebPublish, which will be executed after each project is published. Let's see how this works. We can execute the publish process with the command below.

msbuild .\build\publish.proj /p:VisualStudioVersion=11.0

After executing this command the tail end of the result is shown in the image below. In the image above you can see that the MyOtherSite project is being published to the specified location in publish.proj and the AfterWebPublish target is executed as well.

In this post we've seen how we can use an MSBuild file as a publish profile, and how to extend the publish process using that same file. You can download the samples at https://dl.dropboxusercontent.com/u/40134810/blog/publish-injection.zip. You can find the latest version in my publish-samples repository at publish-injection.

Sayed Ibrahim Hashimi | http://msbuildbook.com | @SayedIHashimi

msbuild | Visual Studio | Visual Studio 2012 | web | Web Publishing Pipeline Saturday, September 21, 2013 7:57:03 PM (GMT Daylight Time, UTC+01:00) |

Wednesday, June 05, 2013

# How to publish a VS web project with a .publishSettings file

The easiest way to publish a Visual Studio web project from the command line is to follow the steps below.

1. Open VS
2. Right click on the Web project and select Publish
3. Import your .publishSettings file (or manually configure the settings)
4. Save the profile (i.e. .pubxml file)
5. From the command line, publish passing in PublishProfile

For more details on this you can see ASP.NET Command Line Deployment. This is pretty simple and very easy, but it does require that you manually create the .pubxml file. In some cases you'd just like to download the .publishSettings file from your hosting provider and use that from the command line. This post will show you how to do this.

In order to achieve the goal we will need to extend the build/publish process. There are two simple ways to do this:

1. Place a .wpp.targets file in the same directory as the web project, or
2. Pass an additional property indicating the location of the .wpp.targets file.

I'll first go over the technique where you place the file directly inside of the directory where the project is. After that I'll show you how to use this file from a well known location.

One way to do this is to create a .wpp.targets file. This .wpp.targets file will be imported into the build/publish process automatically. This .targets file will enable us to pass in PublishSettingsFile as an MSBuild property. It will then read the .publishsettings file and output the properties needed to execute the publish process.

### .wpp.targets in the project directory

Let's take a look at the .targets file and then we will discuss its contents. Below you will find the contents of the full file.
<?xml version="1.0" encoding="utf-8"?>
<Project ToolsVersion="4.0" xmlns="http://schemas.microsoft.com/developer/msbuild/2003">
  <!-- When using this file you must supply /p:PublishSettingsFile as a parameter and /p:DeployOnBuild=true -->
  <PropertyGroup Condition=" Exists('$(PublishSettingsFile)')">
    <!-- These must be declared outside of a Target because they impact the Import Project flow -->
    <WebPublishMethod>MSDeploy</WebPublishMethod>
    <DeployTarget>WebPublish</DeployTarget>
    <PipelineDependsOn>
      GetPublishPropertiesFromPublishSettings;
      $(PipelineDependsOn);
    </PipelineDependsOn>
  </PropertyGroup>

  <Target Name="GetPublishPropertiesFromPublishSettings" BeforeTargets="Build" Condition=" Exists('$(PublishSettingsFile)')">
    <PropertyGroup>
      <_BaseQuery>/publishData/publishProfile[@publishMethod='MSDeploy'][1]/</_BaseQuery>
      <!-- This value is not in the .publishSettings file and needs to be specified, it can be overridden with a cmd line parameter -->
      <!-- If you are using the Remote Agent then specify this as RemoteAgent -->
      <MSDeployPublishMethod>WMSVC</MSDeployPublishMethod>
    </PropertyGroup>
    <ItemGroup>
      <_MSDeployXPath Include="WebPublishMethod">
        <Query>$(_BaseQuery)@publishMethod</Query>
      </_MSDeployXPath>
      <_MSDeployXPath Include="MSDeployServiceURL">
        <Query>$(_BaseQuery)@publishUrl</Query>
      </_MSDeployXPath>
      <_MSDeployXPath Include="SiteUrlToLaunchAfterPublish">
        <Query>$(_BaseQuery)@destinationAppUrl</Query>
      </_MSDeployXPath>
      <_MSDeployXPath Include="DeployIisAppPath">
        <Query>$(_BaseQuery)@msdeploySite</Query>
      </_MSDeployXPath>
      <_MSDeployXPath Include="UserName">
        <Query>$(_BaseQuery)@userName</Query>
      </_MSDeployXPath>
      <_MSDeployXPath Include="Password">
        <Query>$(_BaseQuery)@userPWD</Query>
      </_MSDeployXPath>
    </ItemGroup>
    <XmlPeek XmlInputPath="$(PublishSettingsFile)" Query="%(_MSDeployXPath.Query)" Condition=" Exists('$(PublishSettingsFile)')">
      <Output TaskParameter="Result" PropertyName="%(_MSDeployXPath.Identity)" />
    </XmlPeek>
  </Target>
</Project>
You can place this file in the root of your project (next to the .csproj/.vbproj file) with the name {ProjectName}.wpp.targets.
This .targets file is pretty simple. It defines a couple properties and a single target, GetPublishPropertiesFromPublishSettings. In order to publish your project from the command line you would execute the command below.
msbuild.exe MyProject /p:VisualStudioVersion=11.0 /p:DeployOnBuild=true /p:PublishSettingsFile=<path-to-.publishsettings>
Here is some info on the properties that are being passed in.
The VisualStudioVersion property indicates that we are using the VS 2012 targets. More info on this at http://sedodream.com/2012/08/19/VisualStudioProjectCompatabilityAndVisualStudioVersion.aspx.
DeployOnBuild, when true it indicates that we want to publish the project. This is the same property that you would normally pass in.
PublishSettingsFile, this is a new property which the .targets file recognizes. It should be set to the path of the .publishSettings file.
The properties at the top of the .targets file (WebPublishMethod and DeployTarget) indicate what type of publish operation is happening. The default values for those are MSDeploy and WebPublish respectively. You shouldn’t need to change those, but if you do you can pass them in on the command line.
The GetPublishPropertiesFromPublishSettings target uses the XmlPeek task to read the .publishsettings file. It then emits the properties required for publishing.
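For reference, the attributes those XPath queries pull out live on the publishProfile element of a typical .publishSettings file. A trimmed-down example (all values are placeholders) looks roughly like this:

<publishData>
  <publishProfile profileName="MySite - Web Deploy"
                  publishMethod="MSDeploy"
                  publishUrl="waws-prod-xxx-001.publish.azurewebsites.windows.net:443"
                  msdeploySite="MySite"
                  userName="$MySite"
                  userPWD="not-a-real-password"
                  destinationAppUrl="http://MySite.azurewebsites.net" />
</publishData>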
OK this is great and all, but it still requires that an additional file (the .wpp.targets file) be placed in the directory of the project. It is possible to avoid this as well. Let's move on to that.
### .wpp.targets from a well known location
If you’d like to avoid having to place the .wpp.targets file in the directory of the project you can easily do this. Place the file in a well known location and then execute the same msbuild.exe call adding an additional property. See the full command below.
msbuild.exe MyProject /p:VisualStudioVersion=11.0 /p:DeployOnBuild=true /p:PublishSettingsFile=<path-to-.publishsettings> /p:WebPublishPipelineCustomizeTargetFile=<path-to.targets-file>
Once you do this you no longer need to create the Publish Profile in VS if you want to publish from the command line with a .publishsettings file.
FYI you can find the complete sample at https://github.com/sayedihashimi/publish-samples/tree/master/PubWithPublishSettings.
Sayed Ibrahim Hashimi | http://msbuildbook.com | @SayedIHashimi
msbuild | web | Web Publishing Pipeline Wednesday, June 05, 2013 6:01:34 PM (GMT Daylight Time, UTC+01:00) |
Thursday, May 09, 2013
# Book published and now in stock!
I’m happy to say that my Supplement to Inside the Microsoft Build Engine book (co-author William Bartholomew) has now been published. In fact it’s already in stock and ready to be shipped by Amazon.com.
This book is a small addition (118 pages) to the previous book, Inside the Microsoft Build Engine 2nd edition. It has a small price too, MSRP is $12.99 but it's selling on Amazon.com for $8.99! In this book we cover the updates to MSBuild, Team Build and Web Publishing in Visual Studio 2012. The foreword was written by Scott Hanselman, and you can read the entire foreword online.
Check out how thin the supplement is in comparison to the 2nd edition #ThinIsIn.
If you already own the 2nd edition then you’ll love this update.
##### Chapter 1: What's new in MSBuild
1. Visual Studio project compatibility between 2010 and 2012
3. NuGet
5. Cookbook
##### Chapter 2: What's new in Team Build 2012
1. Installation
2. Team Foundation Service
3. User interface (UI) enhancements
4. Visual Studio Test Runner
5. Pausing build definitions
6. Batching
7. Logging
8. Windows Workflow Foundation 4.5
9. Cookbook
##### Chapter 3: What's new in Web Publishing
1. Overview of the new Publish Web Dialog
2. Building web packages
3. Publish profiles
4. Database publishing support
5. Profile-specific web.config transforms
6. Cookbook
The book has been available in e-book form for a few weeks. Just long enough for us to get our first review. It was 5 stars :).
Please let us know what you think of this book!
Sayed Ibrahim Hashimi | http://msbuildbook.com | @SayedIHashimi
msbuild | MSDeploy | Team Build | web | Web Publishing Pipeline Thursday, May 09, 2013 6:33:20 AM (GMT Daylight Time, UTC+01:00) |
Wednesday, March 06, 2013
# How to publish one web project from a solution
Today on twitter @nunofcosta asked me roughly the question “How do I publish one web project from a solution that contains many?”
The issue that he is running into is that he is building from the command line and passing the following properties to msbuild.exe.
/p:DeployOnBuild=true
/p:PublishProfile='siteone - Web Deploy'
When you pass these properties to msbuild.exe they are known as global properties. These properties are difficult to override and are passed to every project that is built. So if you have a solution with multiple web projects, each web project is built with the same set of properties, and when each project is built the publish process for that project will start and it will expect to find a file named siteone – Web Deploy.pubxml in the folder Properties\PublishProfiles\. If the file doesn't exist the operation may fail.
Note: If you are interested in using this technique for an orchestrated publish see my comments at http://stackoverflow.com/a/14231729/105999 before doing so.
So how can we resolve this?
Let’s take a look at a sample (see links below). I have a solution, PublishOnlyOne, with the following projects.
1. ProjA
2. ProjB
ProjA has a publish profile named ‘siteone – Web Deploy’, ProjB does not. When trying to publish this you may try the following command line.
msbuild.exe PublishOnlyOne.sln /p:DeployOnBuild=true /p:PublishProfile='siteone - Web Deploy' /p:Password=%password%
See publish-sln.cmd in the samples.
If you do this, when it's time for ProjB to build it will fail because there's no siteone - Web Deploy profile for that project. Because of this, we cannot pass DeployOnBuild. Instead here is what we need to do.
1. Edit ProjA.csproj to define another property which will conditionally set DeployOnBuild
2. From the command line pass in that property
I edited ProjA and added the following property group before the Import statements in the .csproj file.
<PropertyGroup>
<DeployOnBuild Condition=" '$(DeployProjA)'!='' ">$(DeployProjA)</DeployOnBuild>
</PropertyGroup>
Here you can see that DeployOnBuild is set to whatever value DeployProjA is as long as it’s not empty. Now the revised command is:
msbuild.exe PublishOnlyOne.sln /p:DeployProjA=true /p:PublishProfile='siteone - Web Deploy' /p:Password=%password%
Here instead of passing DeployOnBuild, I pass in DeployProjA which will then set DeployOnBuild. Since DeployOnBuild wasn’t passed to ProjB it will not attempt to publish.
You can find the complete sample at https://github.com/sayedihashimi/sayed-samples/tree/master/PublishOnlyOne.
Sayed Ibrahim Hashimi | @SayedIHashimi | http://msbuildbook.com/
MSDeploy | web | Web Deployment Tool | Web Development | Web Publishing Pipeline Wednesday, March 06, 2013 2:48:41 AM (GMT Standard Time, UTC+00:00) |
Sunday, January 06, 2013
# Command line web project publishing
With the release of VS2012 we have improved the command line publish experience. We’ve also made all the web publish related features available for VS2010 users in the Azure SDK.
The easiest way to publish a project from the command line is to create a publish profile in VS and then use that. To create a publish profile in Visual Studio right click on the web project and select Publish. After that it will walk you through creating a publish profile. VS web publish profiles support the following publish methods.
• Web Deploy – The preferred method. You can publish to any host/server which has Web Deploy configured
• Web Deploy Package - Used to create a package which can be published offline at a later time
• File system - Used to publish to a local/network folder
• FTP - Used to publish to any FTP server
• FPSE – Used to publish to a server using Front Page Server Extensions
Command line publishing is only supported for Web Deploy, Web Deploy Package, and File System. If you think we should support command line scenarios for other publish methods the best thing to do would be to create a suggestion at http://aspnet.uservoice.com. If there is enough interest we may work on that support.
Let’s first take a look at how you can publish a simple Web project from the command line. I have created a simple Web Forms project and want to publish that. I’ve created a profile named SayedProfile. In order to publish this project I will execute the following command.
In this command you can see that I have passed in these properties;
• DeployOnBuild – when true the build process will be extended to perform a publish as well
• PublishProfile - name of the publish profile (you can also provide a full path to a .pubxml file)
• VisualStudioVersion – Special property see comments below
You may not have expected the VisualStudioVersion property here. This is a new property which was introduced with VS 2012. It is related to how VS 2010 and VS 2012 are able to share the same projects. Take a look at my previous blog post at http://sedodream.com/2012/08/19/VisualStudioProjectCompatabilityAndVisualStudioVersion.aspx. If you are building the project file, instead of the solution file then you should always set this property.
If you are publishing using the .sln file you can omit the VisualStudioVersion property. That property will be derived from the version of the solution file itself. Note that there is one big difference when publishing using the project or solution file. When you build an individual project the properties you pass in are given to that project alone. When you build from the command line using the solution file, the properties you have specified are passed to all the projects. So if you have multiple web projects in the same solution it would attempt to publish each of the web projects.
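As a sketch, a solution-level publish (solution name assumed) looks like this, and every web project in the solution will pick up these properties:

msbuild MySolution.sln /p:DeployOnBuild=true /p:PublishProfile=SayedProfile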
FYI in case you haven’t already heard I’m working on an update to my book. More info at msbuildbook.com
Sayed Ibrahim Hashimi | @SayedIHashimi
msbuild | MSBuild 4.0 | MSDeploy | web | Web Deployment Tool Sunday, January 06, 2013 2:56:37 AM (GMT Standard Time, UTC+00:00) |
Monday, August 20, 2012
# Web Deploy (MSDeploy) how to sync a folder
Today I saw the following question on StackOverflow MSDeploy - Deploying Contents of a Folder to a Remote IIS Server and decided to write this post to answer the question.
Web Deploy (aka MSDeploy) uses a provider model and there are a good number of providers available out of the box. To give you an example of some of the providers; when syncing an IIS web application you will use iisApp, for an MSDeploy package you will use package, for a web server webServer, etc. If you want to sync a local folder to a remote IIS path then you can use the contentPath provider. You can also use this provider to sync a folder from one website to another website.
The general idea of what we want to do in this case is to sync a folder from your PC to your IIS website. Calls to msdeploy.exe can be a bit verbose so let's construct the command one step at a time. We will use the template below.
msdeploy.exe -verb:sync -source:contentPath="" -dest:contentPath=""
We use the sync verb to describe what we are trying to do, and then use the contentPath provider for both the source and the dest. Now let's fill in what those values should be. For the source value you will need to pass in the full path to the folder that you want to sync. In my case the files are at C:\temp\files-to-pub. For the dest value you will give the path to the folder as an IIS path. In my case the website that I'm syncing to is named sayedupdemo so the IIS path that I want to sync is 'sayedupdemo/files-to-pub'. That gives us.
msdeploy.exe –verb:sync -source:contentPath="C:\temp\files-to-pub" -dest:contentPath='sayedupdemo/files-to-pub'
For the dest value we have not given any parameters indicating what server those commands are supposed to be sent to. We will need to add those parameters. The parameters which typically need to be passed in are:
• ComputerName – this is the URL or computer name which will handle the publish operation
• AuthType – the authType to be used. Either NTLM or Basic. For WMSvc this is typically Basic, for Remote Agent Service this is NTLM
In my case I’m publishing to a Windows Azure Web Site. So the values that I will use are:
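Roughly, pulling them from the full command shown below, those values are:
• ComputerName – https://waws-prod-blu-001.publish.azurewebsites.windows.net/msdeploy.axd?site=sayedupdemo
• UserName – $sayedupdemo
• Password – the userPWD value from the .publishSettings file
• AuthType – Basic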
All of these values can be found in the .publishSettings file (can be downloaded from Web Site dashboard from WindowsAzure.com). For the ComputerName value you will need to append the name of your site to get the full URL. In the example above I manually added ?site=sayedupdemo, this is the same name as shown in the Azure portal. So now the command which we have is.
msdeploy.exe
-verb:sync
-source:contentPath="C:\temp\files-to-pub"
-dest:contentPath='sayedupdemo/files-to-pub'
,ComputerName="https://waws-prod-blu-001.publish.azurewebsites.windows.net/msdeploy.axd?site=sayedupdemo"
,UserName='$sayedupdemo'
,Password='thisIsNotMyRealPassword'
,AuthType='Basic'

OK we are almost there! In my case I want to make sure that I do not delete any files from the server during this process. So I will also add -enableRule:DoNotDeleteRule. So our command is now:

msdeploy.exe
-verb:sync
-source:contentPath="C:\temp\files-to-pub"
-dest:contentPath='sayedupdemo/files-to-pub'
,ComputerName="https://waws-prod-blu-001.publish.azurewebsites.windows.net/msdeploy.axd?site=sayedupdemo"
,UserName='$sayedupdemo'
,Password='thisIsNotMyRealPassword'
,AuthType='Basic'
-enableRule:DoNotDeleteRule
At this point, before I execute this command I'll first execute it passing -whatif. This will give me a summary of what operations will be performed without actually causing any changes. When I do this the result is shown in the image below.
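For reference, the dry run is simply the same command with -whatif appended at the end:

msdeploy.exe -verb:sync -source:contentPath="C:\temp\files-to-pub" -dest:contentPath='sayedupdemo/files-to-pub',ComputerName="https://waws-prod-blu-001.publish.azurewebsites.windows.net/msdeploy.axd?site=sayedupdemo",UserName='$sayedupdemo',Password='thisIsNotMyRealPassword',AuthType='Basic' -enableRule:DoNotDeleteRule -whatif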
After I verified that the changes are all intentional, I removed the -whatif and executed the command. After that the local files were published to the remote server. Now that I have synced the files, each publish after this will result in only changed files being published.
If you want to learn how to sync an individual file you can see my previous blog post How to take your web app offline during publishing.
### dest:auto
In the question, dest:auto was used. You can use that as well, but you will have to pass the IIS app name in as a parameter, and it will be used in place of the folder path. Below is the command.
msdeploy.exe
-verb:sync
-source:contentPath="C:\temp\files-to-pub"
-dest:auto
,ComputerName="https://waws-prod-blu-001.publish.azurewebsites.windows.net/msdeploy.axd?site=sayedupdemo"
,UserName='$sayedupdemo'
,Password='thisIsNotMyRealPassword'
,AuthType='Basic'
-enableRule:DoNotDeleteRule
-setParam:value='sayedupdemo',kind=ProviderPath,scope=contentPath,match='^C:\\temp\\files-to-pub$'
Thanks,
Sayed Ibrahim Hashimi @SayedIHashimi
MSDeploy | Visual Studio | web | Web Deployment Tool Monday, August 20, 2012 4:08:11 AM (GMT Daylight Time, UTC+01:00) |
Sunday, August 19, 2012
# Visual Studio project compatability and VisualStudioVersion
One of the most requested features of Visual Studio 2012 was the ability to open projects in both VS 2012 as well as VS 2010 (requires VS 2010 SP1). In case you haven’t heard we did implement that feature. You may be wondering how we were able to do this and how this may impact you.
If you open the .csproj/.vbproj for a Web Project created in VS2010 you will see the following import statement.
<Import Project="$(MSBuildExtensionsPath32)\Microsoft\VisualStudio\v10.0\WebApplications\Microsoft.WebApplication.targets" />

When you open this project in VS 2012 there are a few changes made to your project file to ensure that it can be opened in both VS 2010 SP1 and VS 2012. One of the changes made to the project when it is first loaded in VS 2012 is to add the following to replace that import statement.

<PropertyGroup>
  <VisualStudioVersion Condition="'$(VisualStudioVersion)' == ''">10.0</VisualStudioVersion>
  <VSToolsPath Condition="'$(VSToolsPath)' == ''">$(MSBuildExtensionsPath32)\Microsoft\VisualStudio\v$(VisualStudioVersion)</VSToolsPath>
</PropertyGroup>
<Import Project="$(VSToolsPath)\WebApplications\Microsoft.WebApplication.targets" Condition="'$(VSToolsPath)' != ''" />

We removed the hard-coded 10.0 and instead used the property VisualStudioVersion. When building in Visual Studio 2012 this value will always be 11.0, but for VS 2010 it doesn't exist. That is why we defaulted it to 10.0 above. There are some scenarios where building from the command line will require you to set this property explicitly. Before we get there let me explain how this property gets set (in this order):

1. If VisualStudioVersion is defined as an environment variable/global MSBuild property, that is used.
   • This is how VS and the VS developer command prompt set this value
2. Based on the file format version of the .sln file (toolset used is sln file format version – 1)
   • To simplify this statement, the .sln file will build with VisualStudioVersion set to the version of VS which created the .sln file.
3. Choose default
   • 10.0 if VS 2010 is installed
   • Highest-versioned sub-toolset version installed

For #2, when you are building a .sln file the value of VisualStudioVersion will be the Format Version found in the .sln file minus 1. The important thing to note here is that if you build a .sln file it will build with the value of VisualStudioVersion corresponding to the version of VS which created the .sln file. So if you create a .sln file in VS 2012 and you always build that .sln file, the value for VisualStudioVersion will be 11.0. In many cases if you build the .sln file you are good.

What if you are building .csproj/.vbproj files without going through a .sln file? If you build a web project from the command line (not the developer prompt) then the value for VisualStudioVersion used will be 10.0. That is an artifact of the properties which I showed above. In this case you should pass this in as an MSBuild property. For example:

msbuild.exe MyAwesomeWeb.csproj /p:VisualStudioVersion=11.0

In this case I'm passing in the property explicitly. This will always override any other mechanism to determine the value for VisualStudioVersion. If you are using the MSBuild task in a build script, then you can specify the property either in the Properties attribute or the AdditionalProperties attribute. See my previous blog post on the difference between Properties and AdditionalProperties. If you encounter any funny behavior when building/publishing and you notice that the wrong .targets files are being imported then you may need to specify this property.

Sayed Ibrahim Hashimi | @SayedIHashimi

msbuild | Visual Studio | Visual Studio 11 | Visual Studio 2010 | Visual Studio 2012 | web | Web Publishing Pipeline Sunday, August 19, 2012 10:06:56 PM (GMT Daylight Time, UTC+01:00) |

Friday, June 15, 2012

# Downloading the Visual Studio Web Publish Updates

I have written a few posts recently describing our updated web publish experience. This new experience is available for both Visual Studio 2010 as well as Visual Studio 2012 RC. You can use the links below to download these updates in the Azure SDK download. Below are links for both versions. The Web Publish experience is chained into VS 2012 RC so if you have installed VS 2012 RC with the Web features then you already have these features.

Thanks,
Sayed Ibrahim Hashimi @SayedIHashimi

asp.net | Deployment | Visual Studio | Visual Studio 11 | Visual Studio 2010 | web | Web Deployment Tool | Web Publishing Pipeline Friday, June 15, 2012 8:30:40 PM (GMT Daylight Time, UTC+01:00) |

# Visual Studio 2010 Web Publish Updates

Last week we rolled out some updates for our Visual Studio 2010 Web Publishing Experience. This post will give you an overview of the new features which we released. In the coming weeks there will be more posts getting into more details regarding individual features. You can get these updates in the Windows Azure SDK for Visual Studio 2010. When you download that package you will also get the latest tools for Azure development. The new high level features include the following.

• Updated Web Publish dialog
• Support to import publish profiles (.publishSettings files)
• Support to configure EF Code First migrations during publish
• Support to create web packages in the publish dialog
• Publish profiles now a part of the project and stored in version control by default
• Publish profiles are now MSBuild files
• Profile-specific web.config transforms

# Overview

When you right click on your Web Application Project (WAP) you will now see the new publish dialog. On this tab you can import a .publishSettings file, which many web hosts provide, and you can also manage your publish profiles. If you are hosting your site on Windows Azure Web Sites then you can download the publish profile on the dashboard of the site using the Download publish profile link. After you import this publish profile you will be brought to the Connection tab automatically. On this tab you can see all the server configuration values which are needed for your client machine to connect to the server. Typically you don't have to worry about the details of these values.

Next you'll go to the Settings tab. On the Settings tab you can set the build configuration which should be used for the publish process, the default value here is Release. There is also a checkbox to enable you to delete any files on the server which do not exist in the project. Below that checkbox you will see a section for databases. The sample project shown has an Entity Framework Code First model, named ContactsContext, and it uses Code First Migrations to manage the database schema. If you have any non-EF Code First connection strings in web.config then those databases will show up as well, but the support for incrementally publishing the schema for those has not yet been finalized. We are currently working on that. You can visit my previous blog entry for more info on that.

If you imported a .publishSettings file with a connection string then that connection string would automatically be inserted in the textbox/dropdown for the connection string. If you did not then you can use the … button to create a connection string with the Connection String Builder dialog or you can simply type/paste in a connection string. For the EF Code First contexts you will see the Execute Code First Migrations checkbox. When you check this, when your site is published the web.config will be transformed to enable the Code First migrations to be executed the first time that the context is accessed.

Now you can move to the Preview tab. When you first come to the Preview tab you will see a Start Preview button. Once you click this button you will see the file operations which would be performed once you publish.
Since this site has never been published all the file operations are Add, as you can see in the image below. The other Action values include Update and Delete. Once you are ready to publish you can click the Publish button. You can monitor the progress of the publish process using the Output Window. If your publish profile had a value for the Destination URL then the site will automatically be opened in the default browser after the publish has successfully completed.

# Publish Profiles

One of the other changes in the publish experience is that publish profiles are now stored as a part of your project. They are stored under the folder Properties\PublishProfiles (for VB projects it's My Project\PublishProfiles) and the extension is .pubxml. You can see this in the image below. These .pubxml files are MSBuild files, and you can modify these files in order to customize the publish process (a rough sketch of what such a profile contains appears a bit further below). If you do not want the publish profile to be checked into version control you can simply exclude it from the project. The publish dialog will look at the files in the PublishProfiles folder, so you will still be able to publish using that profile. You can also leverage these publish profiles to simplify publishing from the command line. For example you can use the following syntax to publish from the command line.

msbuild.exe WebApplication2.csproj /p:DeployOnBuild=true;PublishProfile="pubdemo - Web Deploy";Password={INSERT-PASSWORD}

# Resources

If you have any questions please feel free to directly reach out to me at sayedha(at){MicrosoftDOTCom}.

Sayed Ibrahim Hashimi @SayedIHashimi

asp.net | Microsoft | MSDeploy | Visual Studio 2010 | web | Web Deployment Tool | Web Publishing Pipeline Friday, June 15, 2012 8:07:30 PM (GMT Daylight Time, UTC+01:00) |

Thursday, June 07, 2012

# ASP.NET providers and SQL Azure

We have two sets of ASP.NET providers which currently exist: the ASP.NET SQL providers, and the ASP.NET Universal Providers. In VS 2010 the SQL providers were the only providers used for our project templates. In VS 2012 we have switched to using the Universal Providers. One of the drawbacks of the SQL providers is that they leverage SQL Server DB objects which are not available in SQL Azure.

In our updated web publish experience we have an Update Database checkbox which can be used to incrementally publish the database to the destination database. In this case, if the source connection string is used by the ASP.NET SQL providers and you are publishing to SQL Azure, then you will see the following message on the dialog. Note: you may see the Update Database checkbox disabled; please visit http://sedodream.com/2012/06/07/VSPublishDialogUpdateDatabaseDialogDisabled.aspx for more info on why.

The publish dialog is letting you know that the SQL providers are not compatible with SQL Azure and helps you convert to using the Universal Providers. After you install the Universal Providers the web.config entry will be commented out and new entries will be inserted for the Universal Providers. Your existing database will not be impacted; we'll create a new connection string pointing to a new database. If you had any data in the SQL Providers database you will have to re-create those objects in the new database.

If you have any questions please feel free to directly reach out to me at sayedha(at){MicrosoftDOTCom}.
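Relating back to the Publish Profiles section above: a .pubxml profile is plain MSBuild, essentially a PropertyGroup of publish settings. The sketch below is illustrative only; the values and even the exact set of properties are placeholders, and the profiles Visual Studio generates contain more than this. It is meant only to give a feel for why these files are easy to hand-edit and keep in version control.

<Project ToolsVersion="4.0" xmlns="http://schemas.microsoft.com/developer/msbuild/2003">
  <PropertyGroup>
    <!-- A typical Web Deploy style profile; every value below is a placeholder -->
    <WebPublishMethod>MSDeploy</WebPublishMethod>
    <MSDeployServiceURL>https://myhost.example.com:8172/msdeploy.axd</MSDeployServiceURL>
    <DeployIisAppPath>Default Web Site/pubdemo</DeployIisAppPath>
    <UserName>deployUser</UserName>
    <!-- The password is normally supplied at publish time (dialog or /p:Password=...) rather than stored here -->
  </PropertyGroup>
</Project>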
Sayed Ibrahim Hashimi @SayedIHashimi

Visual Studio | Visual Studio 11 | Visual Studio 2010 | web | Web Publishing Pipeline Thursday, June 07, 2012 11:41:46 PM (GMT Daylight Time, UTC+01:00) |

# VS Publish dialog Update Database dialog disabled

If you have tried out our new Web Publish experience in Visual Studio you may have noticed that the Update Database checkbox is disabled. See the image below.

The intended behavior of this checkbox is to enable you to incrementally publish your database schema from the source (the connection string in web.config) to the destination (whatever connection string is in the text box). The difference between an incremental publish and a typical publish is that for incremental publishes only changes are transferred from source to destination. With a full publish, the first time that you publish your DB schema everything is created, and the next time that you try to publish you will receive an error because it tries to re-create existing DB objects.

The functionality of the Update Database checkbox leverages an MSDeploy provider. We were hoping to complete that provider and give it to hosters in time for the release, but we were unable to do so. We are working on completing the provider and partnering with hosters to install it in time for the launch of Visual Studio 2012 RTM. In the meantime, if you need to publish your DB schema you can use the Package/Publish SQL tab (caution: the DB publishing here is not incremental). If you are going to use the PP/SQL tab to publish to SQL Azure then there are some special considerations that you will need to take into account. You can learn more about those by visiting http://msdn.microsoft.com/en-us/library/dd465343.aspx and searching for "Azure" on that page.

If you have any questions please feel free to directly reach out to me at sayedha(at){MicrosoftDOTCom}.

Thanks, Sayed Ibrahim Hashimi @SayedIHashimi

Visual Studio | Visual Studio 11 | Visual Studio 2010 | web | Web Development | Web Publishing Pipeline Thursday, June 07, 2012 10:44:26 PM (GMT Daylight Time, UTC+01:00) |

Saturday, May 12, 2012

# web.config transforms, they are invoked on package and publish not F5

I receive a lot of questions regarding web.config transforms, which have existed in Visual Studio since 2010, and wanted to clear up the support that we have in this area. These transforms show up in the solution explorer underneath web.config, as shown in the image below. Since the names of these transforms include the build configuration, many people expect that web.config will be transformed when they start debugging (F5) or run the app (CTRL+F5) in Visual Studio. But sadly this is not the case. These transforms are applied only when the web project is packaged or published. I totally agree that this would be awesome, and I even blogged about how to enable it at http://sedodream.com/2010/10/21/ASPNETWebProjectsWebdebugconfigWebreleaseconfig.aspx.

It may seem like it would be really easy for us to include this support in the box, but unfortunately that is not the case. The reason why we are not able to implement this feature at this time is because a lot of our tooling (and many partners) relies on web.config directly. For example, when you drag and drop a database object onto a web form, it will generate a connection string into the web.config. There are a lot of features like this. It is a significant investment for us to make a change of this level. We were not able to get this done for Visual Studio 11, but it is on our radar and we are looking to see what we can do in this area in the future.
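As an aside for readers who have not looked inside one of these transform files: the snippet below is a minimal, hand-written illustration of what a Web.Release.config transform typically contains; it is not taken from the original post and the connection string values are placeholders. The xdt:Transform and xdt:Locator attributes tell the transform engine what to change in web.config when the project is packaged or published.

<?xml version="1.0" encoding="utf-8"?>
<configuration xmlns:xdt="http://schemas.microsoft.com/XML-Document-Transform">
  <connectionStrings>
    <!-- Replace the attributes of the connection string whose name matches DefaultConnection -->
    <add name="DefaultConnection"
         connectionString="Data Source=prod-sql;Initial Catalog=MyDb;Integrated Security=True"
         xdt:Transform="SetAttributes" xdt:Locator="Match(name)" />
  </connectionStrings>
  <system.web>
    <!-- Strip the debug attribute for the published configuration -->
    <compilation xdt:Transform="RemoveAttributes(debug)" />
  </system.web>
</configuration>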
Sayed Ibrahim Hashimi @SayedIHashimi

Visual Studio 2010 | web Saturday, May 12, 2012 3:29:15 AM (GMT Daylight Time, UTC+01:00) |

Wednesday, March 14, 2012

# Package web updated and video below

A couple of months ago I blogged about Package-Web, which is a NuGet package that extends the web packaging process in Visual Studio to enable you to create a single package which can be published to multiple environments (it captures all of your web.config transforms and has the ability to transform on non-dev machines). Since that release I have updated the project, and tonight I created a video which shows the features a bit; you can check it out on YouTube. It's embedded below. You can install this via NuGet; the package name is PackageWeb. Package-Web is an open source project and you can find it on my github account at https://github.com/sayedihashimi/package-web.

Thanks, Sayed Ibrahim Hashimi @SayedIHashimi

msbuild | MSDeploy | Visual Studio | web | Web Deployment Tool | Web Development | Web Publishing Pipeline Wednesday, March 14, 2012 6:08:57 AM (GMT Standard Time, UTC+00:00) |

Saturday, February 18, 2012

# How to create a Web Deploy package when publishing a ClickOnce project

The other day I saw a question on StackOverflow (link in resources below) asking how you can create a Web Deploy (AKA MSDeploy) package when publishing a ClickOnce project. The easiest way to do this is to use the Web Deploy command line utility, msdeploy.exe. With the command line you can easily create an MSDeploy package from a folder with a command like the following:

%msdeploy% -verb:sync -source:contentPath="C:\Temp\_NET\WebPackageWithClickOnce\WebPackageWithClickOnce\bin\Debug\app.publish" -dest:package="C:\Temp\_NET\WebPackageWithClickOnce\WebPackageWithClickOnce\bin\Debug\co-pkg.zip"

Here you can see that I'm using the sync verb, along with a contentPath provider (which points to a folder) as the source; the destination is using the package provider, and this points to where I want the package to be stored. Now that we understand how to create an MSDeploy package from a folder, we need to extend the ClickOnce publish process to create a package. I'm not a ClickOnce expert, but the ClickOnce publish process is captured in MSBuild, so after investigating for a bit I found the following relevant details.

• The ClickOnce publish process is contained in the Microsoft.Common.targets file
• The ClickOnce publish process is tied together through the Publish target
• ClickOnce prepares the files to be published in a folder under bin named app.publish, which is governed by the MSBuild property PublishDir

Now that we know what target to extend, as well as what property we can use to refer to the folder which has the content, we can complete the sample. We need to edit the project file. Below is the full contents which I have placed at the bottom of the project file (right above </Project>).
<PropertyGroup>
  <WebDeployPackageName Condition=" '$(WebDeployPackageName)'=='' ">$(MSBuildProjectName).zip</WebDeployPackageName>
  <!-- Unless specified otherwise, the tools will go to HKLM\SOFTWARE\Microsoft\IIS Extensions\MSDeploy\1 to get the install path for msdeploy.exe. -->
  <MSDeployPath Condition="'$(MSDeployPath)'==''">$(Registry:HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\IIS Extensions\MSDeploy\3@InstallPath)</MSDeployPath>
  <MSDeployPath Condition="'$(MSDeployPath)'==''">$(Registry:HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\IIS Extensions\MSDeploy\2@InstallPath)</MSDeployPath>
  <MSDeployPath Condition="'$(MSDeployPath)'==''">$(Registry:HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\IIS Extensions\MSDeploy\1@InstallPath)</MSDeployPath>
  <MSDeployExe Condition=" '$(MSDeployExe)'=='' ">$(MSDeployPath)msdeploy.exe</MSDeployExe>
</PropertyGroup>
<Target Name="CreateWebDeployPackage" AfterTargets="Publish" DependsOnTargets="Publish">
  <!-- %msdeploy% -verb:sync -source:contentPath="C:\Temp\_NET\WebPackageWithClickOnce\WebPackageWithClickOnce\bin\Debug\app.publish" -dest:package="C:\Temp\_NET\WebPackageWithClickOnce\WebPackageWithClickOnce\bin\Debug\co-pkg.zip" -->
  <PropertyGroup>
    <Cmd>"$(MSDeployExe)" -verb:sync -source:contentPath="$(MSBuildProjectDirectory)\$(PublishDir)" -dest:package="$(OutDir)$(WebDeployPackageName)"</Cmd>
  </PropertyGroup>
  <Message Text="Creating web deploy package with command: $(Cmd)" />
  <Exec Command="$(Cmd)" />
</Target>
Here I’ve created a couple properties as well as a new target, CreateWebDeployPackage. I have declared the property WebDeployPackageName which will be the name (excluding path) of the Web Deploy package which gets created. This defaults to the name of the project, but you can override it if you want. Next I define the property, MSDeployPath, which points to msdeploy.exe. It will pick the latest version.
The CreateWebDeployPackage target just constructs the full command line call which needs to be executed and invokes it using the Exec MSBuild task. There are a couple subtle details on the target itself though which are worth pointing out. The target has declared AfterTargets=”Publish” which means that it will be invoked after the Publish target. It also declares DependsOnTargets=”Publish”. Which means that whenever the target gets invoked that Publish will need to be executed before CreateWebDeployPackage.
Now that we have defined these updates, when you publish your ClickOnce project (whether through Visual Studio or the command line/build servers) a Web Deploy package will be generated in the output folder, which you can use to incrementally publish your ClickOnce app to your web server. You can find the latest version of this sample on my github repository.
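For reference, here is a rough sketch of how you might drive this from a command prompt (the project name matches the sample paths used above; adjust the configuration as needed). Because the new target declares AfterTargets="Publish", running the Publish target is enough for the Web Deploy package to be produced as well:

msbuild.exe WebPackageWithClickOnce.csproj /t:Publish /p:Configuration=Debug

The package should then appear in the output folder (bin\Debug in this example) alongside the ClickOnce output.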
Sayed Ibrahim Hashimi @SayedIHashimi
Resources
ClickOnce | IIS | Microsoft | msbuild | MSDeploy | web Saturday, February 18, 2012 6:47:30 PM (GMT Standard Time, UTC+00:00) |
Tuesday, February 14, 2012
# How to update a single file using Web Deploy (MSDeploy)
The other day I saw a question posted on StackOverflow (link to question below in resources section) asking if it was possible to update web.config using MSDeploy. I actually used a technique where I updated a single file in one of my previous posts at How to take your web app offline during publishing but it wasn’t called out too much. In any case I’ll show you how you can update a single file (in this case web.config) using MSDeploy.
You can use the contentPath provider to facilitate updating a single file. Using contentPath you can sync either a single file or an entire folder. You can also use IIS app paths to resolve where the file/folder resides. For example, if I have a web.config file in a local folder named "C:\Data\Personal\My Repo\sayed-samples\UpdateWebConfig" and I want to update my IIS application UpdateWebCfg running under the Default Web Site, I would use the command shown below.
%msdeploy% -verb:sync -source:contentPath="C:\Data\Personal\My Repo\sayed-samples\UpdateWebConfig\web.config" -dest:contentPath="Default Web Site/UpdateWebCfg/web.config"
From the command above you can see that I set the source content path to the local file and the dest content path using the IIS path {SiteName}/{AppName}/{file-path}. In this case I am updating a site running in IIS on my local machine. In order to update one that is running on a remote machine you will have to add ComputerName and possibly some other values to the -dest argument (a sketch of this is shown below).
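To make the remote case concrete, here is a minimal sketch, not taken from the original post, of what that might look like; the service URL, user name, and password are placeholders, and depending on the server certificate you may also need -allowUntrusted:

%msdeploy% -verb:sync -source:contentPath="C:\Data\Personal\My Repo\sayed-samples\UpdateWebConfig\web.config" -dest:contentPath="Default Web Site/UpdateWebCfg/web.config",computerName="https://RemoteServer:8172/msdeploy.axd?site=Default Web Site",userName="deployUser",password="{PASSWORD}",authType="Basic" -allowUntrusted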
You can view the latest sources for this sample at my github repo, link is below.
Hope that helps!
Sayed Ibrahim Hashimi – @SayedIHashimi
Resources:
IIS | MSDeploy | web | Web Publishing Pipeline Tuesday, February 14, 2012 7:17:30 PM (GMT Standard Time, UTC+00:00) |
Sunday, January 08, 2012
# How to take your web app offline during publishing
I received a customer email asking how they can take their web application/site offline for the entire duration that a publish is happening from Visual Studio. An easy way to take your site offline is to drop an app_offline.htm file in the site's root directory. For more info on that you can read ScottGu's post; the link is below in the resources section. Unfortunately Web Deploy itself doesn't support this. If you want Web Deploy (aka MSDeploy) to natively support this feature please vote on it at http://aspnet.uservoice.com/forums/41199-general/suggestions/2499911-take-my-site-app-offline-during-publishing.
Since Web Deploy doesn’t support this it’s going to be a bit more difficult and it requires us to perform the following steps:
1. Publish app_offline.htm
2. Publish the app, and ensure that app_offline.htm is contained inside the payload being published
3. Delete app_offline.htm
#1 will take the app offline before the publish process begins.
#2 will ensure that when we publish that app_offline.htm is not deleted (and therefore keep the app offline)
#3 will delete the app_offline.htm and bring the site back online
Now that we know what needs to be done, let's look at the implementation. First for the easy part: create a file in your Web Application Project (WAP) named app_offline-template.htm. This will be the file which will end up being the app_offline.htm file on your target server. If you leave it blank your users will get a generic message stating that the app is offline, but it would be better for you to place static HTML (no ASP.NET markup) inside of that file letting users know that the site will come back up, along with whatever other info you think is relevant to your users. When you add this file you should change the Build Action to None in the Properties grid; since the file ends in .htm it would otherwise be published by default, and this makes sure that the file itself is not published/packaged. See the image below.
Now for the hard part. For Web Application Projects we have a hook into the publish/package process which we refer to as "wpp.targets". If you want to extend your publish/package process you can create a file named {ProjectName}.wpp.targets in the same folder as the project file itself. Here is the file which I created; you can copy and paste the content into your wpp.targets file. I will explain the significant parts, but wanted to post the entire file for your convenience. Note: you can grab my latest version of this file from my github repo; the link is in the resources section below.
<?xml version="1.0" encoding="utf-8"?>
<Project xmlns="http://schemas.microsoft.com/developer/msbuild/2003">
<Target Name="InitalizeAppOffline">
<!--
This property needs to be declared inside of target because this is imported before
the MSDeployPath property is defined as well as others -->
<PropertyGroup>
<MSDeployExe Condition=" '$(MSDeployExe)'=='' ">$(MSDeployPath)msdeploy.exe</MSDeployExe>
</PropertyGroup>
</Target>
<PropertyGroup>
<PublishAppOfflineToDest>
InitalizeAppOffline;
</PublishAppOfflineToDest>
</PropertyGroup>
<!--
%msdeploy%
-verb:sync
-source:contentPath="C:\path\to\app_offline-template.htm"
-dest:contentPath="Default Web Site/AppOfflineDemo/app_offline.htm"
-->
<!--***********************************************************************
Make sure app_offline-template.htm gets published as app_offline.htm
***************************************************************************-->
<Target Name="PublishAppOfflineToDest"
BeforeTargets="MSDeployPublish"
          DependsOnTargets="$(PublishAppOfflineToDest)">
    <ItemGroup>
      <_AoPubAppOfflineSourceProviderSetting Include="contentPath">
        <Path>$(MSBuildProjectDirectory)\app_offline-template.htm</Path>
        <EncryptPassword>$(DeployEncryptKey)</EncryptPassword>
        <WebServerAppHostConfigDirectory>$(_MSDeploySourceWebServerAppHostConfigDirectory)</WebServerAppHostConfigDirectory>
        <WebServerManifest>$(_MSDeploySourceWebServerManifest)</WebServerManifest>
        <WebServerDirectory>$(_MSDeploySourceWebServerDirectory)</WebServerDirectory>
      </_AoPubAppOfflineSourceProviderSetting>
      <_AoPubAppOfflineDestProviderSetting Include="contentPath">
        <Path>"$(DeployIisAppPath)/app_offline.htm"</Path>
        <ComputerName>$(_PublishMsDeployServiceUrl)</ComputerName>
        <UserName>$(UserName)</UserName>
        <Password>$(Password)</Password>
        <EncryptPassword>$(DeployEncryptKey)</EncryptPassword>
        <IncludeAcls>False</IncludeAcls>
        <AuthType>$(AuthType)</AuthType>
        <WebServerAppHostConfigDirectory>$(_MSDeployDestinationWebServerAppHostConfigDirectory)</WebServerAppHostConfigDirectory>
        <WebServerManifest>$(_MSDeployDestinationWebServerManifest)</WebServerManifest>
        <WebServerDirectory>$(_MSDeployDestinationWebServerDirectory)</WebServerDirectory>
      </_AoPubAppOfflineDestProviderSetting>
    </ItemGroup>
    <MSdeploy MSDeployVersionsToTry="$(_MSDeployVersionsToTry)"
              Verb="sync"
              Source="@(_AoPubAppOfflineSourceProviderSetting)"
              Destination="@(_AoPubAppOfflineDestProviderSetting)"
              EnableRule="DoNotDeleteRule"
              AllowUntrusted="$(AllowUntrustedCertificate)"
              RetryAttempts="$(RetryAttemptsForDeployment)"
              SimpleSetParameterItems="@(_AoArchivePublishSetParam)"
              ExePath="$(MSDeployPath)" />
  </Target>
  <!--***********************************************************************
  Make sure app_offline-template.htm gets published as app_offline.htm
  ***************************************************************************-->
  <!-- We need to create a replace rule for app_offline-template.htm->app_offline.htm for when the app gets published -->
  <ItemGroup>
    <!-- Make sure not to include this file if a package is being created, so condition this on publishing -->
    <FilesForPackagingFromProject Include="app_offline-template.htm" Condition=" '$(DeployTarget)'=='MSDeployPublish' ">
<DestinationRelativePath>app_offline.htm</DestinationRelativePath>
</FilesForPackagingFromProject>
<!-- This will prevent app_offline-template.htm from being published -->
<MsDeploySkipRules Include="SkipAppOfflineTemplate">
<ObjectName>filePath</ObjectName>
<AbsolutePath>app_offline-template.htm</AbsolutePath>
</MsDeploySkipRules>
</ItemGroup>
<!--***********************************************************************
When publish is completed we need to delete the app_offline.htm
***************************************************************************-->
<Target Name="DeleteAppOffline" AfterTargets="MSDeployPublish">
<!--
%msdeploy%
-verb:delete
-->
<Message Text="************************************************************************" />
<Message Text="Calling MSDeploy to delete the app_offline.htm file" Importance="high" />
<Message Text="************************************************************************" />
<ItemGroup>
<_AoDeleteAppOfflineDestProviderSetting Include="contentPath">
      <Path>$(DeployIisAppPath)/app_offline.htm</Path>
      <ComputerName>$(_PublishMsDeployServiceUrl)</ComputerName>
      <UserName>$(UserName)</UserName>
      <Password>$(Password)</Password>
      <EncryptPassword>$(DeployEncryptKey)</EncryptPassword>
      <AuthType>$(AuthType)</AuthType>
      <WebServerAppHostConfigDirectory>$(_MSDeployDestinationWebServerAppHostConfigDirectory)</WebServerAppHostConfigDirectory>
      <WebServerManifest>$(_MSDeployDestinationWebServerManifest)</WebServerManifest>
      <WebServerDirectory>$(_MSDeployDestinationWebServerDirectory)</WebServerDirectory>
    </_AoDeleteAppOfflineDestProviderSetting>
  </ItemGroup>
  <!-- We cannot use the MSDeploy/VSMSDeploy tasks for delete so we have to call msdeploy.exe directly.
       When they support delete we can just pass in @(_AoDeleteAppOfflineDestProviderSetting) as the dest -->
  <PropertyGroup>
    <_Cmd>"$(MSDeployExe)" -verb:delete -dest:contentPath="%(_AoDeleteAppOfflineDestProviderSetting.Path)"</_Cmd>
    <_Cmd Condition=" '%(_AoDeleteAppOfflineDestProviderSetting.ComputerName)' != '' ">$(_Cmd),computerName="%(_AoDeleteAppOfflineDestProviderSetting.ComputerName)"</_Cmd>
    <_Cmd Condition=" '%(_AoDeleteAppOfflineDestProviderSetting.UserName)' != '' ">$(_Cmd),username="%(_AoDeleteAppOfflineDestProviderSetting.UserName)"</_Cmd>
    <_Cmd Condition=" '%(_AoDeleteAppOfflineDestProviderSetting.Password)' != ''">$(_Cmd),password=$(Password)</_Cmd>
    <_Cmd Condition=" '%(_AoDeleteAppOfflineDestProviderSetting.AuthType)' != ''">$(_Cmd),authType="%(_AoDeleteAppOfflineDestProviderSetting.AuthType)"</_Cmd>
  </PropertyGroup>
  <Exec Command="$(_Cmd)"/>
</Target>
</Project>
### #1 Publish app_offline.htm
The implementation for #1 is contained inside the target PublishAppOfflineToDest. The msdeploy.exe command that we need to get executed is (the destination shown matches the comment in the .targets file above):

msdeploy.exe
-verb:sync
-source:contentPath='C:\Data\Personal\My Repo\sayed-samples\AppOfflineDemo01\AppOfflineDemo01\app_offline-template.htm'
-dest:contentPath='Default Web Site/AppOfflineDemo/app_offline.htm'
In order to do this I will leverage the MSDeploy task. Inside of the PublishAppOfflineToDest target you can see how this is accomplished by creating an item for both the source and destination.
### #2 Publish the app, and ensure that app_offline.htm is contained inside the payload being published
This part is accomplished by the fragment
<!--***********************************************************************
Make sure app_offline-template.htm gets published as app_offline.htm
***************************************************************************-->
<!-- We need to create a replace rule for app_offline-template.htm->app_offline.htm for when the app get's published -->
<ItemGroup>
<!-- Make sure not to include this file if a package is being created, so condition this on publishing -->
<FilesForPackagingFromProject Include="app_offline-template.htm" Condition=" '$(DeployTarget)'=='MSDeployPublish' ">
<DestinationRelativePath>app_offline.htm</DestinationRelativePath>
</FilesForPackagingFromProject>
<!-- This will prevent app_offline-template.htm from being published -->
<MsDeploySkipRules Include="SkipAppOfflineTemplate">
<ObjectName>filePath</ObjectName>
<AbsolutePath>app_offline-template.htm</AbsolutePath>
</MsDeploySkipRules>
</ItemGroup>
The item value for FilesForPackagingFromProject here will convert your app_offline-template.htm to app_offline.htm in the folder from where the publish will be processed. Also there is a condition on it so that it only happens during publish and not packaging. We do not want app_offline-template.htm to be in the package (but it's not the end of the world if it is).
The element for MsDeploySkipRules will make sure that app_offline-template.htm itself doesn't get published. This may not be required, but it shouldn't hurt.
### #3 Delete app_offline.htm
Now that our app is published we need to delete the app_offline.htm file from the dest web app. The msdeploy.exe command would be:
%msdeploy%
-verb:delete
-dest:contentPath='Default Web Site/AppOfflineDemo/app_offline.htm' (plus computerName/userName/password values when publishing to a remote server, as built up in the target)
This is implemented inside of the DeleteAppOffline target. This target will automatically get executed after the publish because I have included the attribute AfterTargets=”MSDeployPublish”. In that target you can see that I am building up the msdeploy.exe command directly, it looks like the MSDeploy task doesn’t support the delete verb.
If you do try this out please let me know if you run into any issues. I am thinking of creating a NuGet package from this so that you can just install that package. That would take a bit of work, so please let me know if you are interested in that.
### Resources
Sayed Ibrahim Hashimi @SayedIHashimi
IIS | Microsoft | msbuild | MSDeploy | Visual Studio 2010 | web | Web Deployment Tool | Web Publishing Pipeline Sunday, January 08, 2012 8:44:39 PM (GMT Standard Time, UTC+00:00) |
Tuesday, November 08, 2011
# Using a Web Deploy package to deploy to IIS on the dev box and to a third party host
Note: I’d like to thank Tom Dykstra for helping me put this together
### Overview
In this tutorial you'll see how to use a web deployment package to deploy an application. A deployment package is a .zip file that includes all of the content and metadata that's required to deploy an application.
Deployment packages are often used in enterprise environments. This is because a developer or a continuous integration server can create the package without needing to know things like passwords that are stored in Web.config files. Only the server administrator who actually installs the package needs to know those passwords, and that person can enter the details at installation time.
In a smaller organization that doesn't have separate people for these roles, there's less need for deployment packages. But you can also use deployment packages as a way to back up and restore the state of an application. After you use a deployment package to deploy, you can save the package. Then if a subsequent deployment has a problem, you can quickly and easily restore the application to the earlier state by reinstalling the earlier package. (This scenario is more complicated if database changes are involved, however.)
This tutorial shows how to use Visual Studio to create a package and IIS Manager to install it. For information about how to create and install packages using the command line, see ASP.NET Deployment Content Map on the MSDN web site.
To keep things relatively simple, this example assumes you have already deployed the application and its databases, and you only need to deploy a code update. You have made the code update, and you are ready to deploy it first to your test environment (IIS on your local computer) and then to your hosting provider. You have a Test build configuration that you use for the test environment and you use the Release build configuration for the production environment. In the example, the name of the Visual Studio project is ContosoUniversity, and instructions for its initial deployment can be found in a series of tutorials that will be published in December on the ASP.NET web site.
The hosting provider shown, Cytanium.com, is one of many that are available, and its use here does not constitute an endorsement or recommendation.
Note The following example uses separate packages for the test and production environments, but you can also create a single deployment package that can be used for both environments. This would require that you use Web Deploy parameters instead of Web.config transformations for Web.config file changes that depend on deployment destination. For information about how to use Web Deploy parameters, see How to: Use Parameters to Configure Deployment Settings When a Package is Installed.
### Configuring the Deployment Package
In this section, you'll configure settings for the deployment package. Some of these settings are the same ones that you set also for one-click publish, others are only for deployment packages.
Open the Package/Publish Web tab of the Project Properties window and select the Test build configuration.
For this deployment you aren't making any database changes, so clear Include all databases configured in Package/Publish SQL tab. Make sure Exclude files from the App_Data folder is selected.
Review the settings in the section labeled Web Deployment Package Settings:
• By default, deployment packages are created as .zip files. You don't need to change this setting.
• By default, deployment packages are created in the project's obj\Test\Package folder. You don't need to change this setting.
• The default IIS web application name is the name of the project with "_deploy" appended to it. Remove that suffix. You want the application to be named just ContosoUniversity in IIS on your computer.
• For this tutorial you're not deploying IIS settings, so you don't need to enter a password for that.
The Package/Publish Web tab now looks like this:
You also need to configure settings for deploying to the production environment. Select the Release build configuration to do that.
Change IIS Web site/application name to use on the destination server to a string that will serve as a reminder of what you need to do later when this value is displayed in the IIS Manager UI: "[clear this field]". The text box on this page won't stay cleared even if you clear it, so entering this note to yourself will remind you to clear this value later when you deploy. When you deploy to your hosting provider, you will connect to a site, not to a server, and in this case you want to deploy to the root of the site.
#### Creating a Deployment Package for the Test Environment
To create a deployment package, first make sure you've selected the right build configuration. In the Solution Configurations drop-down box, select Test.
In Solution Explorer, right-click the project that you want to build the package for and then select Build Deployment Package.
The Output window reports a successful build and publish (package creation) and tells you where the package was created.
#### Installing the Deployment Package in the Test Environment
The next step is to install the deployment package in IIS on your development computer.
Run IIS Manager. In the Connections pane of the IIS Manager window, expand the local server node, expand the Sites node, and select Default Web Site. Then in the Actions pane, click Import Application. (If you don't see an Import Application link, the most likely reason is that you have not installed Web Deploy. You can use the Web Platform Installer to install both IIS and Web Deploy.)
In the Select the Package wizard step, navigate to the location of the package you just created. By default, that's the obj\Test\Package folder in your ContosoUniversity project folder. (A package created with the Release build configuration would be in obj\Release\Package.)
Click Next. The Select the Contents of the Package step is displayed.
Click Next.
The step that allows you to enter parameter values is displayed. The Application Path value defaults to "ContosoUniversity", because that's what you entered on the Package/Publish Web tab of the Project Properties window.
Click Next.
The wizard asks if you want to delete files at the destination that aren't in the source.
In this case you haven't deleted any files that you want to delete at the destination, so the default (no deletions) is okay. Click Next.
IIS Manager installs the package and reports its status.
Click Finish.
Open a browser and run the application in test by going to the URL http://localhost/ContosoUniversity.
#### Installing IIS Manager for Remote Administration
The process for deploying to production is similar except that you create the package using the Release build configuration, and you install it in IIS Manager using a remote connection to the hosting provider. But first you have to install the IIS Manager feature that facilitates remote connections.
Click the following link to use the Web Platform Installer for this task:
#### Connecting to Your Site at the Hosting Provider
After you install the IIS Manager for Remote Administration, run IIS Manager. You see a new Start Page in IIS Manager that has several Connect to ... links in a Connection tasks box. (These options are also available from the File menu.)
In IIS Manager, click Connect to a site. In the Specify Site Connection Details step, enter the Server name and Site name values that are assigned to you by your provider, and then click Next. For a hosting account at Cytanium.com, you get the server name from Service URL in the Visual Studio 2010 section of the welcome email. The site name is indicated by "Site/application" in the same section of the email.
In the Provide Credentials step, enter the user name and password assigned by the provider, and then click Next:
You might see a Server Certificate Alert dialog box. If you're sure that you've entered the correct server and site name, click Connect.
In the Specify a Connection Name step, click Finish.
After IIS Manager connects to the provider's server, a New Feature Available dialog box might appear that lists administration features available for download. Click Cancel — you've already installed everything you need for this deployment.
After the New Feature Available box closes, the IIS Manager window appears. There's now a node in the Connections pane for the site at the hosting provider.
#### Creating a Package for the Production Site
The next step is to create a deployment package for the production environment. In the Visual Studio Solution Configurations drop-down box, select the Release build configuration.
In Solution Explorer, right-click the ContosoUniversity project and then select Build Deployment Package.
The Output window reports a successful build and publish (package creation), and it tells you that the package is created in the obj\Release\Package folder in your project folder.
#### Installing the Package in the Production Environment
Now you can install the package in the production environment. In the IIS Manager Connections pane, select the new connection you added earlier. Then click Import Application, which will walk you through the same process you followed earlier when you deployed to the test environment.
In the Select the Package step, select the package that you just created:
In the Select the Contents of the Package step, leave all the check boxes selected and click Next:
In the Enter Application Package Information step, clear the Application Path and click Next:
The wizard asks if you want to delete files at the destination that aren't in the source.
You don't need to have anything deleted, so just click Next.
When you get the warning about installing to the root folder, click OK:
Package installation begins. When it's done, the Installation Progress and Summary dialog box is shown:
Click Finish. Your application has been deployed to the hosting provider's server, and you can test by browsing to your public site's URL.
You've now seen how to deploy an application update by manually creating and installing a deployment package. For information about how to create and install packages from the command line in order to be able to integrate them into a continuous integration process, see the ASP.NET Deployment Content Map on the MSDN web site.
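As a rough, non-authoritative aside (this is not part of the original tutorial): a package created this way is normally accompanied by a generated {ProjectName}.deploy.cmd script and a {ProjectName}.SetParameters.xml file in the same folder, and it can typically be installed from a command prompt along these lines, with the server address and credentials below being placeholders:

ContosoUniversity.deploy.cmd /Y /M:https://your-host-service-url:8172/msdeploy.axd /U:yourUserName /P:{PASSWORD} /A:Basic -allowUntrusted

The /Y switch performs the deployment (use /T first for a trial run), and the SetParameters.xml file next to the package is where values such as the IIS application path can be edited before installing.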
Sayed Ibrahim Hashimi – @SayedIHashimi
IIS | msbuild | MSDeploy | web | Web Deployment Tool | Web Development | Web Publishing Pipeline Tuesday, November 08, 2011 5:11:43 AM (GMT Standard Time, UTC+00:00) |
Saturday, January 08, 2011
# Video on Web Deployment using Visual Studio 2010 and MSDeploy
Back in November I participated in Virtual Tech Days, which is an online conference presented by Microsoft. In the session I discussed the enhancements to web deployment using Visual Studio 2010 and MSDeploy. Some of the topics which I covered include:
• web.config (XDT) transforms
• How to publish to local file system using Visual Studio
• How to publish to a 3rd party host using Visual Studio via MSDeploy
• How to publish to local IIS server using the .cmd file generated by Visual Studio
• How to use msdeploy.exe to delete IIS applications
• How to use the IIS Manager to import web packages
• How to use msdeploy.exe to deploy a web package to the local IIS server
• How to use msdeploy.exe to deploy a web package to a remote IIS server
• How to use msdeploy.exe to deploy a web package & set parameters using SetParameters.xml to a remote IIS server
You can download the video & all of my sample files at http://virtualtechdays.com/pastevents_2010november.aspx. In the samples you will find all of the scripts that I used and a bunch of others which I didn’t have time to cover. Enjoy!
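To give a flavor of the last few bullets above, here is a rough sketch (placeholders only, not taken from the session samples) of deploying a web package to a remote IIS server with msdeploy.exe while supplying parameter values from a SetParameters.xml file:

msdeploy.exe -verb:sync -source:package="MyAwesomeWeb.zip" -dest:auto,computerName="https://RemoteServer:8172/msdeploy.axd",userName="deployUser",password="{PASSWORD}",authType="Basic" -setParamFile:"MyAwesomeWeb.SetParameters.xml" -allowUntrusted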
Sayed Ibrahim Hashimi @sayedihashimi
Config-Transformation | IIS | msbuild | MSDeploy | speaking | Visual Studio | Visual Studio 2010 | web | Web Deployment Tool | Web Development | Web Publishing Pipeline Saturday, January 08, 2011 8:34:08 PM (GMT Standard Time, UTC+00:00) |
Monday, June 07, 2010
# Installing web apps made easy: Web Platform Installer
If you are doing any kind of web development and you are not familiar with the Web Platform Installer (WPI) then you need to take a look at it. I just installed WordPress on IIS 7 with just a few clicks and filled in a few text boxes. When you install WordPress there are some prerequisites like MySQL and PHP. The WPI was smart enough to realize that I had neither installed; it downloaded those, installed them and configured them. I was prompted for some info for those tools, of course. I've also installed a few other apps using the WPI, like MSDeploy and dasBlog, and I didn't have any issues whatsoever.
When using the WPI there are two main categories that can be installed: Web Platform and Web Applications. The Web Platform category includes items like frameworks (e.g. ASP.NET, PHP), databases (e.g. MySQL) and other high-level shared components. The Web Applications category includes various web applications. Some others that I didn't list previously include DotNetNuke, nopCommerce, and Umbraco, just to name a few. I'm not sure how many apps are available, but it looks like at least 50.
If you are an app creator and would like to share your app then you can visit the WPI Developer page for a starting point.
Deployment | IIS | MSDeploy | web | Web Platform Installer Monday, June 07, 2010 4:17:01 AM (GMT Daylight Time, UTC+01:00) |
https://socratic.org/questions/how-do-you-write-the-nth-term-rule-for-the-sequence-4-1-2-5-8
# How do you write the nth term rule for the sequence 4,1,-2,-5,-8,...?
Aug 22, 2016
The $n^{th}$ term of the given sequence is $7 - 3n$.

#### Explanation:

This is an arithmetic sequence, as the difference $d$ between each term and its preceding term is always $-3$: $1 - 4 = -2 - 1 = -5 - \left(-2\right) = -8 - \left(-5\right) = -3$.

If the first term is $a_1$ and the common difference of such an arithmetic sequence is $d$, the $n^{th}$ term is given by $a_1 + (n-1) \times d$. Hence the $n^{th}$ term of the given sequence is

$4 + (n-1) \times (-3)$

$= 4 - 3n + 3$

$= 7 - 3n$
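As a quick check (not part of the original answer), the formula reproduces the listed terms: $a_1 = 7 - 3(1) = 4$ and $a_5 = 7 - 3(5) = -8$, matching the first and fifth terms of the sequence.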
http://semparis.lpthe.jussieu.fr/public/list.pl?date=6&seriescodes=&institute=&subject=&searchfield=&searchpattern=&skip=30&scheduler=0&language=
The SEMPARIS seminar webserver hosts announcements of all seminars taking place in the Paris area, in all topics of physics, mathematics and computer science. It allows registered users to receive a selection of announcements by email on a daily or weekly basis, and offers the possibility to archive PDF or PowerPoint files, making them available to the scientific community.

Upcoming Seminars
Tuesday 29 May 2018, 11:30 at LPTENS, LPTENS library STR-LPT-ENS-HE (Séminaire commun LPTENS/LPTHE) hep-th Jun Nian ( IHES ) TBA Abstract: TBA
Tuesday 29 May 2018, 14:00 at APC, 483 A - Malevitch APC-TH (Seminar of the theory group of APC) hep-th Cédric Deffayet ( IAP ) Galileon p-form theories Abstract: I will discuss the generalization to p-forms of the Galileon idea: to construct the most general theory of an (abelian gauge invariant) p-form with (strictly) second order field equations. Such theory have recently be fully classified for space-time dimension strictly smaller than 12. The covariantization of these theories will also be discussed.
Tuesday 29 May 2018, 17:15 at DPT-PHYS-ENS, Jean Jaures (29 rue d'Ulm) SEM-PHYS-ENS (Colloquium du Département de Physique de l'ENS) physics.bio-ph William Bialek ( Princeton University ) Towards a renormalization group for networks of neurons Abstract: TBA
Thursday 31 May 2018, 10:00 at IHP, 314 RENC-THEO (Rencontres Théoriciennes) hep-th David Kutasov ( U Chicago ) TBA
Thursday 31 May 2018, 11:45 at IHP, 314 RENC-THEO (Rencontres Théoriciennes) hep-th Clay Cordova ( IAS ) TBA
Friday 1 June 2018, 10:00 at IPHT, Salle Claude Itzykson, Bât. 774 ( https://courses.ipht.cnrs.fr/?q=fr/node/197 ) COURS (Cours) cond-mat|q-bio Rémi Monasson ( ENS Paris ) Unsupervised neural networks: from theory to systems biology (5/6) Abstract: Artificial neural networks, introduced decades ago, are now key tools for automatic learning from data. This series of six lectures will focus on a few neural network architectures used in the context of unsupervised learning, that is, of unlabeled data. \par In particular we will focus on dimensional reduction, feature extraction, and representation building. We will see how statistical physics, in particular the techniques and concepts of random matrix theory and disordered systems, can be used to understand the properties of these algorithms and the phase transitions taking place in their operation. \par Special attention will be devoted to the so-called high-dimensional inference setting, where the numbers of data samples and of defining parameters of the neural nets are comparable. The general principles will be illustrated on recent applications to data coming from neuroscience and genomics, highlighting the potentialities of unsupervised learning for biology. \par Some issues: \\ - What is unsupervised learning? \\ - Hebbian learning for principal component analysis: retarded-learning phase transition and prior information. \\ - Bipartite neural nets and representations: auto-encoders, restricted Boltzmann machines, Boltzmann machines. \\ - Recurrent neural nets: from point to finite-dimensional attractors, temporal sequences. Attachments: 2017-2018.pdf (4503344 bytes) 2018_Monasson.pdf (4507514 bytes)
Monday 4 June 2018, 09:00 at IPHT, Amphi Claude Bloch, Bât. 774 ( https://indico.in2p3.fr/event/17044/ ) WORK-CONF (Workshop or Conference) physics ... ( IPhT ) 23th Itzykson Conference (June 04-06, 2018): Statistical Physics of Disordered and Complex Systems, a Tribute to Cirano De Dominicis Abstract: The 23rd Claude Itzykson Conference, which will take place in the Bloch Amphitheater from June 4 to 6, 2018, is dedicated to our colleague and friend Cirano De Dominicis who was the head and a prominent member of our laboratory for many years. \\ \par The themes of the conference will cover the scientific interests of Cirano: \\ Quantum Systems, \\ Out of Equilibrium Statistical Physics, \\ Disordered Systems and Spin Glasses, \\ Interdisciplinary Systems. \\ \\ \\ The website of the conference is: https://indico.in2p3.fr/event/17044/ \\ \\ Organizing committee: Giulio Biroli, Edouard Brézin, Henri Orland and Laure Sauboy (secretary). \\ \\ Sponsors and benefactors: IPhT (CEA and CNRS), DRF, LabEx LMH.
Tuesday 5 June 2018, 11:30 at LPTENS, LPTENS library STR-LPT-ENS-HE (Séminaire commun LPTENS/LPTHE) hep-th Pietro Longhi ( Uppsala ) TBA Abstract: TBA
Tuesday 5 June 2018, 14:00 at APC, 483 A - Malevitch APC-TH (Seminar of the theory group of APC) hep-th Tomislav Prokopec ( Utrecht University ) TBA
Tuesday 5 June 2018, 16:00 at IPHT, Salle Claude Itzykson, Bât. 774 IPHT-HEP (Séminaire de physique des particules et de cosmologie) hep-ph Csaba Csaki ( Cornell University ) (TBA)
Friday 8 June 2018, 10:00 at IPHT, Salle Claude Itzykson, Bât. 774 ( https://courses.ipht.cnrs.fr/?q=fr/node/197 ) COURS (Cours) cond-mat|q-bio Rémi Monasson ( ENS Paris ) Unsupervised neural networks: from theory to systems biology (6/6) Abstract: Artificial neural networks, introduced decades ago, are now key tools for automatic learning from data. This series of six lectures will focus on a few neural network architectures used in the context of unsupervised learning, that is, of unlabeled data. \par In particular we will focus on dimensional reduction, feature extraction, and representation building. We will see how statistical physics, in particular the techniques and concepts of random matrix theory and disordered systems, can be used to understand the properties of these algorithms and the phase transitions taking place in their operation. \par Special attention will be devoted to the so-called high-dimensional inference setting, where the numbers of data samples and of defining parameters of the neural nets are comparable. The general principles will be illustrated on recent applications to data coming from neuroscience and genomics, highlighting the potentialities of unsupervised learning for biology. \par Some issues: \\ - What is unsupervised learning? \\ - Hebbian learning for principal component analysis: retarded-learning phase transition and prior information. \\ - Bipartite neural nets and representations: auto-encoders, restricted Boltzmann machines, Boltzmann machines. \\ - Recurrent neural nets: from point to finite-dimensional attractors, temporal sequences. Attachments: 2017-2018.pdf (4503344 bytes) 2018_Monasson.pdf (4507514 bytes)
Monday 11 June 2018, 11:00 at IPHT, Salle Claude Itzykson, Bât. 774 IPHT-PHM (Séminaire de physique mathématique) math-ph Oleg Lisovyi ( Tours ) Fonctions tau et constantes de Widom-Dyson
Monday 11 June 2018, 14:00 at IHES, Amphithéâtre Léon Motchane ( Cours de l'IHES ) MATH-IHES (TBA) hep-th Sergiu Klainerman ( Princeton University & IHES ) On the Mathematical Theory of Black Holes (1/4) Abstract: The gravitational waves detected by LIGO were produced in the final faze of the inward spiraling of two black holes before they collided to produce a more massive black hole. The experiment is entirely consistent with the so called Final State Conjecture of General Relativity according to which generic solutions of the Einstein vacuum equations can be described, asymptotically, by a finite number of Kerr solutions moving away from each other. Though the conjecture is so very easy to formulate and happens to be validated by both astrophysical observations as well as numerical experiments, it is far beyond our current mathematical understanding. In fact even the far simpler and fundamental question of the stability of one Kerr black hole remains wide open. In my lectures I will address the issue of stability as well as other aspects the mathematical theory of black holes such as rigidity of black holes and the problem of collapse. The rigidity conjecture asserts that all stationary solutions the Einstein vacuum equations must be Kerr black holes while the problem of collapse addresses the issue of how black holes form in the first place from regular initial conditions. Recent advances on all these problems were made possible by a remarkable combination of geometric and analytic techniques which I will try to outline in my lectures.
Tuesday 12 June 2018, 14:00 at APC, 483 A - Malevitch APC-TH (Seminar of the theory group of APC) hep-th Hayato Motohashi ( Yukawa Institute for Theoretical Physics, Kyoto University ) Constructing degenerate higher-order theories Abstract: Scalar-tensor theories serve models for inflation and dark energy. Many efforts have been made recently for constructing the most general scalar-tensor theories with higher-order derivatives in their Lagrangian. Since higher-derivative theories are typically associated with Ostrogradsky ghost which causes unbounded Hamiltonian, it is important to clarify how to evade it. In this talk, I will explain construction of healthy degenerate theories with higher-order derivatives which circumvent Ostrogradsky ghost. The method also allows us to construct ghost-free theories with derivatives higher than second order in Lagrangian.
Tuesday 12 June 2018, 17:15 at DPT-PHYS-ENS, Jean Jaures (29 rue d'Ulm) SEM-PHYS-ENS (Colloquium du Département de Physique de l'ENS) physics.plasm-ph Amitava Bhattacharjee ( Princeton University ) Fast magnetic reconnection in space and astrophysical plasmas Abstract: TBA
Wednesday 13 June 2018, 14:00 at IHES, Amphithéâtre Léon Motchane ( Cours de l'IHES ) MATH-IHES (TBA) hep-th Sergiu Klainerman ( Princeton University & IHES ) On the Mathematical Theory of Black Holes (2/4) Abstract: The gravitational waves detected by LIGO were produced in the final faze of the inward spiraling of two black holes before they collided to produce a more massive black hole. The experiment is entirely consistent with the so called Final State Conjecture of General Relativity according to which generic solutions of the Einstein vacuum equations can be described, asymptotically, by a finite number of Kerr solutions moving away from each other. Though the conjecture is so very easy to formulate and happens to be validated by both astrophysical observations as well as numerical experiments, it is far beyond our current mathematical understanding. In fact even the far simpler and fundamental question of the stability of one Kerr black hole remains wide open. In my lectures I will address the issue of stability as well as other aspects the mathematical theory of black holes such as rigidity of black holes and the problem of collapse. The rigidity conjecture asserts that all stationary solutions the Einstein vacuum equations must be Kerr black holes while the problem of collapse addresses the issue of how black holes form in the first place from regular initial conditions. Recent advances on all these problems were made possible by a remarkable combination of geometric and analytic techniques which I will try to outline in my lectures.
Friday 15 June 2018, 10:00 at IPHT, Salle Claude Itzykson, Bât. 774 ( https://courses.ipht.cnrs.fr/?q=fr/node/198 ) COURS (Cours) astro-ph|hep-ph Marco Cirelli ( LPTHE Paris ) Dark matter phenomenology (1/5) Abstract: (TBA) Attachments: 2017-2018.pdf (4503344 bytes)
Tuesday 19 June 2018, 11:00 at IPHT, Salle Claude Itzykson, Bât. 774 IPHT-GEN (Séminaire général du SPhT) physics Gabriele Veneziano ( CERN et Collège de France ) (TBA)
Tuesday 19 June 2018, 14:00 at IHES, Amphithéâtre Léon Motchane ( Cours de l'IHES ) MATH-IHES (TBA) hep-th Sergiu Klainerman ( Princeton University & IHES ) On the Mathematical Theory of Black Holes (3/4) Abstract: The gravitational waves detected by LIGO were produced in the final faze of the inward spiraling of two black holes before they collided to produce a more massive black hole. The experiment is entirely consistent with the so called Final State Conjecture of General Relativity according to which generic solutions of the Einstein vacuum equations can be described, asymptotically, by a finite number of Kerr solutions moving away from each other. Though the conjecture is so very easy to formulate and happens to be validated by both astrophysical observations as well as numerical experiments, it is far beyond our current mathematical understanding. In fact even the far simpler and fundamental question of the stability of one Kerr black hole remains wide open. In my lectures I will address the issue of stability as well as other aspects the mathematical theory of black holes such as rigidity of black holes and the problem of collapse. The rigidity conjecture asserts that all stationary solutions the Einstein vacuum equations must be Kerr black holes while the problem of collapse addresses the issue of how black holes form in the first place from regular initial conditions. Recent advances on all these problems were made possible by a remarkable combination of geometric and analytic techniques which I will try to outline in my lectures.
Tuesday 19 June 2018, 14:00 at APC, 483 A - Malevitch APC-TH (Seminar of the theory group of APC) hep-th Albino Hernandez Galeana ( University of Mexico ) TBA
Friday 22 June 2018, 10:00 at IPHT, Salle Claude Itzykson, Bât. 774 ( https://courses.ipht.cnrs.fr/?q=fr/node/198 ) COURS (Cours) astro-ph|hep-ph Marco Cirelli ( LPTHE Paris ) Dark matter phenomenology (2/5) Abstract: (TBA) Attachments: 2017-2018.pdf (4503344 bytes)
Friday 22 June 2018, 14:00 at IHES, Amphithéâtre Léon Motchane ( Cours de l'IHES ) MATH-IHES (TBA) hep-th Sergiu Klainerman ( Princeton University & IHES ) On the Mathematical Theory of Black Holes (4/4) Abstract: The gravitational waves detected by LIGO were produced in the final faze of the inward spiraling of two black holes before they collided to produce a more massive black hole. The experiment is entirely consistent with the so called Final State Conjecture of General Relativity according to which generic solutions of the Einstein vacuum equations can be described, asymptotically, by a finite number of Kerr solutions moving away from each other. Though the conjecture is so very easy to formulate and happens to be validated by both astrophysical observations as well as numerical experiments, it is far beyond our current mathematical understanding. In fact even the far simpler and fundamental question of the stability of one Kerr black hole remains wide open. In my lectures I will address the issue of stability as well as other aspects the mathematical theory of black holes such as rigidity of black holes and the problem of collapse. The rigidity conjecture asserts that all stationary solutions the Einstein vacuum equations must be Kerr black holes while the problem of collapse addresses the issue of how black holes form in the first place from regular initial conditions. Recent advances on all these problems were made possible by a remarkable combination of geometric and analytic techniques which I will try to outline in my lectures.
Tuesday 26 June 2018, 11:30 at LPTENS, LPTENS library STR-LPT-ENS-HE (Séminaire commun LPTENS/LPTHE) hep-th Julian Sonner ( Université de Genève ) TBA Abstract: TBA
Tuesday 26 June 2018, 14:00 at APC, 483 A - Malevitch APC-TH (Seminar of the theory group of APC) hep-th Emilian Dudas ( CPHT - Ecole Polytechnique ) TBA
Tuesday 26 June 2018, 17:15 at DPT-PHYS-ENS, Jean Jaures (29 rue d'Ulm) SEM-PHYS-ENS (Colloquium du Département de Physique de l'ENS) physics.flu-dyn William Young ( UC San Diego ) Long range propagation of ocean swell Abstract: TBA
Friday 29 June 2018, 10:00 at IPHT, Salle Claude Itzykson, Bât. 774 ( https://courses.ipht.cnrs.fr/?q=fr/node/198 ) COURS (Cours) astro-ph|hep-ph Marco Cirelli ( LPTHE Paris ) Dark matter phenomenology (3/5) Abstract: (TBA) Attachments: 2017-2018.pdf (4503344 bytes)
Tuesday 3 July 2018, 14:00 at APC, 483 A - Malevitch APC-TH (Seminar of the theory group of APC) hep-th Léonie Canet ( Université de Grenoble ) Correlation functions in fully developed turbulence Abstract: Turbulence is a ubiquitous phenomenon in natural and industrial fluid flows. Yet, it still lacks a satisfactory theoretical description. One of the main open issues is to calculate the statistical properties of the turbulent steady state, and in particular what are generically called intermittency effects, starting from the fundamental description of the fluid dynamics provided by the Navier-Stokes equation. In this presentation, I will focus on isotropic and homogeneous turbulence in three-dimensional incompressible flows. In the first part, I will give an introduction to the basic phenomenology of turbulence, and show what the typical manifestations of intermittency are. In the second part, I will explain how one can derive exact asymptotic (i.e. at large wavenumbers) properties of the correlation functions in the turbulent state, using a field-theoretic approach, based on the Non-Perturbative Renormalisation Group, and compare them to numerical simulations and experiments.
Friday 6 July 2018, 10:00 at IPHT, Salle Claude Itzykson, Bât. 774 ( https://courses.ipht.cnrs.fr/?q=fr/node/198 ) COURS (Cours) astro-ph|hep-ph Marco Cirelli ( LPTHE Paris ) Dark matter phenomenology (4/5) Abstract: (TBA) Attachments: 2017-2018.pdf (4503344 bytes)
Tuesday 10 July 2018, 14:00 at APC, 483 A - Malevitch APC-TH (Seminar of the theory group of APC) hep-th Stéphane Lavignac ( IPhT ) TBA
Friday 13 July 2018, 10:00 at IPHT, Salle Claude Itzykson, Bât. 774 ( https://courses.ipht.cnrs.fr/?q=fr/node/198 ) COURS (Cours) astro-ph|hep-ph Marco Cirelli ( LPTHE Paris ) Dark matter phenomenology (5/5) Abstract: (TBA) Attachments: 2017-2018.pdf (4503344 bytes)
http://mathoverflow.net/questions/98171/what-is-the-relation-between-quasicrystals-riemann-hypothesis-and-pv-numbers?sort=votes
# What is the relation between Quasicrystals, Riemann Hypothesis, and PV numbers?
Could somebody explain to me, from a mathematical stand-point, what is a quasi-crystal, and how it relates to the set of Pisot numbers, and the Riemann Hypothesis?
I've heard Freeman Dyson say that the zeros of the Riemann zeta function form a quasi-crystal. But, a priori, I do not see what kind of property of the zeros, that we currently know of, would be able to confer to them more structure than to a random set of isolated numbers.
(Notwithstanding the explicit formula in prime number theory)
To wit, my second question possibly based on a misunderstanding: why is the set of zeros of $\zeta(s)$ a quasi-crystal, while a random sequence of isolated numbers is not? Of course, I first need to fully understand what is a quasi-crystal, because Freeman's definition left me in a fog.
-
Inquiring minds want to know. – kolik May 28 '12 at 7:08
What makes you think Pisot numbers relate to quasi-crystals and/or the Riemann Hypothesis? Did Dyson say something about those, too? – Gerry Myerson May 28 '12 at 7:21
en.wikipedia.org/wiki/Quasicrystal. Please try Wikipedia before posting here. – Charles Matthews May 28 '12 at 10:37
Dyson's definition of a quasicrystal is not equivalent to the one in Wikipedia. – Misha May 28 '12 at 13:35
@Charles: The author of the Wikipedia article is not a mathematician, so he/she does not understand the difference between the words "define" and "construct." – Misha May 28 '12 at 19:01
Freeman Dyson's proposal is online, based on a talk he gave at MSRI.
Lillian Pierce's senior thesis gives a summary of Peter Sarnak's program to use properties of Gaussian Unitary Ensemble to study the zeros of the Riemann Zeta function.
N. G. de Bruijn wrote about Penrose tilings and their Fourier transforms.
Crystalline structures on the line are pretty boring. They are just evenly spaced lattices, like $\mathbb{Z}$, which might appear on different scales.
--o---o---o---o---o---o---o--
---o-----o-----o-----o-----o-
However, there are many quasi-periodic structures on the line, for example $\lfloor n\sqrt{2}\rfloor = \{ 1, 2, 4, 5, 7, 8, 9, 11, 12, 14,\dots \}$ which we can draw on the line.
--o--o-----o--o-----o--o--o-----o--o-----o--
Many of these have special recursive properties. Consider the line $y = \frac{1 + \sqrt{5}}{2} x$, which has golden-ratio slope. Mark "0" each time it crosses a horizontal grid line and "1" each time it crosses a vertical one. You get the Fibonacci word. Of course, in 2D you get more interesting quasicrystals, which have interesting number-theoretic and recursive structures.
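(Editorial aside, not part of the original answer: the two constructions above are easy to play with. The Python sketch below marks the line at the Beatty sequence $\lfloor n\alpha\rfloor$ and builds the Fibonacci word by the standard substitution $0\to01$, $1\to0$, which generates the cutting sequence of the golden-ratio line; the lengths and iteration counts are arbitrary choices.)

```python
# Sketch: quasi-periodic vs periodic markings of the line, and the Fibonacci word.
import math

def beatty_marks(alpha, length=45):
    """ASCII picture with 'o' at the integer positions floor(n*alpha)."""
    marks = {math.floor(n * alpha) for n in range(1, length)}
    return "".join("o" if i in marks else "-" for i in range(1, length))

def fibonacci_word(iterations=8):
    """Iterate the substitution 0 -> 01, 1 -> 0 starting from '0'."""
    word = "0"
    for _ in range(iterations):
        word = "".join("01" if c == "0" else "0" for c in word)
    return word

print(beatty_marks(math.sqrt(2)))   # quasi-periodic: never repeats exactly
print(beatty_marks(3.0))            # an ordinary (periodic) lattice, for comparison
print(fibonacci_word()[:40])        # starts 0100101001001...
```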
Freeman Dyson hopes that the zeros of the Riemann zeta function have structure like these.
-
@John: What your definition of quasi-periodicity? As far as I understand the question, one issue is lack of precise definitions in the quasicrystal literature, which is dominated by physics papers. Also, your 2nd link is broken. – Misha May 28 '12 at 18:37
See the newer question: mathoverflow.net/questions/133581/… – Carl Jun 13 '13 at 13:56
http://mathoverflow.net/questions/22299/what-are-some-examples-of-colorful-language-in-serious-mathematics-papers/44992
# What are some examples of colorful language in serious mathematics papers? [closed]
The popular MO question "Famous mathematical quotes" has turned up many examples of witty, insightful, and humorous writing by mathematicians. Yet, with a few exceptions such as Weyl's "angel of topology," the language used in these quotes gets the message across without fancy metaphors or what-have-you. That's probably the style of most mathematicians.
Occasionally, however, one is surprised by unexpectedly colorful language in a mathematics paper. If I remember correctly, a paper of Gerald Sacks once described a distinction as being
as sharp as the edge of a pastrami slicer in a New York delicatessen.
Another nice one, due to Wilfred Hodges, came up on MO here.
The reader may well feel he could have bought Corollary 10 cheaper in another bazaar.
What other examples of colorful language in mathematical papers have you enjoyed?
-
## closed as off topic by Loop Space, Felipe Voloch, Kevin Buzzard, Mark Sapir, quid Dec 25 '11 at 19:17
Latest paper, my co-author put in "but we will choose a more painful way, because there is nothing like pain for feeling alive" but the referee jumped on it. – Will Jagy Apr 23 '10 at 5:09
Maybe I should expand the question to include colorful language cut from serious mathematics papers :) – John Stillwell Apr 23 '10 at 5:18
By the way, your remark reminds me of another in a similar spirit that made it into the Princeton Companion. In his article on algebraic geometry, János Kollár says of stacks: "Their study is strongly recommended to people who would have been flagellants in earlier times." – John Stillwell Apr 23 '10 at 7:49
I was actually rather surprised recently by a referee who did not know the phrase “red herring”, and had to look it up. He insisted that we change it to something more understandable. It makes me wonder how much “colourful” language is weeded out by referees, and whether the mathematical literature is poorer for it. – Harald Hanche-Olsen Apr 24 '10 at 2:31
@Harald: If you intend your mathematical papers to be read by a wide range of readers, then write them in simple language, suitable for those who are relative beginners in English. I remember reading long ago some metaphoric phrase in a mathematics research paper, then imagining students all over the world getting out their English dictionaries, looking it up, and still not understanding what it meant. (I no longer remember what the phrase was, just this reaction to it.) – Gerald Edgar Apr 24 '10 at 15:43
From Jim Stasheff's Homotopy Associativity of H-spaces I, the magisterial-sounding
To study spaces which admit $A_n$-structures, we can work directly with the maps…. In the case of a topological group, this amounts to working only with the classifying bundle and never mentioning group operations. This would be an exercise in rectitude of thought of which it would be pointless to countenance the austerity, for not only would it eliminate a useful perspective on the subject, but, by disguising its own main point, it would place the reader beneath a cloud of unknowing.
Note 1: this is partly a subtle dig at Claude Chevalley's Fundamental Concepts of Algebra, whose preface ends, "Secondly, that one of the important pedagogical problems which a teacher of beginners in mathematics has to solve is to impart to his students the technique of rigorous mathematical reasoning; this is an exercise in rectitude of thought, of which it would be futile to disguise the austerity."
Note 2: Stasheff is exhibiting his awareness of religious literature (The Cloud of Unknowing is a 14th century work of Christian mysticism, written in Middle English).
-
Spivak, A Comprehensive Introduction to Differential Geometry, Volume 1, p.94,
Now that we have a well-defined bundle map $TM \to T'M$ (the union of all $\beta_x^{-1} \circ \alpha_x$), it is clearly an equivalence $e_M$. The proof that $e_N \circ f_* = f_\# \circ e_M$ is left as a masochistic exercise for the reader.
Volume 3, p. 103, indexed under "Idiot, any,"
These normalizations are usually carried out with hardly a word of motivation, as if they are so natural that any idiot would immediately think of doing them—in reality, of course, the authors already knew what results they wanted, since they were simply reformulating a classical theory.
From Volume 5, p.59,
We are going to begin by deriving certain classical PDE's which describe important (somewhat idealized) physical situations. The word "derive" had better be taken with a hefty grain of salt, however. What I have really tried to do is give plausible reasons why the physical situations should be governed by those PDE's which the physicists have agreed upon. I've never really been able to understand which parts of the standard derivations are supposed to be obvious, which are mathematically simplifying assumptions, which steps are supposed to correspond to empirically discovered physical laws, or even what all the words are supposed to mean.
Incidentally, Spivak gave an entertaining series of lectures on the subject of classical mechanics, whence
I haven't the slightest idea what any of this means! But I'm almost certain that it amounts to the similarity argument we have given. Aren't you glad that you aren't a mathematician of the 17th century!?
-
You might want to read http://www.matem.unam.mx/~magidin/lenstra.html for hilarious language during lecturing.
-
At the end of the introduction to Spin Glasses: a challenge for mathematicians, Michel Talagrand writes:
It is customary for authors, at the end of an introduction, to warmly thank their spouse for having granted them the peaceful time needed to complete their work. I find that these thanks are far too universal and overly enthusiastic to be believable. Yet, I must say that in the present case even what would sound for the reader as exaggerated thanks would not truly reflect the extraordinary privileges I have enjoyed. Be jealous, reader, for I yet have to hear the words I dread the most: "Now is not the time to work".
-
In a paper of F. A. Muller — Sets, Classes and Categories — Solomon Feferman is cited:
I realise that workers in category-theory are so at home in their subject that they find it more natural to think in category-theoretic rather than set-theoretical terms, but I would liken this to not needing to hear once one has learned to compose music.
Colin McLarty in Learning from Questions on Categorical Foundations does mention this, too.
[Feferman 1977] S., 'Categorical Foundations and Foundations of Category Theory', in Logic, Foundations of Mathematics and Computability Theory, R.E. Butts & J. Hintikka (eds.), Dordrecht: D. Reidel, 1977; pp.149-169
-
I confess to such an 'ailment'. But a lot of my work is internal to categories other than Set, so I have no choice, really... – David Roberts Aug 28 '11 at 22:39
@David: Please don't feel offended, it's about colorful language, not about category theory. – Hans Stricker Aug 28 '11 at 23:07
Still, Feferman is quite mistaken, I believe. Categorists, like other mathematicians, won't hesitate to think in set-theoretic terms if that is what works best in a given situation. – Todd Trimble Dec 13 '11 at 6:41
Diaconis and Efron wrote a paper "Testing for Independence in a Two-Way Table: New Interpretations of the Chi-Square Statistic" that was followed by 10 papers discussing their suggestion. The following is from Diaconis and Efron's rejoinder:
The critical paper that they refer to starts with splendid colorful language:
Update: This is an additional answer too good to be missed.
-
I've always marveled that the abbreviated terminology for "thickenings of the corresponding special Lagrangian" on the bottom of page 26 of this paper of Richard Thomas made it into print:
http://xxx.soton.ac.uk/abs/math.DG/0104196
-
That's an example of colourful language, not colorful language :) – François G. Dorais Apr 23 '10 at 19:55
He was inspired by the following famous UK comic: (en.wikipedia.org/wiki/The_Fat_Slags) I saw him give a talk on the subject once. When the phrase came up all the English people in the audience laughed and everyone else looked around with very confused expressions on their faces. – Joel Fine Apr 24 '10 at 8:14
This is more colloquial than you think! The Fat Slags are a pair of well-known cartoon characters from Viz magazine. Given that he's a Brit, it's surely a reference to them. – Kevin Buzzard Apr 24 '10 at 8:20
Math Reviews used to be much more colorful. In the 1950s, Haefliger was working on groupoids, developing a lot of what is now fundamental in the theory of stacks. Palais reviewed a 1958 paper of Haefliger's, concluding with,
The first four chapters of the paper are concerned with an extreme, Bourbaki-like generalization of the notion of foliation. After some twenty-five pages and several hundred preliminary definitions, the reader finds that a foliation of $X$ is to be an element of the zeroth cohomology space of $X$ with coefficients in a certain sheaf of groupoids. Holonomy, the Reeb-Ehresmann stability theorems, etc., are then generalized to this setting. While such generalization has its place and may in fact prove useful in the future, it seems unfortunate to the reviewer that the author has so materially reduced the accessibility of the results, mentioned above, of Chapter V, by couching them in a ponderous formalism that will undoubtedly discourage many otherwise interested readers.
-
I don't think I'd consider this language colorful so much as grumpy and annoyed. I'm mildly curious whether Palais would feel at all differently today. – Todd Trimble Dec 16 '12 at 13:57
I came across this little gem when preparing for a talk on Kakeya sets and the ball multiplier problem, found on page 437 of E. Stein's Harmonic analysis: real-variable methods, orthogonality, and oscillatory integrals:
We will use this process to generate our monster, which will have a tiny heart and many arms.
-
I don't agree with this quote by Errett Bishop (a constructivist who developed real analysis along constructive lines), but I admire its brio:
Mathematics belongs to man, not to God. We are not interested in properties of the positive integers that have no descriptive meaning for finite man. When a man proves a positive integer to exist, he should show how to find it. If God has mathematics of his own that needs to be done, let him do it himself.
It's an odd spin on that famous Kronecker quote about the integers and God.
-
R.Coleman writing about the Dwork Principle in Section III of "Dilogarithms, Regulators and $p$-adic $L$-functions":
"Rigid analysis was created to provide some coherence in an otherwise totally disconnected $p$-adic realm. Still, it is often left to Frobenius to quell the rebellious outer provinces".
-
Number theorist Andrew Granville wrote a paper called "Prime number races" in which he studies the "race" between prime numbers $\equiv$ 1 (mod 4) and prime numbers $\equiv$ 3 (mod 4). The introduction is most certainly a colorful one:
There’s nothing quite like a day at the races...The quickening of the pulse as the starter’s pistol sounds, the thrill when your favorite contestant speeds out into the lead (or the distress if another contestant dashes out ahead of yours), and the accompanying fear (or hope) that the leader might change. And what if the race is a marathon? Maybe one of the contestants will be far stronger than the others, taking the lead and running at the head of the pack for the whole race. Or perhaps the race will be more dramatic, with the lead changing again and again for as long as one cares to watch. Our race involves the odd prime numbers, separated into two teams depending on the remainder when they are divided by 4:
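(Editorial aside, not from Granville's paper or the original answer: a minimal Python sketch of the race the quote describes, counting odd primes up to an arbitrary bound in the two residue classes mod 4.)

```python
# Sketch: the mod-4 prime race (Chebyshev's bias) up to an arbitrary bound N.
def primes_up_to(n):
    sieve = bytearray([1]) * (n + 1)
    sieve[:2] = b"\x00\x00"
    for p in range(2, int(n ** 0.5) + 1):
        if sieve[p]:
            sieve[p * p :: p] = bytearray(len(sieve[p * p :: p]))
    return [i for i, flag in enumerate(sieve) if flag]

N = 100_000
team1 = sum(1 for p in primes_up_to(N) if p % 4 == 1)
team3 = sum(1 for p in primes_up_to(N) if p % 4 == 3)
print(team1, team3)   # the "3 mod 4" team is usually, but not always, ahead
```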
-
The following is taken from the paper "Rational points near curves and small nonzero $|x^3-y^2|$ via lattice reduction" by Noam Elkies. It was discussed in a previous MO question.
Citing The Simpsons is rather surprising, and I wonder what the story behind it is.
-
According to en.wikipedia.org/wiki/Alt.tv.simpsons "The writers also use the newsgroup to test how observant the fans are. In the seventh season episode "Treehouse of Horror VI", the writer of segment Homer3, David S. Cohen, deliberately inserted a false equation into the background of one scene. The equation that appears is $1782^{12} + 1841^{12} = 1922^{12}$." – Gerry Myerson Apr 7 '11 at 7:25
More Weyl, all Mancosu's translation, all in his fierce days advocating Brouwer's mathematics:
Weyl (1921) On the New Foundational Crisis of Mathematics,
It must have the effect of a deliverance from a nightmare for whoever has maintained any sense for intuitively given facts in the abstract formalism of mathematics.
Weyl (1925) The current epistemological situation in mathematics:
At set theory's outermost borders, blurred in fog, crevices (i.e., flagrant contradictions) soon appeared.
and ibid, of the intuitionistic conception of the continuum:
The ice cover was burst into floes, and now the element of flux was soon altogether master over the solid.
Though these were published in mathematical journals, they are maybe not what the question was after, since they are not part of normal mathematical exposition.
-
In Jacquet and Langlands' "Automorphic forms on GL(2)", page 154, they discuss a construction which uses some choices of intermediate objects -- of course the question whether the final result depends on those choices comes up ; here is how they treat it :
We prefer to pretend that the difficulty does not exist. As a matter of fact for anyone lucky enough not to have been indoctrinated in the functorial point of view it doesn’t.
-
This is perhaps more of a silly play on words than colourful, but I still got a laugh out of it. One page 58 of Conway's 'The sensual (quadratic) form' while discussing Kneser's gluing method a sentence begins:
To further illuminate the utility of glue, ...
-
I have the book but I don't get the pun, and I feel the lesser for it. Could you please explain it, if not in comments or answers here then, say, in your MO "profile" autobiography field or in email to me? – Will Jagy Apr 24 '10 at 4:08
It's likely that I have a very dry sense of humour. But, if Conway was being formal he would write "To further illuminate the utility of the gluing method,..". I can't help but feel that it is written the way it is quite deliberately. – Robby McKilliam Apr 24 '10 at 21:48
I think I see, and I agree that it was deliberate. I was looking for song titles that rhymed, as "Cupidity Fondue," "Venality of You," "Morality Imbue." – Will Jagy Apr 24 '10 at 22:50
Fulton and Harris's "Representation Theory" has a few examples of colourful language. Two of my favorites:
In recent work their* Lie-theoretic origins have been exploited to produce their representations, but to tell their story would go far beyond the scope of these lecture(r)s.
*: The finite Chevalley groups.
Any mathematician, stranded on a desert island with only these ideas and the definition of a particular Lie algebra $\mathfrak{g}$ such as $\mathfrak{sl}_n \mathbb{C}$, $\mathfrak{so}_n \mathbb{C}$, or $\mathfrak{sp}_n \mathbb{C}$, would in short order have a complete description of all the objects defined above in the case of $\mathfrak{g}$. We should say as well, however, that at the conclusion of this procedure we are left without one vital piece of information about the representations of $\mathfrak{g}$ ... this is, of course, a description of the multiplicities of the basic representations $\Gamma_a$. As we said, we will, in fact, describe and prove such a formula (the Weyl character formula); but it is of a much less straightforward character (our hypothetical shipwrecked mathematician would have to have what could only be described as a pretty good day to come up with the idea) and will be left until later.
-
I like the following from the Introduction of Iwaniec-Kowalski: Analytic number theory (AMS, 2004):
Poisson summation for number theory is what a car is for people in modern communities – it transports things to other places and it takes you back home when applied next time – one cannot live without it.
This is not the only good one in that introduction, I let you find the others!
-
Here is a colorful rejoinder by D. Zagier (in his reprinted article on the dilogarithm) to colorful language by Ph. Elbaz-Vincent and H. Gangl:
[Ph. Elbaz-Vincent and H. Gangl] called these functions "polyanalogs," an amalgam of the words "analogue," "polylog," and "pollyanna" (an American term suggesting exaggerated or unwarranted optimism). Presumably the correct term for the case $m=2$ would then be "dianalog," which has a pleasing British flavo(u)r.
-
Two from Casselman's "A companion to Macdonald's book on p-adic spherical functions":
The word ‘épingler’ means ‘to pin’, and the image that comes to mind most appropriately is that of a mounted butterfly specimen. ([Kottwitz:1984] uses ‘splitting’ for what most call ‘épinglage’, but this is not compatible with the common use of ‘déploiement’, the usual French term for ‘splitting’.) Ian Macdonald, among others, has suggested that retaining the French word épinglage in these notes is a mistake, and that it should be replaced by the usual translation ‘pinning.’ This criticism is quite reasonable, but I rejected it as leading to noncolloquial English. The words ‘pinning’ as noun and ‘pinned’ as adjective are commonly used only to refer to an item of clothing worn by infants, and it just didn’t sound right.
and
These phenomena are part of what Langlands calls endoscopy, a word that might be roughly justified by saying that endoscopy is concerned with some fine aspects of the structure of harmonic analysis on a reductive p-adic group. Langlands attributes the term to Avner Ash, praising his classical knowledge, but I was pleased to find recently the following quotation that shows a more vulgar intrusion of endoscopy into the modern world:
Jeeves: “ . . . I had no need of the endoscope.”
Bertie: “The what?”
Jeeves: “Endoscope, sir. An instrument which enables one to peer into the . . . interior and discern the core.”
From Chapter 12 of Jeeves and the feudal spirit by P. G. Wodehouse.
This discussion is about distinguishing fake jewelry from real. Since the endoscope also has medical uses, one could imagine an even more vulgar usage.
He has modified the notes several times so these might not be there anymore, but I have the older copies =)
-
My girlfriend is a surgeon and once a month our copy of "Endoscopy" drops through the post box. I tried to out-do her recently by sitting on the sofa reading a paper of Waldspurger about "twisted endoscopy" and she suggested he was doing it wrong. – Kevin Buzzard Apr 24 '10 at 8:22
You made the effort, that's what counts in the end. – Will Jagy Apr 24 '10 at 19:05
Edward Nelson, Predicative Arithmetic, p. 50:
The intuition that the set of all subsets of a finite set is finite -- or more generally, that if $A$ and $B$ are finite sets, then so is the set $B^A$ of all functions from $A$ to $B$ -- is a questionable intuition. Let $A$ be the set of some $5000$ spaces for symbols on a blank sheet of typewriter paper, and let $B$ be the set of some $80$ symbols of a typewriter; then perhaps $B^A$ is infinite. Perhaps it is even incorrect to think of $B^A$ as being a set. To do so is to postulate an entity, the set of all possible typewritten pages, and then to ascribe some kind of reality to this entity -- for example, by asserting that one can in principle survey each possible typewritten page. But perhaps it simply is not so. Perhaps there is no such number as $80^{5000}$; perhaps it is always possible to write a new and different page. Many ordinary activities are built up in a similar way from a rather small set of symbols or actions. Perhaps infinity is not far off in space or time or thought; perhaps it is while engaged in an ordinary activity -- writing a page, getting a child ready for school, talking with someone, teaching a class, making love -- that we are immersed in infinity.
-
Having just noticed this, I am rather disturbed by the thought that out there, somewhere, someone is looking into another person's eyes and asking "do you want to immerse yourself in infinity?" – Yemon Choi Jun 14 '11 at 21:40
Or even using it as a line in a bar, heaven forfend... – Yemon Choi Jun 14 '11 at 21:41
"Now life is too short to work over the integers all of the time, ..."
J. Morava, On the complex cobordism ring as a Fock representation.
-
There is the following apocryphal dedication of a doctoral thesis:
"I am deeply grateful to Professor X, whose wrong conjectures and fallacious proofs led me to the theorems he had overlooked."
In fact this is a description of excellent supervision, in giving confidence to a student!
-
@Yemon: It is told of Pontrjagin that his students gradually realised he had already solved the suggested problem , and this was very offputting. The image of "line manager" is false. A supervisor can suggest a good area in which the student might make some progress, and also to show by example how to cope with failure. "In research, the secret of success is the successful management of failure!" Also, one key question is after failure:"Why did I think this might be a good idea?" Others are: "What are the fall back positions? What are the fall forward positions?" How to manage risk? – Ronnie Brown Nov 4 '12 at 11:00
This quote is taken from the paper "How to write a proof" by Leslie Lamport. The paper is about a system for writing mathematical proofs in a more formal way. (Of course I do not share the opinion expressed in this paragraph.)
-
In what way is this language colorful? It's a strongly expressed opinion, but that doesn't make it colorful. – Todd Trimble Dec 16 '12 at 15:18
According to the book "King of Infinite Space" Coxeter, "tickled his readers with unexpected turns of phrase such as":
... dividing the product of the first three expressions by the product of the last two, and indulging in a veritable orgy of cancellation, we obtain ...
-
I am rather fond of Sylvester's "Aspiring to these wide generalizations, the analysis of quadratic functions soars to a pitch from whence it may look proudly down on the feeble and vain attempts of geometry proper to rise to its level or to emulate it in its flights." (1850)
-
Although the article itself is standard, I've always been fond of the title (and contents) of the Burstall & Hertrich-Jeromin paper Harmonic maps in unfashionable geometries.
-
I just came across a paper of Waldhausen (On Irreducible 3-manifolds Which are Sufficiently Large) where he says "Frequently, a proof involves a sequence of constructions, each of which in turn involves alterations of some things. To avoid an orgy of notation in such cases, we often denote the altered things by the old symbols."
-
In the huge and austere book "Groupes algébriques" by M. Demazure and P. Gabriel we find in the last pages a "Dictionaire "Fonctoriel"", a dictionary of terms related to category theory where they have:
Satellite - Voir Cartan-Eilenberg et non Paris-Match. ("Satellite - see Cartan-Eilenberg, not Paris-Match.")
-
André Weil uses some very colourful language in the introduction of his 1946 book Foundations of Algebraic Geometry. I recommend any mathematician to read it. Here are some excerpts:
"As in other kinds of war, so in this bloodless battle with an ever retreating foe which it is our good luck to be waging, it is possible for the advancing army to outrun its services of supply and incur disaster unless it waits for the quartermaster to perform his inglorious but indispensable task."
"Of course every mathematician has a right to his own language---at the risk of not being understood; and the use sometimes made of this right by our contemporaries almost suggests that the same fate is being prepared for mathematics as once befell, at Babel, another of man's great achievements."
"... however grateful we algebraic geometers should be to the modern algebraic school for lending us temporary accommodation, makeshift constructions full of rings, ideals and valuations, in which some of us feel in constant danger of getting lost, our wish and aim must be to return at the earliest possible moment to the palaces which are ours by birthright, to consolidate shaky foundations, to provide roofs where they are missing, to finish, in harmony with the portions already existing, what has been left undone."
"...it is hoped that these may be helpful to the reader, to whom the author, having acted as his pilot until this point, heartily wishes Godspeed on his sailing away from the axiomatic shore, further and further into open sea."
-
https://www.physicsforums.com/threads/classical-mechanics-problem.309276/
# Classical Mechanics Problem
## Homework Statement
Give the equations of motion of the following system:
http://www.jelp.org/imagenes/mech.jpg
## Homework Equations
So, I assume the following cases (the diagram is so deficient).
1) Black Point (x1) is fixed
2) There's a force applied at x1 (black dot)
3) The position of the black dot is a function of time f(t).
## The Attempt at a Solution
For 1) I think the equation of motion is:
$m\ddot{x} + kx + G\dot{x} = 0$
It's a damped harmonic oscillator, right?
2) If a force f(t) is applied to the system, then we have:
$m\ddot{x} + kx + G\dot{x} = F(t)$
like the damped and driven harmonic oscillator, am I right?
3) If, for example, the full system (the damper too) is moving to the left (i.e. negative X).
For this, I tried the following:
The position of the mass is $X = f(t) + L + l_0$, where L is the natural length of the spring and $l_0$ is the elongation or stretch of the spring. Then, taking the time derivatives of X and substituting in the damped harmonic oscillator equation (Case 1), I get to:
$m\ddot{f} + k(f(t) + L + l_0) + G\dot{f} = 0$
Please tell me if I am right in the whole problem. Thanks.
I've not done this for a while, but resolve the forces around the mass M.
I get something like (can't get the LaTeX thing working, so decipher as needed!)
$m\ddot{x}_2 - G\dot{x}_2 + k(x_1 - x_2) = 0$
Then you know $x_1$ is a function of time, $f(t)$,
so this can be substituted in for $x_1$, I believe;
then rearrange so $f(t)$ is the subject.
I may be slightly wrong, but I had a go :)
P.S. The force from the spring is $k(x_1 - x_2)$,
which is the bit that may have confused you.
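(Editorial sketch, not part of the thread: a quick numerical sanity check of case 2 above, $m\ddot{x} + G\dot{x} + kx = F(t)$, integrated with SciPy as a first-order system; the parameter values and the driving force are made-up illustrations.)

```python
# Sketch: integrate m*x'' + G*x' + k*x = F(t) numerically (all values are made up).
import numpy as np
from scipy.integrate import solve_ivp

m, G, k = 1.0, 0.3, 4.0                  # mass, damping coefficient, spring constant
F = lambda t: 0.5 * np.cos(1.8 * t)      # an assumed external driving force

def rhs(t, y):
    x, v = y
    return [v, (F(t) - G * v - k * x) / m]

sol = solve_ivp(rhs, (0.0, 40.0), [0.1, 0.0], dense_output=True)
t = np.linspace(0.0, 40.0, 400)
x = sol.sol(t)[0]
print("late-time amplitude:", float(np.max(np.abs(x[-100:]))))
```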
http://projecteuclid.org/euclid.aop/1019160260
## The Annals of Probability
### Conformal invariance of domino tiling
Richard Kenyon
#### Abstract
Let $U$ be a multiply connected region in $\mathbf{R}^2$ with smooth boundary. Let $P_\epsilon$ be a polyomino in $\epsilon\mathbf{Z}^2$ approximating $U$ as $\epsilon \to 0$. We show that, for certain boundary conditions on $P_\epsilon$, the height distribution on a random domino tiling (dimer covering) of $P_\epsilon$ is conformally invariant in the limit as $\epsilon$ tends to 0, in the sense that the distribution of heights of boundary components (or rather, the difference of the heights from their mean values) only depends on the conformal type of $U$. The mean height is not strictly conformally invariant but transforms analytically under conformal mappings in a simple way. The mean height and all the moments are explicitly evaluated.
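(Editorial aside, not from the paper: the random domino tilings in question can be counted exactly by the classical Kasteleyn / Temperley-Fisher product formula; the short Python sketch below evaluates it for a few small rectangles, with the test sizes chosen arbitrarily.)

```python
# Sketch: number of domino tilings (dimer coverings) of an m x n rectangle,
# via the classical Kasteleyn / Temperley-Fisher product formula.
import math

def domino_tilings(m, n):
    if (m * n) % 2:
        return 0                          # odd area: no tiling exists
    prod = 1.0
    for j in range(1, math.ceil(m / 2) + 1):
        for k in range(1, math.ceil(n / 2) + 1):
            prod *= (4 * math.cos(math.pi * j / (m + 1)) ** 2
                     + 4 * math.cos(math.pi * k / (n + 1)) ** 2)
    return round(prod)

print(domino_tilings(2, 3))   # 3
print(domino_tilings(4, 4))   # 36
print(domino_tilings(8, 8))   # 12988816 (the 8 x 8 chessboard)
```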
#### Article information
Source
Ann. Probab. Volume 28, Number 2 (2000), 759-795.
Dates
First available in Project Euclid: 18 April 2002
http://projecteuclid.org/euclid.aop/1019160260
Digital Object Identifier
doi:10.1214/aop/1019160260
Mathematical Reviews number (MathSciNet)
MR1782431
Zentralblatt MATH identifier
01905938
#### Citation
Kenyon, Richard. Conformal invariance of domino tiling. Ann. Probab. 28 (2000), no. 2, 759--795. doi:10.1214/aop/1019160260. http://projecteuclid.org/euclid.aop/1019160260.
https://math.stackexchange.com/questions/809608/how-to-construct-a-quasi-category-from-a-category-with-weak-equivalences
# How to construct a quasi-category from a category with weak equivalences?
Let $(\mathcal{C},W)$ be a pair with $\mathcal{C}$ a category and $W$ a wide (containing all objects) subcategory. Such a pair represents an $(\infty,1)$-category. One model for such gadgets is a quasi-category (a simplicial set satisfying the weak Kan extension condition). What is the direct procedure that constructs such a quasi-category from $(\mathcal{C},W)$? (I don't mind assuming that $(\mathcal{C},W)$ is part of a model structure if it simplifies things.)
I can do it indirectly. For example, given a model category, one can use the Dwyer-Kan technology to construct a simplicial category (by simplicial localization, hammock localization or whatever), apply fibrant replacement in the Bergner model structure for simplicial categories (i.e. making the mapping complexes Kan) and then take the simplicial nerve (Lurie, HTT 1.1.5). Another way is to construct a complete Segal space by Rezk's nerve construction and then take the zero row (note that this involves a fibrant replacement in the Reedy model structure). Both methods are quite complicated and I would like to know a more explicit construction. In particular, I would like to understand what, say, the 0-, 1- and 2-simplices of the resulting quasi-category are.
• There is no simple procedure. The question ultimately boils down to, yes, fibrant replacement of one kind or another. (You missed that step with simplicial categories.) Fibrant replacement in the Bergner model structure is not so bad, at least – $\mathrm{Ex}^{\infty}$ (or any other fibrant replacement in $\mathbf{sSet}$ that preserves finite products) is all you need to construct fibrant replacements of simplicial categories. – Zhen Lin May 26 '14 at 7:27
• A completely different approach is to take the pushout of $N (\mathcal{W}) \hookrightarrow N (\mathcal{C})$ along $N (\mathcal{W}) \to \mathrm{Ex}^{\infty} (N (\mathcal{W}))$ and then take a fibrant replacement of that in the Joyal model structure. That will be, in some sense, the result of inverting $\mathcal{W}$ in $\mathcal{C}$ as a quasicategory. But I have no idea how to show that this is equivalent to the usual constructions. – Zhen Lin May 26 '14 at 7:30
• Thanks for the correction (fixed). Is there at least a simple description of the low dimensional simplexes? – KotelKanim May 26 '14 at 9:01
• Not really. The procedures you describe are only well-defined up to weak categorical equivalence. But there is an explicit fibrant replacement in the Bergner model structure, and hammock localisation is also explicit, so in principle you could get an explicit description of that. – Zhen Lin May 26 '14 at 9:10
• Well, I meant to ask whether there is a construction for which the low dimensional simplexes can be described explicitly. – KotelKanim May 26 '14 at 12:44
## 2 Answers
Every step in the following procedure is explicit, if somewhat complicated:
1. Construct the hammock localisation $L^H (\mathcal{C}, \mathcal{W})$. (See [Dwyer and Kan, Calculating simplicial localizations] for details.)
2. Apply $\mathrm{Ex}^\infty$ to every hom-space of $L^H (\mathcal{C}, \mathcal{W})$; this yields a fibrant simplicially enriched category $\widehat{L^H} (\mathcal{C}, \mathcal{W})$ because $\mathrm{Ex}^\infty$ preserves finite products, and the natural weak homotopy equivalence $\mathrm{id} \Rightarrow \mathrm{Ex}^\infty$ yields a Dwyer–Kan equivalence $L^H (\mathcal{C}, \mathcal{W}) \to \widehat{L^H} (\mathcal{C}, \mathcal{W})$. (See [Kan, On c.s.s. complexes] for details.)
3. Take the homotopy-coherent nerve of $\widehat{L^H} (\mathcal{C}, \mathcal{W})$ to get a quasicategory $\hat{N} (\mathcal{C}, \mathcal{W})$. (See [Cordier and Porter, Vogt's theorem on categories of homotopy coherent diagrams] for details.)
Let me make a few remarks to get you started.
• The objects in $L^H (\mathcal{C}, \mathcal{W})$ are the same as the objects in $\mathcal{C}$, and the morphisms are "reduced" zigzags of morphisms in $\mathcal{C}$.
• The natural weak homotopy equivalence $X \to \mathrm{Ex}^\infty (X)$ is bijective on vertices, so the Dwyer–Kan equivalence $L^H (\mathcal{C}, \mathcal{W}) \to \widehat{L^H} (\mathcal{C}, \mathcal{W})$ is actually an isomorphism of the underlying ordinary categories.
• The vertices (resp. edges) of $\hat{N} (\mathcal{C}, \mathcal{W})$ are the objects (resp. morphisms) in $\widehat{L^H} (\mathcal{C}, \mathcal{W})$, which are the same as the objects (resp. morphisms) in $L^H (\mathcal{C}, \mathcal{W})$.
The 2-simplices of $\hat{N} (\mathcal{C}, \mathcal{W})$ are harder to describe. Conceptually, they are homotopy-coherent commutative triangles in $\widehat{L^H} (\mathcal{C}, \mathcal{W})$, so they involve a simplicial homotopy in $\widehat{L^H} (\mathcal{C}, \mathcal{W})$; and by thinking about the explicit description of $\mathrm{Ex}^\infty$, the simplicial homotopies in $\widehat{L^H} (\mathcal{C}, \mathcal{W})$ are essentially zigzags of simplicial homotopies in $L^H (\mathcal{C}, \mathcal{W})$, i.e. zigzags of "reduced hammocks of width 1".
• This is the first construction mentioned in the question. – Omar Antolín-Camarena May 29 '14 at 19:22
There is a simple direct procedure to extract a quasicategory from a model category, see Remark 2.8 in Meier's “Model categories are fibrant relative categories”. One simply has to apply the functor $i_1^* N_\xi$, which is a nerve-like construction for relative categories.
https://docs.microsoft.com/en-us/qsharp/api/qsharp/microsoft.quantum.preparation.preparearbitrarystate
# PrepareArbitraryState operation
Namespace: Microsoft.Quantum.Preparation
Package: Microsoft.Quantum.Standard
Warning
Given a set of coefficients and a little-endian encoded quantum register, prepares a state on that register described by the given coefficients.
operation PrepareArbitraryState (coefficients : Microsoft.Quantum.Math.ComplexPolar[], qubits : Microsoft.Quantum.Arithmetic.LittleEndian) : Unit is Adj + Ctl
## Description
This operation prepares an arbitrary quantum state $\ket{\psi}$ with complex coefficients $r_j e^{i t_j}$ from the $n$-qubit computational basis state $\ket{0 \cdots 0}$. In particular, the action of this operation can be simulated by a unitary transformation $U$ which acts on the all-zeros state as
\begin{align} U\ket{0...0} & = \ket{\psi} \\ & = \frac{ \sum_{j=0}^{2^n-1} r_j e^{i t_j} \ket{j} }{ \sqrt{\sum_{j=0}^{2^n-1} |r_j|^2} }. \end{align}
## Input
### coefficients : ComplexPolar[]
Array of up to $2^n$ complex coefficients represented by their absolute value and phase $(r_j, t_j)$. The $j$th coefficient indexes the number state $\ket{j}$ encoded in little-endian format.
### qubits : LittleEndian
Qubit register encoding number states in little-endian format. This is expected to be initialized in the computational basis state $\ket{0...0}$.
## Remarks
Negative input coefficients $r_j < 0$ will be treated as though positive with value $|r_j|$. coefficients will be padded with elements $(r_j, t_j) = (0.0, 0.0)$ if fewer than $2^n$ are specified.
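(Editorial aside, not part of the Q# documentation: the NumPy sketch below builds the normalized amplitude vector from the Description, $\sum_j r_j e^{i t_j} |j\rangle / \sqrt{\sum_j |r_j|^2}$, applying the sign convention stated in the Remarks. The helper name `target_state` and the example coefficients are illustrative only; this is useful mainly for checking simulator output against the intended state.)

```python
# Sketch: the state vector PrepareArbitraryState is described as preparing.
import numpy as np

def target_state(coefficients, num_qubits):
    """coefficients: list of (r_j, t_j) pairs, indexed by little-endian j."""
    amps = np.zeros(2 ** num_qubits, dtype=complex)
    for j, (r, t) in enumerate(coefficients):
        amps[j] = abs(r) * np.exp(1j * t)   # negative r_j treated as |r_j|, per Remarks
    norm = np.linalg.norm(amps)
    return amps / norm if norm > 0 else amps

# Example: equal superposition of |0> and |3> with a relative phase, on 2 qubits.
print(target_state([(1.0, 0.0), (0.0, 0.0), (0.0, 0.0), (1.0, np.pi / 2)], 2))
```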
https://proofwiki.org/wiki/Mathematician:Mathematicians/Sorted_By_Nation/Norway
# Mathematician:Mathematicians/Sorted By Nation/Norway
For more comprehensive information on the lives and works of mathematicians through the ages, see the MacTutor History of Mathematics archive, created by John J. O'Connor and Edmund F. Robertson.
'The army of those who have made at least one definite contribution to mathematics as we know it soon becomes a mob as we look back over history; 6,000 or 8,000 names press forward for some word from us to preserve them from oblivion, and once the bolder leaders have been recognised it becomes largely a matter of arbitrary, illogical legislation to judge who of the clamouring multitude shall be permitted to survive and who be condemned to be forgotten.'
-- Eric Temple Bell: Men of Mathematics, 1937, Victor Gollancz, London
## Norway
##### Caspar Wessel (1745 – 1818)
Norwegian–Danish mathematician and cartographer who, in $1799$, was the first person to describe the geometrical interpretation of complex numbers as points in the complex plane.
##### Niels Henrik Abel (1802 – 1829)
Norwegian mathematician who died tragically young.
Made significant contributions towards algebra, analysis and group theory.
Best known for proving the impossibility of solving the general quintic in radicals (Abel-Ruffini Theorem).
##### Peter Ludwig Mejdell Sylow (1832 – 1918)
Ludwig Sylow was a Norwegian mathematician who established some important facts on the topic of subgroups of prime order.
##### Marius Sophus Lie (1842 – 1899)
Sophus Lie (pronounced Lee) was a Norwegian mathematician famous for his study of continuous transformation groups.
Such objects are now called Lie groups.
##### Viggo Brun (1885 – 1978)
Norwegian mathematician best known for his work in number theory.
##### Thoralf Albert Skolem (1887 – 1963)
Norwegian mathematician who worked mainly in the fields of mathematical logic and set theory.
##### Trygve Nagell (1895 – 1988)
Norwegian mathematician known for his work on Diophantine equations.
##### Øystein Ore (1899 – 1968)
Norwegian mathematician whose work was mainly in graph theory, although also known for his work in ring theory and Galois theory.
One of the early founders of lattice theory.
Also known for writing and editing several books, including a few on various aspects of the history of mathematics.
##### Ingebrigt Johansson (1904 – 1987)
Norwegian mathematician and logician best known for inventing minimal logic.
##### Wilhelm Ljunggren (1905 – 1973)
Norwegian mathematician, specializing in number theory.
##### Atle Selberg (1917 – 2007)
Norwegian mathematician known for his work in analytic number theory, and in the theory of automorphic forms.
Instrumental in developing a proof of the Prime Number Theorem. Engaged in a bitter dispute with Paul Erdős over priority.
https://tex.stackexchange.com/questions/267652/tikz-shapes-not-quite-getting-things-right-anchors-and-keys/267710
# tikz shapes, not quite getting things right (anchors and keys)
So, I'm having a go at defining a tikz/pgf shape. Basically these nodes will not contain any text, and it is just a method of not copying a lot of code to draw a few of these boxes.
I have a few questions
1. In a shape is there an official method of accessing the already defined anchors? I made a macro for it (\pgfutil@useanchor), but thought there might be a better method.
2. The shape itself: is there a better method for drawing these two extra areas? (I'm probably not getting the margins right, I'm just trying to support a thick outer line width.)
3. The keys, especially dbox strib width: why doesn't a change away from the initial value change the extra anchors on the node? See the lower drawing. It does change the \beforebackgroundpath, so I'm clearly doing something wrong here.
Any ideas?
\documentclass[a4paper]{memoir}
\pagestyle{empty}
\usepackage{tikz}
\usetikzlibrary{shapes}
\makeatletter
% access to anchor coordinates, got to be a better way
\def\pgfutil@useanchor#1#2{\csname pgf@anchor@#1@#2\endcsname}
\pgfkeys{/pgf/.cd,
dbox strib width/.initial=5mm,
% dbox strib width/.code={%
% \def\pgf@lib@temp{#1}%
% \pgfkeyslet{/pgf/dbox strib width}{\pgf@lib@temp}%
% },
dbox strib color/.initial=blue,
}
\pgfdeclareshape{dbox}
{
% this is just a rectangle with extra colored areas
\inheritsavedanchors[from=rectangle]
\inheritanchorborder[from=rectangle]
\inheritanchor[from=rectangle]{north}
\inheritanchor[from=rectangle]{north west}
\inheritanchor[from=rectangle]{north east}
\inheritanchor[from=rectangle]{center}
\inheritanchor[from=rectangle]{west}
\inheritanchor[from=rectangle]{east}
\inheritanchor[from=rectangle]{mid}
\inheritanchor[from=rectangle]{mid west}
\inheritanchor[from=rectangle]{mid east}
\inheritanchor[from=rectangle]{base}
\inheritanchor[from=rectangle]{base west}
\inheritanchor[from=rectangle]{base east}
\inheritanchor[from=rectangle]{south}
\inheritanchor[from=rectangle]{south west}
\inheritanchor[from=rectangle]{south east}
\anchor{center left}{
\pgf@process{\pgfutil@useanchor{dbox}{west}}
\advance\pgf@x by \pgfkeysvalueof{/pgf/dbox strib width}
\advance\pgf@x by \pgflinewidth
}
\anchor{center left above}{
\pgf@process{\pgfutil@useanchor{dbox}{north west}}
\advance\pgf@x by \pgfkeysvalueof{/pgf/dbox strib width}
\advance\pgf@x by \pgflinewidth
}
\anchor{center left below}{
\pgf@process{\pgfutil@useanchor{dbox}{south west}}
\advance\pgf@x by \pgfkeysvalueof{/pgf/dbox strib width}
\advance\pgf@x by \pgflinewidth
}
\anchor{center right}{
\pgf@process{\pgfutil@useanchor{dbox}{east}}
\advance\pgf@x by -\pgfkeysvalueof{/pgf/dbox strib width}
\advance\pgf@x by -\pgflinewidth
}
\anchor{center right above}{
\pgf@process{\pgfutil@useanchor{dbox}{north east}}
\advance\pgf@x by -\pgfkeysvalueof{/pgf/dbox strib width}
\advance\pgf@x by -\pgflinewidth
}
\anchor{center right below}{
\pgf@process{\pgfutil@useanchor{dbox}{south east}}
\advance\pgf@x by -\pgfkeysvalueof{/pgf/dbox strib width}
\advance\pgf@x by -\pgflinewidth
}
\beforebackgroundpath{
\pgfsetfillcolor{\pgfkeysvalueof{/pgf/dbox strib color}}
\pgfpathrectanglecorners{
\pgfpointadd{\southwest}{
\pgfpoint{\pgflinewidth}{\pgflinewidth}
}
}{
\pgfpointadd{\pgf@process{\pgfutil@useanchor{dbox}{north west}}}%
{\pgfpoint{\pgflinewidth+\pgfkeysvalueof{/pgf/dbox strib width}}{-\pgflinewidth}}
}
\pgfusepath{fill}
\pgfpathrectanglecorners{
\pgfpointadd{\northeast}{
\pgfpoint{-\pgflinewidth}{-\pgflinewidth}
}
}{
\pgfpointadd{\pgf@process{\pgfutil@useanchor{dbox}{south east}}}%
{\pgfpoint{-\pgflinewidth-\pgfkeysvalueof{/pgf/dbox strib width}}{\pgflinewidth}}
}
\pgfusepath{fill}
}
%
% Background path
%
\inheritbackgroundpath[from=rectangle]
}
\makeatother
\begin{document}
\begin{tikzpicture}
\begin{scope}[
ms/.style = {minimum height=17mm,minimum
width=6cm,draw,fill=cyan,shape=dbox,
dbox strib color=red!50!white,
},
]
\node[ms,
%dbox strib width=1cm,
] (BBb) at (0,2) {};
\node[ms,
dbox strib width=1cm,
] (BBa) at (0,0) {};
\end{scope}
\fill[green] (BBa.center left) circle (1mm);
\fill[green] (BBa.center left above) circle (1mm);
\fill[green] (BBa.center left below) circle (1mm);
\fill[green] (BBa.center right) circle (1mm);
\fill[green] (BBa.center right above) circle (1mm);
\fill[green] (BBa.center right below) circle (1mm);
\end{tikzpicture}
\end{document}
Here is what it looks like right now
• I'm on the phone but in a nutshell the accessible anchors are \savedanchors. See the manual for the nuance. \anchors are only computed during runtime. – percusse Sep 15 '15 at 14:10
• @percusse so I have to compute them using \savedanchor and then set the anchor to point to that macro? How annoying. – daleif Sep 15 '15 at 14:23
• @percusse, hmm, that does not translate very well. At least I cannot replace \anchor{xxx}{ by a simple \savedanchor\xxx{; if I do I get an error in \northeast (which is inherited from rectangle). Weird. Presumably because of the use of \pgf@process – daleif Sep 15 '15 at 15:02
• @percusse, ahh, so because the normal rectangle shape does not set any saved anchors other than what corresponds to south west and north east, one really cannot rely on them in calculations. I'll have to define saved anchors for the anchors I'd like to use in my calculations. (sad smiley) – daleif Sep 15 '15 at 15:11
• Yet another stangeness, can't we use a saved anchor to define another one? – daleif Sep 15 '15 at 15:36
## 2 Answers
Here's a fully worked through example of (what I understand to be) the required shape from first principles (i.e., without inheritance). I exploit the (undocumented) \addtosavedmacro command, which can be used inside a "saved macro" (see \savedmacro in the manual) to define multiple macros at once inside the \getdboxparameters macro.
All the "usual" anchors are defined but the image excludes the anchors based on the node having some text content.
In order to add a fill color it is necessary to add a dbox inner color key.
\documentclass[tikz,border=5]{standalone}
\usetikzlibrary{plotmarks}
\makeatletter
\pgfkeys{/pgf/.cd,
dbox strib width/.initial=5mm,
dbox strib color/.initial=red!50,
dbox inner color/.initial=blue!20
}
\pgfdeclareshape{dbox}{%
\savedmacro\getdboxparameters{%
\pgfmathsetlength\pgf@xa{\wd\pgfnodeparttextbox}%
\pgfmathsetlength\pgf@ya{\ht\pgfnodeparttextbox+\dp\pgfnodeparttextbox}%
\pgfextract@process\centerpoint{%
\pgfqpoint{.5\pgf@xa}{.5\pgf@ya}%
}%
\addtosavedmacro\centerpoint%
%
\pgfmathsetlengthmacro\dstrib{\pgfkeysvalueof{/pgf/dbox strib width}}%
\pgfmathsetlengthmacro\innerxsep{\pgfkeysvalueof{/pgf/inner xsep}}%
\pgfmathsetlengthmacro\innerysep{\pgfkeysvalueof{/pgf/inner ysep}}%
\pgfmathsetlengthmacro\outerxsep{\pgfkeysvalueof{/pgf/outer xsep}}%
\pgfmathsetlengthmacro\outerysep{\pgfkeysvalueof{/pgf/outer ysep}}%
\pgfmathsetlengthmacro\minimumwidth{\pgfkeysvalueof{/pgf/minimum width}}%
\pgfmathsetlengthmacro\minimumheight{\pgfkeysvalueof{/pgf/minimum height}}%
%
\pgfmathsetlengthmacro\halfwidth{max(\minimumwidth,%
\pgf@xa+2*(\innerxsep+\dstrib))/2}%
\pgfmathsetlengthmacro\halfheight{max(\minimumheight,%
\pgf@ya+2*(\innerysep))/2}%
\pgfextract@process\southwest{%
\pgfpointadd{\centerpoint}{%
\pgfpointadd{\pgfqpoint{-\halfwidth}{-\halfheight}}%
{\pgfqpoint{-\outerxsep}{-\outerysep}}}%
}%
\pgfextract@process\northeast{%
\pgfpointadd{\centerpoint}{%
\pgfpointadd{\pgfqpoint{\halfwidth}{\halfheight}}%
{\pgfqpoint{\outerxsep}{\outerysep}}}%
}%
\edef\linewidth{\the\pgflinewidth}%
\addtosavedmacro{\linewidth}%
\addtosavedmacro\dstrib%
\addtosavedmacro\outerxsep%
\addtosavedmacro\outerysep%
\addtosavedmacro\southwest%
\addtosavedmacro\northeast%
\addtosavedmacro\halfwidth%
}
\backgroundpath{%
\getdboxparameters%
\pgfpathrectanglecorners%
{\pgfpointadd{\southwest}{\pgfqpoint{\outerxsep}{\outerysep}}}%
{\pgfpointadd{\northeast}{\pgfqpoint{-\outerxsep}{-\outerysep}}}%
}
\behindbackgroundpath{%
\getdboxparameters%
\pgfpathrectanglecorners%
{\pgfpointadd%
{\southwest\pgf@xa=\pgf@x\northeast\pgf@x=\pgf@xa%
\advance\pgf@x by\dstrib}%
{\pgfqpoint{\outerxsep}{-\outerysep}}}%
{\pgfpointadd%
{\northeast\pgf@xa=\pgf@x\southwest\pgf@x=\pgf@xa%
\advance\pgf@x by-\dstrib}%
{\pgfqpoint{-\outerxsep}{\outerysep}}}%
\pgfsetfillcolor{\pgfkeysvalueof{/pgf/dbox inner color}}%
\pgfusepath{fill}%
\pgfpathrectanglecorners%
{\pgfpointadd{\southwest}{\pgfqpoint{\outerxsep}{\outerysep}}}%
{\pgfpointadd%
{\southwest\pgf@xa=\pgf@x\northeast\pgf@x=\pgf@xa%
\advance\pgf@x by\dstrib}%
{\pgfqpoint{\outerxsep}{-\outerysep}}%
}%
\pgfpathrectanglecorners%
{\pgfpointadd{\northeast}{\pgfqpoint{-\outerxsep}{-\outerysep}}}%
{\pgfpointadd%
{\northeast\pgf@xa=\pgf@x\southwest\pgf@x=\pgf@xa%
\advance\pgf@x by-\dstrib}%
{\pgfqpoint{-\outerxsep}{\outerysep}}%
}%
\pgfsetfillcolor{\pgfkeysvalueof{/pgf/dbox strib color}}%
\pgfusepath{fill}%
}
\anchorborder{%
\getdboxparameters%
\pgf@xb=\pgf@x%
\pgf@yb=\pgf@y%
\southwest%
\pgf@xa=\pgf@x% xa/ya is se
\pgf@ya=\pgf@y%
\northeast%
\advance\pgf@x by-\pgf@xa%
\advance\pgf@y by-\pgf@ya%
\pgf@xc=.5\pgf@x% x/y is half width/height
\pgf@yc=.5\pgf@y%
\advance\pgf@xa by\pgf@xc% xa/ya becomes center
\advance\pgf@ya by\pgf@yc%
\edef\pgf@marshal{%
\noexpand\pgfpointborderrectangle
{\noexpand\pgfqpoint{\the\pgf@xb}{\the\pgf@yb}}
{\noexpand\pgfqpoint{\the\pgf@xc}{\the\pgf@yc}}%
}%
\pgf@process{\pgf@marshal}%
\advance\pgf@x by\pgf@xa%
\advance\pgf@y by\pgf@ya%
}
\anchor{center}{\getdboxparameters\centerpoint}
\anchor{north}{\getdboxparameters\centerpoint%
\pgf@xa=\pgf@x\northeast\pgf@x=\pgf@xa}
\anchor{south}{\getdboxparameters\centerpoint%
\pgf@xa=\pgf@x\southwest\pgf@x=\pgf@xa}
\anchor{east}{\getdboxparameters\centerpoint%
\pgf@ya=\pgf@y\northeast\pgf@y=\pgf@ya}
\anchor{west}{\getdboxparameters\centerpoint%
\pgf@ya=\pgf@y\southwest\pgf@y=\pgf@ya}
\anchor{north west}{\getdboxparameters\southwest%
\pgf@xa=\pgf@x\northeast\pgf@x=\pgf@xa}
\anchor{south east}{\getdboxparameters\northeast%
\pgf@xa=\pgf@x\southwest\pgf@x=\pgf@xa}
\anchor{north east}{\getdboxparameters\northeast}
\anchor{south west}{\getdboxparameters\southwest}
\anchor{base}{\getdboxparameters\centerpoint\pgf@y=0pt\relax}
\anchor{base west}{\getdboxparameters\southwest\pgf@y=0pt\relax}
\anchor{base east}{\getdboxparameters\northeast\pgf@y=0pt\relax}
\anchor{mid}{\getdboxparameters\centerpoint%
\pgfmathsetlength\pgf@y{0.5ex}}
\anchor{mid west}{\getdboxparameters\southwest%
\pgfmathsetlength\pgf@y{0.5ex}}
\anchor{mid east}{\getdboxparameters\northeast%
\pgfmathsetlength\pgf@y{0.5ex}}
\anchor{center left}{\getdboxparameters%
\pgfpointadd{\southwest\pgf@xa=\pgf@x\centerpoint\pgf@x=\pgf@xa}%
{\pgfpoint{\dstrib+\outerxsep}{+0pt}}}
\anchor{center left above}{\getdboxparameters%
\pgfpointadd{\southwest\pgf@xa=\pgf@x\northeast\pgf@x=\pgf@xa}%
{\pgfpoint{\dstrib+\outerxsep}{+0pt}}}
\anchor{center left below}{\getdboxparameters%
\pgfpointadd{\southwest}%
{\pgfpoint{\dstrib+\outerxsep}{+0pt}}}
\anchor{center right}{\getdboxparameters%
\pgfpointadd{\northeast\pgf@xa=\pgf@x\centerpoint\pgf@x=\pgf@xa}%
{\pgfpoint{-\dstrib-\outerxsep}{+0pt}}}
\anchor{center right above}{\getdboxparameters%
\pgfpointadd{\northeast}%
{\pgfpoint{-\dstrib-\outerxsep}{+0pt}}}
\anchor{center right below}{\getdboxparameters%
\pgfpointadd{\southwest\pgf@ya=\pgf@y\northeast\pgf@y=\pgf@ya}%
{\pgfpoint{-\dstrib-\outerxsep}{+0pt}}}
}
\begin{document}
\begin{tikzpicture}
\fill [red] circle [radius=.1pt];
\node [draw=gray!50, line width=0.125in, dbox, dbox strib width=0.5in,
inner xsep=0.75in, inner ysep=0.5in] (s) {};
\foreach \anchor/\placement in
{north west/left, north/below, north east/right,
west/left, center/above, east/right,
south west/left, south/above, south east/right,
10/right, 190/below,
center left/above, center left above/above, center left below/below,
center right/above, center right above/above, center right below/below}
\draw[shift=(s.\anchor)] plot[mark=x] coordinates{(0,0)}
node[\placement] {\scriptsize\texttt{(s.\anchor)}};
\end{tikzpicture}
\end{document}
• I guess center left should lie on the right edge of the left red block. (Although it is called center.) And your center left above and center right below are strange. – Symbol 1 Sep 15 '15 at 16:24
• @Symbol1 some good point about where anchors logically should lie and the inconsistent anchors. I've updated the answer. I'm still using the OP's anchor names though. – Mark Wibrow Sep 15 '15 at 16:44
• Nice one, I'll learn a great deal by studying this. As for the center left above/below, I just added them for fun. In my application I'll only need center left/right for the rest of the drawing. – daleif Sep 15 '15 at 16:58
• What is the use of those 0.5ex? – daleif Sep 15 '15 at 17:01
• @daleif the mid anchors were conventionally defined as being 0.5ex above the baseline of the text in the node. A bit irrelevant if the node has no text, but I included it for reference. – Mark Wibrow Sep 15 '15 at 17:40
This is not a direct answer to your question, but just an example to say that you can use pic to draw your shape with anchors, as a method of not copying a lot of code to draw a few of these boxes.
\documentclass[tikz,border=5,convert={density=2100}]{standalone}
\tikzset{
dbox width/.store in=\dboxwidth,dbox width=10mm,
dbox height/.store in=\dboxheight,dbox height=5mm,
dbox color/.store in=\dboxcolor,dbox color=blue!50,
strip width/.store in=\stripwidth,strip width=2mm,
strip color/.store in=\stripcolor,strip color=red!50,
set box size/.style = {inner sep=0,minimum width=#1,minimum height=\dboxheight},
dbox/.pic = {
\node[pic actions,fill=\dboxcolor,set box size=\dboxwidth] (-main) at (0,0){};
\node[pic actions,fill=\stripcolor,below right,set box size=\stripwidth] (-left) at (-main.north west){};
\node[pic actions,fill=\stripcolor,below left,set box size=\stripwidth] (-right) at (-main.north east){};
}
}
\begin{document}
\begin{tikzpicture}
\pic[dbox color=green!70] (A) at (0,1) {dbox};
\pic[strip color=yellow!70, strip width=1mm] (B) at (0,0) {dbox};
\foreach \a in {center,north,south}
\fill[green] (A-left.\a) circle(.4pt);
\draw[-latex] (A-left.center) -- (B-right.center);
\end{tikzpicture}
\end{document}
• Interesting, more to the arsenal – daleif Sep 15 '15 at 19:56
https://everything.explained.today/Distribution_(mathematics)/
Distribution (mathematics) explained
Distributions, also known as Schwartz distributions or generalized functions, are objects that generalize the classical notion of functions in mathematical analysis. Distributions make it possible to differentiate functions whose derivatives do not exist in the classical sense. In particular, any locally integrable function has a distributional derivative.
Distributions are widely used in the theory of partial differential equations, where it may be easier to establish the existence of distributional solutions than classical solutions, or where appropriate classical solutions may not exist. Distributions are also important in physics and engineering where many problems naturally lead to differential equations whose solutions or initial conditions are singular, such as the Dirac delta function.
A function $f$ is normally thought of as acting on the points in its domain by "sending" a point $x$ to the point $f(x)$. Instead of acting on points, distribution theory reinterprets functions such as $f$ as acting on test functions in a certain way. In applications to physics and engineering, test functions are usually infinitely differentiable complex-valued (or real-valued) functions with compact support that are defined on some given non-empty open subset $U \subseteq \R^n$. (Bump functions are examples of test functions.) The set of all such test functions forms a vector space that is denoted by $C_c^\infty(U)$ or $\mathcal{D}(U)$.

Most commonly encountered functions, including all continuous maps $f : \R \to \R$ if using $U := \R$, can be canonically reinterpreted as acting via "integration against a test function." Explicitly, this means that $f$ "acts on" a test function $\psi \in \mathcal{D}(\R)$ by "sending" it to the number $\int_\R f \, \psi \, dx$, which is often denoted by $D_f(\psi)$. This new action $\psi \mapsto D_f(\psi)$ of $f$ is a scalar-valued map, denoted by $D_f$, whose domain is the space of test functions $\mathcal{D}(\R)$. This functional $D_f$ turns out to have the two defining properties of what is known as a distribution: it is linear, and it is also continuous when $\mathcal{D}(\R)$ is given a certain topology called the canonical LF topology. The action of this distribution on a test function can be interpreted as a weighted average of the distribution on the support of the test function, even if the values of the distribution at a single point are not well-defined. Distributions like $D_f$ that arise from functions in this way are prototypical examples of distributions, but there are many distributions that cannot be defined by integration against any function. Examples of the latter include the Dirac delta function and distributions defined to act by integration of test functions against certain measures. It is nonetheless still possible to reduce any arbitrary distribution down to a simpler family of related distributions that do arise via such actions of integration.
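For instance (a small worked example), if $f = \mathbf{1}_{[0,1]}$ is the indicator function of the unit interval, then
$$D_f(\psi) = \int_\R \mathbf{1}_{[0,1]}(x)\, \psi(x)\, dx = \int_0^1 \psi(x)\, dx,$$
which is clearly linear in $\psi$ and satisfies $|D_f(\psi)| \leq \sup_x |\psi(x)|$, the kind of estimate that yields continuity.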
More generally, a distribution on $U$ is by definition a linear functional on $C_c^\infty(U)$ that is continuous when $C_c^\infty(U)$ is given a topology called the canonical LF topology. This leads to the space of (all) distributions on $U$, usually denoted by $\mathcal{D}'(U)$ (note the prime), which by definition is the continuous dual space of $C_c^\infty(U)$; it is these distributions that are the main focus of this article.
Definitions of the appropriate topologies on spaces of test functions and distributions are given in the article on spaces of test functions and distributions. This article is primarily concerned with the definition of distributions, together with their properties and some important examples.
History
The practical use of distributions can be traced back to the use of Green's functions in the 1830s to solve ordinary differential equations, but distributions were not formalized until much later. Generalized functions originated in the work of Sergei Sobolev in 1936 on second-order hyperbolic partial differential equations, and the ideas were developed in somewhat extended form by Laurent Schwartz in the late 1940s. According to his autobiography, Schwartz introduced the term "distribution" by analogy with a distribution of electrical charge, possibly including not only point charges but also dipoles and so on. Although the ideas in Schwartz's transformative book were not entirely new, it was Schwartz's broad attack and conviction that distributions would be useful almost everywhere in analysis that made the difference.
Notation
Throughout, $n$ is a fixed positive integer and $U$ is a fixed non-empty open subset of Euclidean space $\R^n$. $\N = \{0, 1, 2, \ldots\}$ denotes the natural numbers, and $k$ will denote a non-negative integer or $\infty$.
• If $f$ is a function then $\operatorname{Dom}(f)$ will denote its domain, and the support of $f$, denoted by $\operatorname{supp}(f)$, is defined to be the closure of the set $\{x \in \operatorname{Dom}(f) : f(x) \neq 0\}$ in $\operatorname{Dom}(f)$.
• For two functions $f, g : U \to \Complex$, the following notation defines a canonical pairing: $\langle f, g\rangle := \int_U f(x) g(x) \,dx.$
• A multi-index of size $n$ is an element of $\N^n$ (given that $n$ is fixed, if the size of multi-indices is omitted then the size should be assumed to be $n$). The length of a multi-index $\alpha = (\alpha_1, \ldots, \alpha_n) \in \N^n$ is defined as $\alpha_1 + \cdots + \alpha_n$ and denoted by $|\alpha|$. Multi-indices are particularly useful when dealing with functions of several variables; in particular, for a given multi-index $\alpha = (\alpha_1, \ldots, \alpha_n) \in \N^n$ we introduce the notation
$$x^\alpha = x_1^{\alpha_1} \cdots x_n^{\alpha_n}, \qquad \partial^\alpha = \frac{\partial^{|\alpha|}}{\partial x_1^{\alpha_1} \cdots \partial x_n^{\alpha_n}}.$$
We also introduce a partial order on multi-indices by declaring $\beta \geq \alpha$ if and only if $\beta_i \geq \alpha_i$ for all $1 \leq i \leq n$. When $\beta \geq \alpha$ we define their multi-index binomial coefficient as
$$\binom{\beta}{\alpha} := \binom{\beta_1}{\alpha_1} \cdots \binom{\beta_n}{\alpha_n}.$$
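As a quick illustration of the notation (a worked example), take $n = 2$ and $\alpha = (1, 2)$: then
$$|\alpha| = 3, \qquad x^\alpha = x_1 x_2^2, \qquad \partial^\alpha = \frac{\partial^3}{\partial x_1 \, \partial x_2^2}, \qquad \binom{(2,3)}{(1,2)} = \binom{2}{1}\binom{3}{2} = 6.$$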
Definitions of test functions and distributions
In this section, some basic notions and definitions needed to define real-valued distributions on $U$ are introduced. Further discussion of the topologies on the spaces of test functions and distributions is given in the article on spaces of test functions and distributions.
For all $j, k \in \{0, 1, 2, \ldots, \infty\}$ and any compact subsets $K$ and $L$ of $U$, we have:
$$\begin{aligned}
C^k(K) &\subseteq C^k_c(U) \subseteq C^k(U), \\
C^k(K) &\subseteq C^k(L) && \text{if } K \subseteq L, \\
C^k(K) &\subseteq C^j(K) && \text{if } j \le k, \\
C_c^k(U) &\subseteq C^j_c(U) && \text{if } j \le k, \\
C^k(U) &\subseteq C^j(U) && \text{if } j \le k.
\end{aligned}$$
Distributions on $U$ are continuous linear functionals on $C_c^\infty(U)$ when this vector space is endowed with a particular topology called the canonical LF topology. The following proposition states two necessary and sufficient conditions for the continuity of a linear functional on $C_c^\infty(U)$ that are often straightforward to verify.

Proposition: A linear functional $T$ on $C_c^\infty(U)$ is continuous, and therefore a distribution, if and only if either of the following equivalent conditions is satisfied:
1. For every compact subset $K \subseteq U$ there exist constants $C > 0$ and $N \in \N$ (dependent on $K$) such that for all $f \in C_c^\infty(U)$ with support contained in $K$,
$$|T(f)| \leq C \sup \{ |\partial^\alpha f(x)| : x \in U,\ |\alpha| \leq N \}.$$
2. For every compact subset $K \subseteq U$ and every sequence $\{f_i\}_{i=1}^\infty$ in $C_c^\infty(U)$ whose supports are contained in $K$, if $\{\partial^\alpha f_i\}_{i=1}^\infty$ converges uniformly to zero on $U$ for every multi-index $\alpha$, then $T(f_i) \to 0$.
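To see the first condition in action (a standard example), the Dirac delta at a point $x_0 \in U$, defined by $\delta_{x_0}(f) := f(x_0)$, satisfies it with $N = 0$ and $C = 1$ for every compact $K$:
$$|\delta_{x_0}(f)| = |f(x_0)| \leq \sup \{ |f(x)| : x \in U \},$$
so $\delta_{x_0}$ is a distribution, and one of order $0$.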
Topology on $C^k(U)$
We now introduce the seminorms that will define the topology on $C^k(U)$. Different authors sometimes use different families of seminorms, so we list the most common choices; however, the resulting topology is the same no matter which family is used. Each of these families consists of non-negative $\R$-valued seminorms on $C^k(U)$, and every set of seminorms on a vector space induces a locally convex vector topology. Each of the standard families of such seminorms (call them $A$, $B$, $C$, and $D$) generates the same locally convex vector topology on $C^k(U)$ (so, for example, the topology generated by the seminorms in $A$ is equal to the topology generated by those in $C$).

With this topology, $C^k(U)$ becomes a locally convex Fréchet space that is not normable. Every element of $A \cup B \cup C \cup D$ is a continuous seminorm on $C^k(U)$. Under this topology, a net $(f_i)_{i \in I}$ in $C^k(U)$ converges to $f \in C^k(U)$ if and only if for every multi-index $p$ with $|p| < k + 1$ and every compact $K$, the net of partial derivatives $\left(\partial^p f_i\right)_{i \in I}$ converges uniformly to $\partial^p f$ on $K$. For any $k \in \{0, 1, 2, \ldots, \infty\}$, any (von Neumann) bounded subset of $C^{k+1}(U)$ is a relatively compact subset of $C^k(U)$. In particular, a subset of $C^\infty(U)$ is bounded if and only if it is bounded in $C^i(U)$ for all $i \in \N$. The space $C^k(U)$ is a Montel space if and only if $k = \infty$. A subset $W$ of $C^\infty(U)$ is open in this topology if and only if there exists $i \in \N$ such that $W$ is open when $C^\infty(U)$ is endowed with the subspace topology induced on it by $C^i(U)$.
Topology on $C^k(K)$
As before, fix $k \in \{0, 1, 2, \ldots, \infty\}$. Recall that if $K$ is any compact subset of $U$ then $C^k(K) \subseteq C^k(U)$. If $k$ is finite then $C^k(K)$ is a Banach space with a topology that can be defined by the norm
$$r_K(f) := \sup_{|p| < k+1} \left( \sup_{x_0 \in K} \left| \partial^p f(x_0) \right| \right).$$
And when $k = 2$, then $C^k(K)$ is even a Hilbert space.
Trivial extensions and independence of $C^k(K)$'s topology from $U$
Suppose $U$ is an open subset of $\R^n$ and $K \subseteq U$ is a compact subset. By definition, elements of $C^k(K)$ are functions with domain $U$ (in symbols, $C^k(K) \subseteq C^k(U)$), so the space $C^k(K)$ and its topology depend on $U$; to make this dependence on the open set $U$ clear, temporarily denote $C^k(K)$ by $C^k(K; U)$. Importantly, changing the set $U$ to a different open subset $U'$ (with $K \subseteq U'$) will change the set $C^k(K)$ from $C^k(K; U)$ to $C^k(K; U')$, so that elements of $C^k(K)$ will be functions with domain $U'$ instead of $U$. Despite $C^k(K)$ depending on the open set ($U$ or $U'$), the standard notation for $C^k(K)$ makes no mention of it. This is justified because, as this subsection will now explain, the space $C^k(K; U)$ is canonically identified as a subspace of $C^k(K; U')$ (both algebraically and topologically).

It is enough to explain how to canonically identify $C^k(K; U)$ with $C^k(K; U')$ when one of $U$ and $U'$ is a subset of the other. The reason is that if $V$ and $W$ are arbitrary open subsets of $\R^n$ containing $K$, then the open set $U := V \cap W$ also contains $K$, so that each of $C^k(K; V)$ and $C^k(K; W)$ is canonically identified with $C^k(K; V \cap W)$, and now by transitivity $C^k(K; V)$ is identified with $C^k(K; W)$. So assume $U \subseteq V$ are open subsets of $\R^n$ containing $K$.

Given $f \in C_c^k(U)$, its trivial extension to $V$ is the function $F : V \to \Complex$ defined by
$$F(x) = \begin{cases} f(x) & x \in U, \\ 0 & \text{otherwise.} \end{cases}$$
This trivial extension belongs to $C^k(V)$ (because $f \in C_c^k(U)$ has compact support) and it will be denoted by $I(f)$ (that is, $I(f) := F$). The assignment $f \mapsto I(f)$ thus induces a map $I : C_c^k(U) \to C^k(V)$ that sends a function in $C_c^k(U)$ to its trivial extension on $V$. This map is a linear injection, and for every compact subset $K \subseteq U$ (where $K$ is also a compact subset of $V$ since $K \subseteq U \subseteq V$),
$$I\left(C^k(K; U)\right) = C^k(K; V) \qquad \text{and} \qquad I\left(C_c^k(U)\right) \subseteq C_c^k(V).$$
If $I$ is restricted to $C^k(K; U)$ then the following induced linear map is a homeomorphism (linear homeomorphisms are called TVS-isomorphisms):
$$C^k(K; U) \to C^k(K; V), \qquad f \mapsto I(f),$$
and thus the next map is a topological embedding:
$$C^k(K; U) \to C^k(V), \qquad f \mapsto I(f).$$
Using the injection $I : C_c^k(U) \to C^k(V)$, the vector space $C_c^k(U)$ is canonically identified with its image in $C_c^k(V) \subseteq C^k(V)$. Because $C^k(K; U) \subseteq C_c^k(U)$, through this identification $C^k(K; U)$ can also be considered as a subset of $C^k(V)$. Thus the topology on $C^k(K; U)$ is independent of the open subset $U$ of $\R^n$ that contains $K$, which justifies the practice of writing $C^k(K)$ instead of $C^k(K; U)$.
Canonical LF topology
See main article: Spaces of test functions and distributions.
Recall that $C_c^k(U)$ denotes the set of all those functions in $C^k(U)$ that have compact support in $U$, and note that $C_c^k(U)$ is the union of all $C^k(K)$ as $K$ ranges over the compact subsets of $U$. Moreover, for every $k$, $C_c^k(U)$ is a dense subset of $C^k(U)$. The special case $k = \infty$ gives us the space of test functions.

The canonical LF-topology is not metrizable and, importantly, it is strictly finer than the subspace topology that $C^\infty(U)$ induces on $C_c^\infty(U)$. However, the canonical LF-topology does make $C_c^\infty(U)$ into a complete reflexive nuclear Montel bornological barrelled Mackey space; the same is true of its strong dual space (that is, the space of all distributions with its usual topology). The canonical LF-topology can be defined in various ways.
Distributions
As discussed earlier, continuous linear functionals on $C_c^\infty(U)$ are known as distributions on $U$. Other equivalent definitions are described below.

There is a canonical duality pairing between a distribution $T$ on $U$ and a test function $f \in C_c^\infty(U)$, which is denoted using angle brackets by
$$\mathcal{D}'(U) \times C_c^\infty(U) \to \R, \qquad (T, f) \mapsto \langle T, f \rangle := T(f).$$
One interprets this notation as the distribution $T$ acting on the test function $f$ to give a scalar, or symmetrically as the test function $f$ acting on the distribution $T$.
Characterizations of distributions
Proposition. If $T$ is a linear functional on $C_c^\infty(U)$ then the following are equivalent:
1. $T$ is a distribution;
2. $T$ is continuous;
3. $T$ is continuous at the origin;
4. $T$ is uniformly continuous;
5. $T$ is a bounded operator;
6. $T$ is sequentially continuous; explicitly, for every sequence $(f_i)_{i=1}^\infty$ in $C_c^\infty(U)$ that converges in $C_c^\infty(U)$ to some $f \in C_c^\infty(U)$, $\lim_{i \to \infty} T(f_i) = T(f)$;
7. $T$ is sequentially continuous at the origin; in other words, $T$ maps null sequences to null sequences; explicitly, for every sequence $(f_i)_{i=1}^\infty$ in $C_c^\infty(U)$ that converges in $C_c^\infty(U)$ to the origin (such a sequence is called a null sequence), $\lim_{i \to \infty} T(f_i) = 0$;
8. $T$ maps null sequences to bounded subsets; explicitly, for every sequence $(f_i)_{i=1}^\infty$ in $C_c^\infty(U)$ that converges in $C_c^\infty(U)$ to the origin, the sequence $(T(f_i))_{i=1}^\infty$ is bounded;
9. $T$ maps Mackey convergent null sequences to bounded subsets; explicitly, for every Mackey convergent null sequence $(f_i)_{i=1}^\infty$ in $C_c^\infty(U)$, the sequence $(T(f_i))_{i=1}^\infty$ is bounded; here a sequence $f_\bullet = (f_i)_{i=1}^\infty$ is said to be Mackey convergent to the origin if there exists a divergent sequence $r_\bullet = (r_i)_{i=1}^\infty \to \infty$ of positive real numbers such that the sequence $(r_i f_i)_{i=1}^\infty$ is bounded; every sequence that is Mackey convergent to the origin necessarily converges to the origin in the usual sense;
10. the kernel of $T$ is a closed subspace of $C_c^\infty(U)$;
11. the graph of $T$ is closed;
12. there exists a continuous seminorm $g$ on $C_c^\infty(U)$ such that $|T| \leq g$;
13. there exist a constant $C > 0$ and a finite subset $\{g_1, \ldots, g_m\} \subseteq \mathcal{P}$ (where $\mathcal{P}$ is any collection of continuous seminorms that defines the canonical LF topology on $C_c^\infty(U)$) such that $|T| \leq C (g_1 + \cdots + g_m)$;
14. for every compact subset $K \subseteq U$ there exist constants $C > 0$ and $N \in \N$ such that for all $f \in C^\infty(K)$,
$$|T(f)| \leq C \sup \{ |\partial^\alpha f(x)| : x \in U,\ |\alpha| \leq N \};$$
15. for every compact subset $K \subseteq U$ there exist constants $C_K > 0$ and $N_K \in \N$ such that for all $f \in C_c^\infty(U)$ with support contained in $K$,
$$|T(f)| \leq C_K \sup \{ |\partial^\alpha f(x)| : x \in K,\ |\alpha| \leq N_K \};$$
16. for any compact subset $K \subseteq U$ and any sequence $\{f_i\}_{i=1}^\infty$ in $C^\infty(K)$, if $\{\partial^p f_i\}_{i=1}^\infty$ converges uniformly to zero for all multi-indices $p$, then $T(f_i) \to 0$.
Topology on the space of distributions and its relation to the weak-* topology
The set of all distributions on $U$ is the continuous dual space of $C_c^\infty(U)$, which, when endowed with the strong dual topology, is denoted by $\mathcal{D}'(U)$. Importantly, unless indicated otherwise, the topology on $\mathcal{D}'(U)$ is the strong dual topology; if the topology is instead the weak-* topology then this will be clearly indicated. Neither topology is metrizable, although unlike the weak-* topology, the strong dual topology makes $\mathcal{D}'(U)$ into a complete nuclear space, to name just a few of its desirable properties.

Neither $C_c^\infty(U)$ nor its strong dual $\mathcal{D}'(U)$ is a sequential space, and so neither of their topologies can be fully described by sequences (in other words, defining only which sequences converge in these spaces is not enough to fully and correctly define their topologies). However, a sequence in $\mathcal{D}'(U)$ converges in the strong dual topology if and only if it converges in the weak-* topology (this leads many authors to use pointwise convergence to define the convergence of a sequence of distributions; this is fine for sequences, but it is not guaranteed to extend to the convergence of nets of distributions, because a net may converge pointwise yet fail to converge in the strong dual topology). More information about the topology that $\mathcal{D}'(U)$ is endowed with can be found in the article on spaces of test functions and distributions and in the articles on polar topologies and dual systems.

A linear map from $\mathcal{D}'(U)$ into another locally convex topological vector space (such as any normed space) is continuous if and only if it is sequentially continuous at the origin. However, this is no longer guaranteed if the map is not linear or for maps valued in more general topological spaces (for example, those that are not also locally convex topological vector spaces). The same is true of linear maps from $C_c^\infty(U)$ (more generally, this is true of maps from any locally convex bornological space).
Localization of distributions
There is no way to define the value of a distribution in $\mathcal{D}'(U)$ at a particular point of $U$. However, as is the case with functions, distributions on $U$ restrict to give distributions on open subsets of $U$. Furthermore, distributions can be glued together, in the sense that a distribution on all of $U$ can be assembled from distributions on the sets of an open cover of $U$ satisfying some compatibility conditions on the overlaps. Such a structure is known as a sheaf.
Extensions and restrictions to an open subset
Let $V \subseteq U$ be open subsets of $\R^n$. Every function $f \in \mathcal{D}(V)$ can be extended from its domain $V$ to a function on $U$ by setting it equal to $0$ on the complement $U \setminus V$. This extension is a smooth compactly supported function called the trivial extension of $f$ to $U$, and it will be denoted by $E_{VU}(f)$. This assignment $f \mapsto E_{VU}(f)$ defines the operator $E_{VU} : \mathcal{D}(V) \to \mathcal{D}(U)$, which is a continuous injective linear map. It is used to canonically identify $\mathcal{D}(V)$ as a vector subspace of $\mathcal{D}(U)$ (although not as a topological subspace). Its transpose (explained below)
$$\rho_{VU} := {}^t E_{VU} : \mathcal{D}'(U) \to \mathcal{D}'(V)$$
is called the restriction mapping, and, as the name suggests, the image $\rho_{VU}(T)$ of a distribution $T \in \mathcal{D}'(U)$ under this map is a distribution on $V$ called the restriction of $T$ to $V$. The defining condition of the restriction $\rho_{VU}(T)$ is
$$\langle \rho_{VU} T, \phi \rangle = \langle T, E_{VU} \phi \rangle \quad \text{for all } \phi \in \mathcal{D}(V).$$
If $V \subsetneq U$ then the (continuous injective linear) trivial extension map $E_{VU} : \mathcal{D}(V) \to \mathcal{D}(U)$ is not a topological embedding (in other words, if this linear injection were used to identify $\mathcal{D}(V)$ as a subset of $\mathcal{D}(U)$ then $\mathcal{D}(V)$'s topology would be strictly finer than the subspace topology that $\mathcal{D}(U)$ induces on it; importantly, it would not be a topological subspace, since that requires equality of topologies) and its range is also not dense in its codomain $\mathcal{D}(U)$. Consequently, if $V \subsetneq U$ then the restriction mapping is neither injective nor surjective. A distribution $S \in \mathcal{D}'(V)$ is said to be extendable to $U$ if it belongs to the range of the transpose of $E_{VU}$, and it is called extendable if it is extendable to $\R^n$.

Unless $U = V$, the restriction to $V$ is neither injective nor surjective. Lack of surjectivity follows since distributions can blow up towards the boundary of $V$. For instance, if $U = \R$ and $V = (0, 2)$, then the distribution
$$T(x) = \sum_{n=1}^\infty n \, \delta\left(x - \frac{1}{n}\right)$$
is in $\mathcal{D}'(V)$ but admits no extension to $\mathcal{D}'(U)$.
Gluing and distributions that vanish in a set
Let $V$ be an open subset of $U$. A distribution $T \in \mathcal{D}'(U)$ is said to vanish in $V$ if for all $f \in \mathcal{D}(U)$ such that $\operatorname{supp}(f) \subseteq V$ we have $Tf = 0$. $T$ vanishes in $V$ if and only if the restriction of $T$ to $V$ is equal to $0$, or equivalently, if and only if $T$ lies in the kernel of the restriction map $\rho_{VU}$.
Support of a distribution
This last corollary implies that for every distribution $T$ on $U$, there exists a unique largest open subset $V$ of $U$ such that $T$ vanishes in $V$ (and $T$ does not vanish in any open subset of $U$ that is not contained in $V$); the complement in $U$ of this unique largest open subset is called the support of $T$. Thus
$$\operatorname{supp}(T) = U \setminus \bigcup \{V : T \text{ vanishes in } V\}.$$
If $f$ is a locally integrable function on $U$ and if $D_f$ is its associated distribution, then the support of $D_f$ is the smallest closed subset of $U$ in the complement of which $f$ is almost everywhere equal to $0$. If $f$ is continuous, then the support of $D_f$ is equal to the closure of the set of points in $U$ at which $f$ does not vanish. The support of the distribution associated with the Dirac measure at a point $x_0$ is the set $\{x_0\}$. If the support of a test function $f$ does not intersect the support of a distribution $T$ then $Tf = 0$. A distribution $T$ is $0$ if and only if its support is empty. If $f \in C^\infty(U)$ is identically $1$ on some open set containing the support of a distribution $T$ then $fT = T$. If the support of a distribution $T$ is compact then it has finite order and, furthermore, there are a constant $C$ and a non-negative integer $N$ such that
$$|T\phi| \leq C \|\phi\|_N := C \sup \left\{ |\partial^\alpha \phi(x)| : x \in U,\ |\alpha| \leq N \right\} \quad \text{for all } \phi \in \mathcal{D}(U).$$
If $T$ has compact support then it has a unique extension to a continuous linear functional $\widehat{T}$ on $C^\infty(U)$; this functional can be defined by $\widehat{T}(f) := T(\psi f)$, where $\psi \in \mathcal{D}(U)$ is any function that is identically $1$ on an open set containing the support of $T$.

If $S, T \in \mathcal{D}'(U)$ and $\lambda \neq 0$ then $\operatorname{supp}(S + T) \subseteq \operatorname{supp}(S) \cup \operatorname{supp}(T)$ and $\operatorname{supp}(\lambda T) = \operatorname{supp}(T)$. Thus, distributions with support in a given subset $A \subseteq U$ form a vector subspace of $\mathcal{D}'(U)$. Furthermore, if $P$ is a differential operator in $U$, then for all distributions $T$ on $U$ and all $f \in C^\infty(U)$ we have $\operatorname{supp}(P(x, \partial)T) \subseteq \operatorname{supp}(T)$ and $\operatorname{supp}(fT) \subseteq \operatorname{supp}(f) \cap \operatorname{supp}(T)$.
Distributions with compact support
Support in a point set and Dirac measures
For any $x \in U$, let $\delta_x \in \mathcal{D}'(U)$ denote the distribution induced by the Dirac measure at $x$. For any $x_0 \in U$ and distribution $T \in \mathcal{D}'(U)$, the support of $T$ is contained in $\{x_0\}$ if and only if $T$ is a finite linear combination of derivatives of the Dirac measure at $x_0$. If in addition the order of $T$ is $\leq k$ then there exist constants $\alpha_p$ such that
$$T = \sum_{|p| \leq k} \alpha_p \, \partial^p \delta_{x_0}.$$
Said differently, if $T$ has support at a single point $\{P\}$, then $T$ is in fact a finite linear combination of distributional derivatives of the $\delta$ function at $P$. That is, there exist an integer $m$ and complex constants $a_\alpha$ such that
$$T = \sum_{|\alpha| \leq m} a_\alpha \, \partial^\alpha (\tau_P \delta),$$
where $\tau_P$ is the translation operator.
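Concretely (an illustrative special case), a distribution on $\R$ supported at the origin and of order at most $2$ has the form
$$T = a_0 \delta + a_1 \delta' + a_2 \delta'', \qquad \text{so that} \qquad T(\phi) = a_0 \phi(0) - a_1 \phi'(0) + a_2 \phi''(0)$$
for some constants $a_0, a_1, a_2$.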
Global structure of distributions
The formal definition of distributions exhibits them as a subspace of a very large space, namely the topological dual of $\mathcal{D}(U)$ (or the Schwartz space $\mathcal{S}(\R^n)$ for tempered distributions). It is not immediately clear from the definition how exotic a distribution might be. To answer this question, it is instructive to see distributions built up from a smaller space, namely the space of continuous functions. Roughly, any distribution is locally a (multiple) derivative of a continuous function. A precise version of this result, given below, holds for distributions of compact support, tempered distributions, and general distributions. Generally speaking, no proper subset of the space of distributions contains all continuous functions and is closed under differentiation. This says that distributions are not particularly exotic objects; they are only as complicated as necessary.
Decomposition of distributions as sums of derivatives of continuous functions
By combining the above results, one may express any distribution on $U$ as the sum of a series of distributions with compact support, where each of these distributions can in turn be written as a finite sum of distributional derivatives of continuous functions on $U$. In other words, for arbitrary $T \in \mathcal{D}'(U)$ we can write
$$T = \sum_{i=1}^\infty \sum_{p \in P_i} \partial^p f_{ip},$$
where $P_1, P_2, \ldots$ are finite sets of multi-indices and the functions $f_{ip}$ are continuous.

Note that the infinite sum above is well-defined as a distribution: the value of $T$ at a given $f \in \mathcal{D}(U)$ can be computed using the finitely many summands whose supports intersect the support of $f$.
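A one-dimensional illustration: the Dirac delta on $\R$ is the second distributional derivative of the continuous ramp function $x \mapsto \max(x, 0)$, since for every test function $\phi$,
$$\int_\R \max(x, 0)\, \phi''(x)\, dx = \int_0^\infty x\, \phi''(x)\, dx = -\int_0^\infty \phi'(x)\, dx = \phi(0),$$
so $\delta = \frac{d^2}{dx^2} \max(x, 0)$ in the sense of distributions.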
Operations on distributions
Many operations which are defined on smooth functions with compact support can also be defined for distributions. In general, if $A : \mathcal{D}(U) \to \mathcal{D}(U)$ is a linear map which is continuous with respect to the weak topology, then it is possible to extend $A$ to a map $A : \mathcal{D}'(U) \to \mathcal{D}'(U)$ by passing to the limit.
Preliminaries: Transpose of a linear operator
See main article: Transpose of a linear map.
Operations on distributions and spaces of distributions are often defined by means of the transpose of a linear operator. This is because the transpose allows for a unified presentation of the many definitions in the theory of distributions, and also because its properties are well known in functional analysis. For instance, the well-known Hermitian adjoint of a linear operator between Hilbert spaces is just the operator's transpose (but with the Riesz representation theorem used to identify each Hilbert space with its continuous dual space). In general, the transpose of a continuous linear map $A : X \to Y$ is the linear map
$${}^t A : Y' \to X', \qquad {}^t A(y') := y' \circ A,$$
or equivalently, it is the unique map satisfying $\langle y', A(x) \rangle = \left\langle {}^t A(y'), x \right\rangle$ for all $x \in X$ and all $y' \in Y'$ (the prime symbol in $y'$ does not denote a derivative of any kind; it merely indicates that $y'$ is an element of the continuous dual space $Y'$). Since $A$ is continuous, the transpose ${}^t A : Y' \to X'$ is also continuous when both duals are endowed with their respective strong dual topologies; it is also continuous when both duals are endowed with their respective weak* topologies (see the articles polar topology and dual system for more details).

In the context of distributions, the characterization of the transpose can be refined slightly. Let $A : \mathcal{D}(U) \to \mathcal{D}(U)$ be a continuous linear map. Then by definition, the transpose of $A$ is the unique linear operator ${}^t A : \mathcal{D}'(U) \to \mathcal{D}'(U)$ that satisfies
$$\langle {}^t A(T), \phi \rangle = \langle T, A(\phi) \rangle \quad \text{for all } \phi \in \mathcal{D}(U) \text{ and all } T \in \mathcal{D}'(U).$$
Since $\mathcal{D}(U)$ is dense in $\mathcal{D}'(U)$ (here, $\mathcal{D}(U)$ actually refers to the set of distributions $\left\{ D_\psi : \psi \in \mathcal{D}(U) \right\}$), it is sufficient that the defining equality hold for all distributions of the form $T = D_\psi$ where $\psi \in \mathcal{D}(U)$. Explicitly, this means that a continuous linear map $B : \mathcal{D}'(U) \to \mathcal{D}'(U)$ is equal to ${}^t A$ if and only if the condition below holds:
$$\langle B(D_\psi), \phi \rangle = \langle {}^t A(D_\psi), \phi \rangle \quad \text{for all } \phi, \psi \in \mathcal{D}(U),$$
where the right-hand side equals $\langle {}^t A(D_\psi), \phi \rangle = \langle D_\psi, A(\phi) \rangle = \langle \psi, A(\phi) \rangle = \int_U \psi \cdot A(\phi) \, dx$.
Differential operators
Differentiation of distributions
Let $A : \mathcal{D}(U) \to \mathcal{D}(U)$ be the partial derivative operator $\tfrac{\partial}{\partial x_k}$. In order to extend $A$ we compute its transpose:
$$\begin{aligned}
\langle {}^t A(D_\psi), \phi \rangle &= \int_U \psi \, (A\phi) \,dx && \text{(by definition)} \\
&= \int_U \psi \, \frac{\partial \phi}{\partial x_k} \, dx \\
&= -\int_U \phi \, \frac{\partial \psi}{\partial x_k}\, dx && \text{(integration by parts)} \\
&= -\left\langle \frac{\partial \psi}{\partial x_k}, \phi \right\rangle \\
&= -\langle A \psi, \phi \rangle = \langle - A \psi, \phi \rangle.
\end{aligned}$$
Therefore ${}^t A = -A$, and so the partial derivative of a distribution $T$ with respect to the coordinate $x_k$ is defined by the formula
$$\left\langle \frac{\partial T}{\partial x_k}, \phi \right\rangle = - \left\langle T, \frac{\partial \phi}{\partial x_k} \right\rangle \qquad \text{for all } \phi \in \mathcal{D}(U).$$
With this definition, every distribution is infinitely differentiable, and the derivative in the direction $x_k$ is a linear operator on $\mathcal{D}'(U)$. More generally, if $\alpha$ is an arbitrary multi-index, then the partial derivative $\partial^\alpha T$ of the distribution $T \in \mathcal{D}'(U)$ is defined by
$$\langle \partial^\alpha T, \phi \rangle = (-1)^{|\alpha|} \langle T, \partial^\alpha \phi \rangle \qquad \text{for all } \phi \in \mathcal{D}(U).$$
Differentiation of distributions is a continuous operator on $\mathcal{D}'(U)$; this is an important and desirable property that is not shared by most other notions of differentiation.

If $T$ is a distribution in $\R$ then
$$\lim_{x \to 0} \frac{T - \tau_x T}{x} = T' \in \mathcal{D}'(\R),$$
where $T'$ is the derivative of $T$ and $\tau_x$ is translation by $x$; thus the derivative of $T$ may be viewed as a limit of difference quotients.
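A standard worked example: the distributional derivative of the Heaviside step function $H = \mathbf{1}_{[0, \infty)}$ on $\R$ is the Dirac delta, because for every $\phi \in \mathcal{D}(\R)$,
$$\langle H', \phi \rangle = -\langle H, \phi' \rangle = -\int_0^\infty \phi'(x)\, dx = \phi(0) = \langle \delta, \phi \rangle.$$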
Differential operators acting on smooth functions
A linear differential operator in $U$ with smooth coefficients acts on the space of smooth functions on $U$. Given such an operator
$$P := \sum_\alpha c_\alpha \partial^\alpha,$$
we would like to define a continuous linear map $D_P$ that extends the action of $P$ on $C^\infty(U)$ to distributions on $U$. In other words, we would like to define $D_P$ such that the following diagram commutes:
$$\begin{array}{ccc}
\mathcal{D}'(U) & \xrightarrow{\;D_P\;} & \mathcal{D}'(U) \\
\uparrow & & \uparrow \\
C^\infty(U) & \xrightarrow{\;P\;} & C^\infty(U)
\end{array}$$
where the vertical maps are given by assigning to $f \in C^\infty(U)$ its canonical distribution $D_f \in \mathcal{D}'(U)$, defined by
$$D_f(\phi) = \langle f, \phi \rangle := \int_U f(x) \phi(x) \,dx \quad \text{for all } \phi \in \mathcal{D}(U).$$
With this notation, the diagram commuting is equivalent to
$$D_{Pf} = D_P D_f \qquad \text{for all } f \in C^\infty(U).$$
In order to find $D_P$, the transpose ${}^t P : \mathcal{D}'(U) \to \mathcal{D}'(U)$ of the continuous induced map $P : \mathcal{D}(U) \to \mathcal{D}(U)$ defined by $\phi \mapsto P(\phi)$ is considered in the computation below. This leads to the following differential operator on $U$, called the formal transpose of $P$, which will be denoted by $P_*$ to avoid confusion with the transpose map; it is defined by
$$P_* := \sum_\alpha b_\alpha \partial^\alpha \quad \text{where} \quad b_\alpha := \sum_{\beta \geq \alpha} (-1)^{|\beta|} \binom{\beta}{\alpha} \partial^{\beta - \alpha} c_\beta.$$
As discussed above, for any $\phi \in \mathcal{D}(U)$, the transpose may be calculated by
$$\begin{aligned}
\left\langle {}^t P(D_f), \phi \right\rangle &= \int_U f(x) \, P(\phi)(x) \,dx \\
&= \int_U f(x) \left[\sum\nolimits_\alpha c_\alpha(x) (\partial^\alpha \phi)(x) \right] dx \\
&= \sum\nolimits_\alpha \int_U f(x) \, c_\alpha(x) \, (\partial^\alpha \phi)(x) \,dx \\
&= \sum\nolimits_\alpha (-1)^{|\alpha|} \int_U \phi(x) \, \partial^\alpha(c_\alpha f)(x) \,dx.
\end{aligned}$$
For the last line we used integration by parts combined with the fact that $\phi$, and therefore all the functions $f(x) c_\alpha(x) \partial^\alpha \phi(x)$, have compact support. Continuing the calculation, for all $\phi \in \mathcal{D}(U)$:
$$\begin{aligned}
\left\langle {}^t P(D_f), \phi \right\rangle
&= \int_U \phi(x) \sum\nolimits_\alpha (-1)^{|\alpha|} \, \partial^\alpha(c_\alpha f)(x)\,dx \\
&= \int_U \phi(x) \sum_\alpha \left[\sum_{\gamma \le \alpha} \binom{\alpha}{\gamma} (\partial^{\gamma} c_\alpha)(x) \, (\partial^{\alpha-\gamma} f)(x) \right] dx && \text{(Leibniz rule)}\\
&= \int_U \phi(x) \left[\sum_\alpha \sum_{\gamma \le \alpha} (-1)^{|\alpha|} \binom{\alpha}{\gamma} (\partial^{\gamma} c_\alpha)(x) \, (\partial^{\alpha-\gamma} f)(x)\right] dx \\
&= \int_U \phi(x) \left[\sum_\alpha \left( \sum_{\beta \geq \alpha} (-1)^{|\beta|} \binom{\beta}{\alpha} \left(\partial^{\beta-\alpha} c_{\beta}\right)(x) \right) (\partial^\alpha f)(x)\right] dx && \text{(collecting derivatives of } f)\\
&= \int_U \phi(x) \left[\sum\nolimits_\alpha b_\alpha(x) \, (\partial^\alpha f)(x) \right] dx && b_\alpha := \sum_{\beta \geq \alpha} (-1)^{|\beta|} \binom{\beta}{\alpha} \partial^{\beta-\alpha} c_{\beta} \\
&= \left\langle \left(\sum\nolimits_\alpha b_\alpha \partial^\alpha \right)(f), \phi \right\rangle.
\end{aligned}$$
This computation, combined with the fact that the formal transpose of the formal transpose is the original differential operator, that is, $P_{**} = P$, enables us to arrive at the correct definition: the formal transpose induces the (continuous) canonical linear operator $P_* : C_c^\infty(U) \to C_c^\infty(U)$ defined by $\phi \mapsto P_*(\phi)$. We claim that the transpose of this map, ${}^t P_* : \mathcal{D}'(U) \to \mathcal{D}'(U)$, can be taken as $D_P$. To see this, for every $\phi \in \mathcal{D}(U)$, compute its action on a distribution of the form $D_f$ with $f \in C^\infty(U)$:
$$\left\langle {}^t P_*\left(D_f\right), \phi \right\rangle = \left\langle D_{P_{**} f}, \phi \right\rangle \;\;\text{(by the computation above applied to } P_*\text{)} \;=\; \left\langle D_{P f}, \phi \right\rangle \;\;\text{(since } P_{**} = P\text{)}.$$
We call the continuous linear operator $D_P := {}^t P_* : \mathcal{D}'(U) \to \mathcal{D}'(U)$ the extension of $P$ to distributions. Its action on an arbitrary distribution $S$ is defined via
$$D_P(S)(\phi) = S\left(P_*(\phi)\right) \quad \text{for all } \phi \in \mathcal{D}(U).$$
If $(T_i)_{i=1}^\infty$ converges to $T \in \mathcal{D}'(U)$ then for every multi-index $\alpha$, $(\partial^\alpha T_i)_{i=1}^\infty$ converges to $\partial^\alpha T \in \mathcal{D}'(U)$.
Multiplication of distributions by smooth functions
A differential operator of order 0 is just multiplication by a smooth function, and conversely, if $f$ is a smooth function then $P := f(x)$ is a differential operator of order 0, whose formal transpose is itself (that is, $P_* = P$). The induced differential operator $D_P : \mathcal{D}'(U) \to \mathcal{D}'(U)$ maps a distribution $T$ to a distribution denoted by $fT := D_P(T)$. We have thus defined the multiplication of a distribution by a smooth function.

We now give an alternative presentation of the multiplication of a distribution $T$ on $U$ by a smooth function $m : U \to \R$. The product $mT$ is defined by
$$\langle mT, \phi \rangle = \langle T, m\phi \rangle \qquad \text{for all } \phi \in \mathcal{D}(U).$$
This definition coincides with the transpose definition since if $M : \mathcal{D}(U) \to \mathcal{D}(U)$ is the operator of multiplication by the function $m$ (that is, $(M\phi)(x) = m(x)\phi(x)$), then
$$\int_U (M \phi)(x) \psi(x)\,dx = \int_U m(x) \phi(x) \psi(x)\,dx = \int_U \phi(x) \, m(x) \psi(x) \,dx = \int_U \phi(x) (M \psi)(x)\,dx,$$
so that ${}^t M = M$.

Under multiplication by smooth functions, $\mathcal{D}'(U)$ is a module over the ring $C^\infty(U)$. With this definition of multiplication by a smooth function, the ordinary product rule of calculus remains valid. However, a number of unusual identities also arise. For example, if $\delta$ is the Dirac delta distribution on $\R$, then $m\delta = m(0)\delta$, and if $\delta'$ is the derivative of the delta distribution, then
$$m\delta' = m(0) \delta' - m' \delta = m(0) \delta' - m'(0) \delta.$$
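As a quick check of the last identity, take $m(x) = x$: then $m(0) = 0$ and $m'(0) = 1$, so $x\delta' = -\delta$, which can also be verified directly from the definitions:
$$\langle x\delta', \phi \rangle = \langle \delta', x\phi \rangle = -\bigl(x\phi(x)\bigr)'\big|_{x=0} = -\phi(0) = \langle -\delta, \phi \rangle.$$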
The bilinear multiplication map $C^\infty(\R^n) \times \mathcal{D}'(\R^n) \to \mathcal{D}'(\R^n)$ given by $(f, T) \mapsto fT$ is not continuous; it is, however, hypocontinuous.

Example. The product of any distribution $T$ with the function that is identically $1$ on $U$ is equal to $T$.

Example. Suppose $(f_i)_{i=1}^\infty$ is a sequence of test functions on $U$ that converges to the constant function $1 \in C^\infty(U)$. For any distribution $T$ on $U$, the sequence $(f_i T)_{i=1}^\infty$ converges to $T \in \mathcal{D}'(U)$.

If $(T_i)_{i=1}^\infty$ converges to $T \in \mathcal{D}'(U)$ and $(f_i)_{i=1}^\infty$ converges to $f \in C^\infty(U)$, then $(f_i T_i)_{i=1}^\infty$ converges to $fT \in \mathcal{D}'(U)$.
Problem of multiplying distributions
It is easy to define the product of a distribution with a smooth function, or more generally the product of two distributions whose singular supports are disjoint. With more effort it is possible to define a well-behaved product of several distributions provided their wave front sets at each point are compatible. A limitation of the theory of distributions (and hyperfunctions) is that there is no associative product of two distributions extending the product of a distribution by a smooth function, as has been proved by Laurent Schwartz in the 1950s. For example, if
\operatorname{p.v.}
1 x
is the distribution obtained by the Cauchy principal value$\left(\operatorname \frac\right)(\phi) = \lim_ \int_$
\geq \varepsilon
\frac\, dx \quad \text \phi \in \mathcal(\R).
If
\delta
is the Dirac delta distribution then$(\delta \times x) \times \operatorname \frac = 0$but,$\delta \times \left(x \times \operatorname \frac\right) = \delta$so the product of a distribution by a smooth function (which is always well defined) cannot be extended to an associative product on the space of distributions.
Thus, nonlinear problems cannot be posed in general and thus not solved within distribution theory alone. In the context of quantum field theory, however, solutions can be found. In more than two spacetime dimensions the problem is related to the regularization of divergences. Here Henri Epstein and Vladimir Glaser developed the mathematically rigorous (but extremely technical) causal perturbation theory. This does not solve the problem in other situations. Many other interesting theories are nonlinear, like for example the Navier–Stokes equations of fluid dynamics.
Several not entirely satisfactory theories of algebras of generalized functions have been developed, among which Colombeau's (simplified) algebra is maybe the most popular in use today.
Inspired by Lyons' rough path theory,[10] Martin Hairer proposed a consistent way of multiplying distributions with certain structure (regularity structures[11]), available in many examples from stochastic analysis, notably stochastic partial differential equations. See also Gubinelli–Imkeller–Perkowski (2015) for a related development based on Bony's paraproduct from Fourier analysis.
Composition with a smooth function
Let $T$ be a distribution on $U.$ Let $V$ be an open set in $\R^n$ and $F : V \to U.$ If $F$ is a submersion then it is possible to define$T \circ F \in \mathcal{D}'(V).$
This is the composition of the distribution $T$ with $F$, and is also called the pullback of $T$ along $F$, sometimes written$F^\sharp : T \mapsto F^\sharp T = T \circ F.$
The pullback is often denoted $F^{*},$ although this notation should not be confused with the use of '*' to denote the adjoint of a linear mapping.
The condition that $F$ be a submersion is equivalent to the requirement that the Jacobian derivative $dF(x)$ of $F$ is a surjective linear map for every $x \in V.$ A necessary (but not sufficient) condition for extending $F^{\#}$ to distributions is that $F$ be an open mapping.[12] The Inverse function theorem ensures that a submersion satisfies this condition.
If $F$ is a submersion, then $F^{\#}$ is defined on distributions by finding the transpose map. Uniqueness of this extension is guaranteed since $F^{\#}$ is a continuous linear operator on $\mathcal{D}(U).$ Existence, however, requires using the change of variables formula, the inverse function theorem (locally) and a partition of unity argument.[13]
In the special case when $F$ is a diffeomorphism from an open subset $V$ of $\R^n$ onto an open subset $U$ of $\R^n,$ change of variables under the integral gives:$\int_V \phi\circ F(x)\, \psi(x)\,dx = \int_U \phi(x)\, \psi \left(F^{-1}(x) \right) \left|\det dF^{-1}(x) \right|\,dx.$
In this particular case, then, $F^{\#}$ is defined by the transpose formula:$\left\langle F^\sharp T, \phi \right\rangle = \left\langle T, \left|\det d(F^{-1}) \right|\,\phi\circ F^{-1} \right\rangle.$
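As a small worked example (an illustration, not part of the original text): take $n = 1,$ $V = U = \R,$ and the diffeomorphism $F(x) = cx$ with $c \neq 0.$ Then $F^{-1}(x) = x/c,$ $\left|\det dF^{-1}\right| = 1/|c|,$ and the transpose formula applied to $T = \delta$ gives$\left\langle F^\sharp \delta, \phi \right\rangle = \frac{1}{|c|}\,\phi\left(F^{-1}(0)\right) = \frac{1}{|c|}\,\phi(0),$which is the familiar scaling identity $\delta(cx) = \frac{1}{|c|}\,\delta(x).$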
Convolution
Under some circumstances, it is possible to define the convolution of a function with a distribution, or even the convolution of two distributions. Recall that if $f$ and $g$ are functions on $\R^n$ then we denote by $f \ast g$ the convolution of $f$ and $g,$ defined at $x \in \R^n$ to be the integral$(f \ast g)(x) := \int_{\R^n} f(x-y) g(y) \,dy = \int_{\R^n} f(y)g(x-y) \,dy$provided that the integral exists. If $1 \leq p, q, r \leq \infty$ are such that $\frac{1}{r} = \frac{1}{p} + \frac{1}{q} - 1$ then for any functions $f \in L^p(\R^n)$ and $g \in L^q(\R^n)$ we have $f \ast g \in L^r(\R^n)$ and$\|f \ast g\|_{L^r} \leq \|f\|_{L^p} \|g\|_{L^q}.$
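A minimal numerical sketch of the convolution integral itself (the grid, step size, and choice of $f$ and $g$ below are illustrative assumptions, not part of the text): with $f = g$ the indicator function of $[0,1],$ the exact convolution is the "triangle" function whose maximum value is $1.$

```python
# Riemann-sum approximation of (f * g)(x) = ∫ f(x - y) g(y) dy for
# f = g = indicator of [0, 1]; the exact result peaks at 1 (at x = 1).
import numpy as np

dx = 0.001
x = np.arange(-2.0, 4.0, dx)
f = ((x >= 0) & (x <= 1)).astype(float)
g = ((x >= 0) & (x <= 1)).astype(float)

conv = np.convolve(f, g) * dx   # discrete convolution scaled by the step size
print(round(conv.max(), 3))     # close to 1.0, the peak of the triangle function
```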
If $f$ and $g$ are continuous functions on $\R^n,$ at least one of which has compact support, then $\operatorname{supp}(f \ast g) \subseteq \operatorname{supp}(f) + \operatorname{supp}(g)$ and if $A \subseteq \R^n$ then the values of $f \ast g$ on $A$ do not depend on the values of $f$ outside of the Minkowski sum $A - \operatorname{supp}(g) = \{a - s : a \in A, s \in \operatorname{supp}(g)\}.$
Importantly, if $g \in L^1(\R^n)$ has compact support then for any $0 \leq k \leq \infty,$ the convolution map $f \mapsto f \ast g$ is continuous when considered as the map $C^k(\R^n) \to C^k(\R^n)$ or as the map $C_c^k(\R^n) \to C_c^k(\R^n).$
Translation and symmetry
Given $a \in \R^n,$ the translation operator $\tau_a$ sends $f : \R^n \to \Complex$ to $\tau_a f : \R^n \to \Complex,$ defined by $\tau_a f(y) = f(y - a).$ This can be extended by the transpose to distributions in the following way: given a distribution $T,$ the translation of $T$ by $a$ is the distribution $\tau_a T : \mathcal{D}(\R^n) \to \Complex$ defined by $\tau_a T(\phi) := \left\langle T, \tau_{-a} \phi \right\rangle.$[14]
Given $f : \R^n \to \Complex,$ define the function $\tilde{f} : \R^n \to \Complex$ by $\tilde{f}(x) := f(-x).$ Given a distribution $T,$ let $\tilde{T} : \mathcal{D}(\R^n) \to \Complex$ be the distribution defined by $\tilde{T}(\phi) := T\left(\tilde{\phi}\right).$ The operator $T \mapsto \tilde{T}$ is called the symmetry with respect to the origin.
Convolution of a test function with a distribution
Convolution with $f \in \mathcal{D}(\R^n)$ defines a linear map:$C_f : \mathcal{D}(\R^n) \to \mathcal{D}(\R^n), \qquad g \mapsto f \ast g,$which is continuous with respect to the canonical LF space topology on $\mathcal{D}(\R^n).$
Convolution of $f$ with a distribution $T \in \mathcal{D}'(\R^n)$ can be defined by taking the transpose of $C_f$ relative to the duality pairing of $\mathcal{D}(\R^n)$ with the space $\mathcal{D}'(\R^n)$ of distributions. If $f, g, \phi \in \mathcal{D}(\R^n),$ then by Fubini's theorem$\langle C_f g, \phi \rangle = \int_{\R^n}\phi(x)\int_{\R^n}f(x-y) g(y) \,dy \,dx = \left\langle g, C_{\tilde{f}}\,\phi \right\rangle.$
Extending by continuity, the convolution of $f$ with a distribution $T$ is defined by$\langle f \ast T, \phi \rangle = \left\langle T, \tilde{f} \ast \phi \right\rangle, \quad \text{for all } \phi \in \mathcal{D}(\R^n).$
An alternative way to define the convolution of a test function $f$ and a distribution $T$ is to use the translation operator $\tau_a.$ The convolution of the compactly supported function $f$ and the distribution $T$ is then the function defined for each $x \in \R^n$ by$(f \ast T)(x) = \left\langle T, \tau_x \tilde{f} \right\rangle.$
It can be shown that the convolution of a smooth, compactly supported function and a distribution is a smooth function. If the distribution $T$ has compact support then if $f$ is a polynomial (resp. an exponential function, an analytic function, the restriction of an entire analytic function on $\Complex^n$ to $\R^n,$ the restriction of an entire function of exponential type in $\Complex^n$ to $\R^n$) then the same is true of $T \ast f.$
If the distribution $T$ has compact support as well, then $f \ast T$ is a compactly supported function, and the Titchmarsh convolution theorem implies that:$\operatorname{ch}(\operatorname{supp}(f \ast T)) = \operatorname{ch}(\operatorname{supp}(f)) + \operatorname{ch}(\operatorname{supp}(T))$where $\operatorname{ch}$ denotes the convex hull and $\operatorname{supp}$ denotes the support.
Convolution of a smooth function with a distribution
Let $f \in C^{\infty}(\R^n)$ and $T \in \mathcal{D}'(\R^n)$ and assume that at least one of $f$ and $T$ has compact support. The convolution of $f$ and $T,$ denoted by $f \ast T$ or by $T \ast f,$ is the smooth function$f \ast T : \R^n \to \Complex, \qquad x \mapsto \left\langle T, \tau_x \tilde{f} \right\rangle,$satisfying$\operatorname{supp}(f \ast T) \subseteq \operatorname{supp}(f) + \operatorname{supp}(T)$and, for all $p \in \N^n$:$\partial^p \left\langle T, \tau_x \tilde{f} \right\rangle = \left\langle T, \partial^p \tau_x \tilde{f} \right\rangle \quad \text{and} \quad \partial^p (T \ast f) = (\partial^p T) \ast f = T \ast (\partial^p f).$
If $T$ is a distribution then the map $f \mapsto T \ast f$ is continuous as a map $\mathcal{D}(\R^n) \to C^{\infty}(\R^n);$ if in addition $T$ has compact support then it is also continuous as the map $C^{\infty}(\R^n) \to C^{\infty}(\R^n)$ and continuous as the map $\mathcal{D}(\R^n) \to \mathcal{D}(\R^n).$
If $L : \mathcal{D}(\R^n) \to C^{\infty}(\R^n)$ is a continuous linear map such that $L \partial^{\alpha} \phi = \partial^{\alpha} L \phi$ for all $\alpha$ and all $\phi \in \mathcal{D}(\R^n),$ then there exists a distribution $T \in \mathcal{D}'(\R^n)$ such that $L\phi = T \ast \phi$ for all $\phi \in \mathcal{D}(\R^n).$
Example. Let $H$ be the Heaviside function on $\R.$ For any $\phi \in \mathcal{D}(\R),$$(H \ast \phi)(x) = \int_{-\infty}^x \phi(t) \, dt.$
Let $\delta$ be the Dirac measure at 0 and $\delta'$ its derivative as a distribution. Then $\delta' \ast H = \delta$ and $1 \ast \delta' = 0.$ Importantly, the associative law fails to hold:$1 = 1 \ast \delta = 1 \ast (\delta' \ast H) \neq (1 \ast \delta') \ast H = 0 \ast H = 0.$
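A quick numerical sketch of the first identity in the example (the grid and the particular bump function below are illustrative assumptions): the convolution $H \ast \phi$ is the running integral of $\phi,$ so its largest value equals the total integral of $\phi.$

```python
# Approximate (H * phi)(x) = ∫_{-∞}^{x} phi(t) dt on a grid and compare its
# maximum with the total integral of phi.
import numpy as np

dx = 0.001
t = np.arange(-5.0, 5.0, dx)

phi = np.zeros_like(t)                 # a smooth bump supported in [-1, 1]
inside = np.abs(t) < 1
phi[inside] = np.exp(-1.0 / (1.0 - t[inside] ** 2))

H = (t >= 0).astype(float)             # Heaviside function
H_conv_phi = np.convolve(H, phi) * dx  # samples of (H * phi) on a shifted grid

print(round(H_conv_phi.max(), 4))      # ≈ total integral of phi ...
print(round(np.trapz(phi, t), 4))      # ... namely ∫ phi(t) dt
```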
Convolution of distributions
It is also possible to define the convolution of two distributions $S$ and $T$ on $\R^n,$ provided one of them has compact support. Informally, in order to define $S \ast T$ where $T$ has compact support, the idea is to extend the definition of the convolution $\ast$ to a linear operation on distributions so that the associativity formula$S \ast (T \ast \phi) = (S \ast T) \ast \phi$continues to hold for all test functions $\phi.$[15]
It is also possible to provide a more explicit characterization of the convolution of distributions. Suppose that $S$ and $T$ are distributions and that $S$ has compact support. Then the linear maps$f \mapsto f \ast \tilde{S} : \mathcal{D}(\R^n) \to \mathcal{D}(\R^n) \qquad \text{and} \qquad f \mapsto f \ast \tilde{T} : \mathcal{D}(\R^n) \to C^{\infty}(\R^n)$are continuous. Their transposes$^{t}\left(\bullet \ast \tilde{S}\right) : \mathcal{D}'(\R^n) \to \mathcal{D}'(\R^n) \qquad \text{and} \qquad ^{t}\left(\bullet \ast \tilde{T}\right) : \mathcal{E}'(\R^n) \to \mathcal{D}'(\R^n)$are consequently continuous and it can also be shown that$^{t}\left(\bullet \ast \tilde{S}\right)(T) = {}^{t}\left(\bullet \ast \tilde{T}\right)(S).$
This common value is called the convolution of $S$ and $T$; it is a distribution that is denoted by $S \ast T$ or $T \ast S.$ It satisfies $\operatorname{supp}(S \ast T) \subseteq \operatorname{supp}(S) + \operatorname{supp}(T).$
If $S$ and $T$ are two distributions, at least one of which has compact support, then for any $a \in \R^n,$ $\tau_a(S \ast T) = \left(\tau_a S\right) \ast T = S \ast \left(\tau_a T\right).$
If $T$ is a distribution in $\R^n$ and if $\delta$ is a Dirac measure then $T \ast \delta = T = \delta \ast T$; thus $\delta$ is the identity element of the convolution operation. Moreover, if $f$ is a function then $f \ast \delta' = f' = \delta' \ast f$ where now the associativity of convolution implies that $f' \ast g = g' \ast f$ for all functions $f$ and $g.$
Suppose that it is $T$ that has compact support. For $\phi \in \mathcal{D}(\R^n)$ consider the function$\psi(x) = \langle T, \tau_{-x} \phi \rangle.$
It can be readily shown that this defines a smooth function of $x,$ which moreover has compact support. The convolution of $S$ and $T$ is defined by$\langle S \ast T, \phi \rangle = \langle S, \psi \rangle.$
This generalizes the classical notion of convolution of functions and is compatible with differentiation in the following sense: for every multi-index $\alpha,$$\partial^\alpha(S \ast T) = (\partial^\alpha S) \ast T = S \ast (\partial^\alpha T).$
The convolution of a finite number of distributions, all of which (except possibly one) have compact support, is associative.
This definition of convolution remains valid under less restrictive assumptions about $S$ and $T.$[16]
The convolution of distributions with compact support induces a continuous bilinear map $\mathcal{E}' \times \mathcal{E}' \to \mathcal{E}'$ defined by $(S,T) \mapsto S \ast T,$ where $\mathcal{E}'$ denotes the space of distributions with compact support. However, the convolution map as a function $\mathcal{E}' \times \mathcal{D}' \to \mathcal{D}'$ is not continuous, although it is separately continuous. The convolution maps $\mathcal{D}(\R^n) \times \mathcal{D}' \to \mathcal{D}'$ and $\mathcal{D}(\R^n) \times \mathcal{D}' \to \mathcal{D}(\R^n)$ given by $(f,T) \mapsto f \ast T$ both fail to be continuous. Each of these non-continuous maps is, however, separately continuous and hypocontinuous.
Convolution versus multiplication
In general, regularity is required for multiplication products and locality is required for convolution products. It is expressed in the following extension of the Convolution Theorem which guarantees the existence of both convolution and multiplication products. Let $F(\alpha) = f \in \mathcal{O}'_C$ be a rapidly decreasing tempered distribution or, equivalently, let $F(f) = \alpha \in \mathcal{O}_M$ be an ordinary (slowly growing, smooth) function within the space of tempered distributions, and let $F$ be the normalized (unitary, ordinary frequency) Fourier transform.[17] Then$F(f * g) = F(f) \cdot F(g) \qquad \text{and} \qquad F(\alpha \cdot g) = F(\alpha) * F(g)$hold within the space of tempered distributions.[18] [19] [20] In particular, these equations become the Poisson Summation Formula if $g \equiv \operatorname{Ш}$ is the Dirac comb.[21] The space of all rapidly decreasing tempered distributions is also called the space of convolutors $\mathcal{O}'_C$ and the space of all ordinary functions within the space of tempered distributions is also called the space of multipliers $\mathcal{O}_M.$ More generally, $F(\mathcal{O}'_C) = \mathcal{O}_M$ and $F(\mathcal{O}_M) = \mathcal{O}'_C.$[22] A particular case is the Paley–Wiener–Schwartz Theorem which states that $F(\mathcal{E}') = \operatorname{PW}$ and $F(\operatorname{PW}) = \mathcal{E}'.$ This is because $\mathcal{E}' \subseteq \mathcal{O}'_C$ and $\operatorname{PW} \subseteq \mathcal{O}_M.$ In other words, compactly supported tempered distributions $\mathcal{E}'$ belong to the space of convolutors $\mathcal{O}'_C,$ and Paley–Wiener functions $\operatorname{PW},$ better known as bandlimited functions, belong to the space of multipliers $\mathcal{O}_M.$
For example, let $g \equiv \operatorname{Ш} \in \mathcal{S}'$ be the Dirac comb and $f \equiv \delta \in \mathcal{E}'$ be the Dirac delta; then $\alpha \equiv 1 \in \operatorname{PW}$ is the function that is constantly one, and both equations yield the Dirac-comb identity. Another example is to let $g$ be the Dirac comb and $f \equiv \operatorname{rect} \in \mathcal{E}'$ be the rectangular function; then $\alpha \equiv \operatorname{sinc} \in \operatorname{PW}$ is the sinc function, and both equations yield the Classical Sampling Theorem for suitable $\operatorname{rect}$ functions. More generally, if $g$ is the Dirac comb and $f \in \mathcal{S} \subseteq \mathcal{O}'_C \cap \mathcal{O}_M$ is a smooth window function (Schwartz function), for example the Gaussian, then $\alpha \in \mathcal{S}$ is another smooth window function (Schwartz function). They are known as mollifiers, especially in partial differential equations theory, or as regularizers in physics because they allow turning generalized functions into regular functions.
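A discrete analogue of the first identity $F(f * g) = F(f) \cdot F(g)$ can be checked numerically (this is a sketch for the discrete Fourier transform, not the tempered-distribution setting of the theorem; the signals are arbitrary random vectors chosen here for illustration):

```python
# For the DFT, circular convolution corresponds to pointwise multiplication
# of the transforms.
import numpy as np

rng = np.random.default_rng(0)
N = 64
f = rng.standard_normal(N)
g = rng.standard_normal(N)

# Circular convolution computed directly from the definition ...
conv = np.zeros(N)
for k in range(N):
    for m in range(N):
        conv[k] += f[m] * g[(k - m) % N]

# ... and via the convolution theorem.
conv_fft = np.fft.ifft(np.fft.fft(f) * np.fft.fft(g)).real

print(np.allclose(conv, conv_fft))  # True
```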
Tensor products of distributions
Let $U \subseteq \R^m$ and $V \subseteq \R^n$ be open sets. Assume all vector spaces to be over the field $\mathbb{F},$ where $\mathbb{F} = \R$ or $\Complex.$ For $f \in \mathcal{D}(U \times V)$ define for every $u \in U$ and every $v \in V$ the following functions:$f_u : V \to \mathbb{F}, \; y \mapsto f(u, y) \qquad \text{and} \qquad f^v : U \to \mathbb{F}, \; x \mapsto f(x, v).$
Given $S \in \mathcal{D}'(U)$ and $T \in \mathcal{D}'(V),$ define the following functions:$\langle S, f^{\bullet}\rangle : V \to \mathbb{F}, \; v \mapsto \langle S, f^v \rangle \qquad \text{and} \qquad \langle T, f_{\bullet}\rangle : U \to \mathbb{F}, \; u \mapsto \langle T, f_u \rangle,$where $\langle T, f_{\bullet}\rangle \in \mathcal{D}(U)$ and $\langle S, f^{\bullet}\rangle \in \mathcal{D}(V).$
These definitions associate every $S \in \mathcal{D}'(U)$ and $T \in \mathcal{D}'(V)$ with the (respective) continuous linear map:$\mathcal{D}(U \times V) \to \mathcal{D}(V), \; f \mapsto \langle S, f^{\bullet} \rangle \qquad \text{and} \qquad \mathcal{D}(U \times V) \to \mathcal{D}(U), \; f \mapsto \langle T, f_{\bullet} \rangle.$
Moreover, if either $S$ (resp. $T$) has compact support then it also induces a continuous linear map of $C^{\infty}(U \times V) \to C^{\infty}(V)$ (resp. $C^{\infty}(U \times V) \to C^{\infty}(U)$). The tensor product of $S$ and $T,$ denoted by $S \otimes T$ or $T \otimes S,$ is the distribution in $U \times V$ defined by:$(S \otimes T)(f) := \langle S, \langle T, f_{\bullet} \rangle \rangle = \langle T, \langle S, f^{\bullet}\rangle \rangle.$
Spaces of distributions
For all $0 < k < \infty$ and all $1 < p < \infty,$ every one of the following canonical injections is continuous and has an image (also called the range) that is a dense subset of its codomain:$\begin{matrix}C_c^\infty(U) & \to & C_c^k(U) & \to & C_c^0(U) & \to & L_c^\infty(U) & \to & L_c^p(U) & \to & L_c^1(U) \\\downarrow & &\downarrow && \downarrow \\C^\infty(U) & \to & C^k(U) & \to & C^0(U) \\\end{matrix}$where the topologies on $L_c^q(U)$ ($1 \leq q \leq \infty$) are defined as direct limits of the spaces $L_c^q(K)$ in a manner analogous to how the topologies on $C_c^k(U)$ were defined (so in particular, they are not the usual norm topologies). The range of each of the maps above (and of any composition of the maps above) is dense in its codomain.
Suppose that $X$ is one of the spaces $C_c^k(U)$ (for $k \in \{0, 1, \ldots, \infty\}$) or $L_c^p(U)$ (for $1 \leq p \leq \infty$) or $L^p(U)$ (for $1 \leq p < \infty$). Because the canonical injection $\operatorname{In}_X : C_c^{\infty}(U) \to X$ is a continuous injection whose image is dense in the codomain, this map's transpose ${}^{t}\operatorname{In}_X : X'_b \to \mathcal{D}'(U) = \left(C_c^{\infty}(U)\right)'_b$ is a continuous injection. This injective transpose map thus allows the continuous dual space $X'$ of $X$ to be identified with a certain vector subspace of the space $\mathcal{D}'(U)$ of all distributions (specifically, it is identified with the image of this transpose map). This transpose map is continuous but it is not necessarily a topological embedding. A linear subspace of $\mathcal{D}'(U)$ carrying a locally convex topology that is finer than the subspace topology induced on it by $\mathcal{D}'(U) = \left(C_c^{\infty}(U)\right)'_b$ is called a space of distributions. Almost all of the spaces of distributions mentioned in this article arise in this way (for example, tempered distributions, restrictions, distributions of order $\leq$ some integer, distributions induced by a positive Radon measure, distributions induced by an $L^p$-function, etc.) and any representation theorem about the continuous dual space of $X$ may, through the transpose ${}^{t}\operatorname{In}_X : X'_b \to \mathcal{D}'(U),$ be transferred directly to elements of the space $\operatorname{Im}\left({}^{t}\operatorname{In}_X\right).$
The inclusion map $\operatorname{In} : C_c^{\infty}(U) \to C_c^{0}(U)$ is a continuous injection whose image is dense in its codomain, so the transpose ${}^{t}\operatorname{In} : (C_c^{0}(U))'_b \to \mathcal{D}'(U) = (C_c^{\infty}(U))'_b$ is also a continuous injection.
Note that the continuous dual space $(C_c^{0}(U))'_b$ can be identified as the space of Radon measures, where there is a one-to-one correspondence between the continuous linear functionals $T \in (C_c^{0}(U))'_b$ and integrals with respect to a Radon measure; that is,
• if $T \in (C_c^{0}(U))'_b$ then there exists a Radon measure $\mu$ on $U$ such that for all $f \in C_c^0(U), T(f) = \int_U f \, d\mu,$ and
• if $\mu$ is a Radon measure on $U$ then the linear functional on $C_c^{0}(U)$ defined by sending $f \in C_c^0(U)$ to $\int_U f \, d\mu$ is continuous.
Through the injection ${}^{t}\operatorname{In} : (C_c^{0}(U))'_b \to \mathcal{D}'(U),$ every Radon measure becomes a distribution on $U$. If $f$ is a locally integrable function on $U$ then the distribution $\phi \mapsto \int_U f(x) \phi(x) \, dx$ is a Radon measure; so Radon measures form a large and important space of distributions.
The following is the theorem of the structure of distributions of Radon measures, which shows that every Radon measure can be written as a sum of derivatives of locally $L^{\infty}$ functions on $U$:
A linear function $T$ on a space of functions is called positive if whenever a function $f$ that belongs to the domain of $T$ is non-negative (that is, $f$ is real-valued and $f \geq 0$) then $T(f) \geq 0.$ One may show that every positive linear functional on $C_c^{0}(U)$ is necessarily continuous (that is, necessarily a Radon measure). Lebesgue measure is an example of a positive Radon measure.
Locally integrable functions as distributions
One particularly important class of Radon measures are those that are induced by locally integrable functions. The function $f : U \to \R$ is called locally integrable if it is Lebesgue integrable over every compact subset of $U$. This is a large class of functions which includes all continuous functions and all $L^p$ functions. The topology on $\mathcal{D}(U)$ is defined in such a fashion that any locally integrable function $f$ yields a continuous linear functional on $\mathcal{D}(U)$ – that is, an element of $\mathcal{D}'(U)$ – denoted here by $T_f,$ whose value on the test function $\phi$ is given by the Lebesgue integral:$\langle T_f, \phi \rangle = \int_U f \phi\,dx.$
Conventionally, one abuses notation by identifying $T_f$ with $f,$ provided no confusion can arise, and thus the pairing between $T_f$ and $\phi$ is often written$\langle f, \phi \rangle = \langle T_f, \phi \rangle.$
If $f$ and $g$ are two locally integrable functions, then the associated distributions $T_f$ and $T_g$ are equal to the same element of $\mathcal{D}'(U)$ if and only if $f$ and $g$ are equal almost everywhere. In a similar manner, every Radon measure $\mu$ on $U$ defines an element of $\mathcal{D}'(U)$ whose value on the test function $\phi$ is $\int\phi \,d\mu.$ As above, it is conventional to abuse notation and write the pairing between a Radon measure $\mu$ and a test function $\phi$ as $\langle \mu, \phi \rangle.$ Conversely, as shown in a theorem by Schwartz (similar to the Riesz representation theorem), every distribution which is non-negative on non-negative functions is of this form for some (positive) Radon measure.
Test functions as distributions
The test functions are themselves locally integrable, and so define distributions. The space of test functions $C_c^{\infty}(U)$ is sequentially dense in $\mathcal{D}'(U)$ with respect to the strong topology on $\mathcal{D}'(U).$ This means that for any $T \in \mathcal{D}'(U),$ there is a sequence of test functions $(\phi_i)_{i=1}^{\infty}$ that converges to $T \in \mathcal{D}'(U)$ (in its strong dual topology) when considered as a sequence of distributions. Or equivalently,$\langle \phi_i, \psi \rangle \to \langle T, \psi \rangle \qquad \text{for all } \psi \in \mathcal{D}(U).$
Distributions with compact support
The inclusion map $\operatorname{In} : C_c^{\infty}(U) \to C^{\infty}(U)$ is a continuous injection whose image is dense in its codomain, so the transpose map ${}^{t}\operatorname{In} : (C^{\infty}(U))'_b \to \mathcal{D}'(U) = (C_c^{\infty}(U))'_b$ is also a continuous injection. Thus the image of the transpose, denoted by $\mathcal{E}'(U),$ forms a space of distributions.
The elements of $\mathcal{E}'(U) = (C^{\infty}(U))'_b$ can be identified as the space of distributions with compact support. Explicitly, if $T$ is a distribution on $U$ then the following are equivalent to $T \in \mathcal{E}'(U)$:
• The support of $T$ is compact.
• The restriction of $T$ to $C_c^{\infty}(U),$ when that space is equipped with the subspace topology inherited from $C^{\infty}(U)$ (a coarser topology than the canonical LF topology), is continuous.
• There is a compact subset $K$ of $U$ such that for every test function $\phi$ whose support is completely outside of $K$, we have $T(\phi) = 0.$
Compactly supported distributions define continuous linear functionals on the space $C^{\infty}(U)$; recall that the topology on $C^{\infty}(U)$ is defined such that a sequence of test functions $\phi_k$ converges to 0 if and only if all derivatives of $\phi_k$ converge uniformly to 0 on every compact subset of $U$. Conversely, it can be shown that every continuous linear functional on this space defines a distribution of compact support. Thus compactly supported distributions can be identified with those distributions that can be extended from $C_c^{\infty}(U)$ to $C^{\infty}(U).$
Distributions of finite order
Let $k \in \N.$ The inclusion map $\operatorname{In} : C_c^{\infty}(U) \to C_c^{k}(U)$ is a continuous injection whose image is dense in its codomain, so the transpose ${}^{t}\operatorname{In} : (C_c^{k}(U))'_b \to \mathcal{D}'(U) = (C_c^{\infty}(U))'_b$ is also a continuous injection. Consequently, the image of ${}^{t}\operatorname{In},$ denoted by $\mathcal{D}'^{k}(U),$ forms a space of distributions. The elements of $\mathcal{D}'^{k}(U)$ are the distributions of order $\leq k.$ The distributions of order $\leq 0,$ which are also called distributions of order 0, are exactly the distributions that are Radon measures (described above).
For $0 \neq k \in \N,$ a distribution of order $k$ is a distribution of order $\leq k$ that is not a distribution of order $\leq k - 1.$
A distribution is said to be of finite order if there is some integer $k$ such that it is a distribution of order $\leq k,$ and the set of distributions of finite order is denoted by $\mathcal{D}'_F(U).$ Note that if $k \leq l$ then $\mathcal{D}'^{k}(U) \subseteq \mathcal{D}'^{l}(U),$ so that $\mathcal{D}'_F(U) := \bigcup_{n=0}^{\infty} \mathcal{D}'^{n}(U)$ is a vector subspace of $\mathcal{D}'(U)$; in general the inclusion $\mathcal{D}'_F(U) \subseteq \mathcal{D}'(U)$ is proper (see the example of a distribution of infinite order below).
Structure of distributions of finite order
Every distribution with compact support in $U$ is a distribution of finite order. Indeed, every distribution in $U$ is a distribution of finite order, in the following sense: if $V$ is an open and relatively compact subset of $U$ and if $\rho_{VU}$ is the restriction mapping from $U$ to $V$, then the image of $\mathcal{D}'(U)$ under $\rho_{VU}$ is contained in $\mathcal{D}'_F(V).$
The following is the theorem of the structure of distributions of finite order, which shows that every distribution of finite order can be written as a sum of derivatives of Radon measures:
Example. (Distributions of infinite order) Let $U := (0, \infty)$ and for every test function $f,$ let$S f := \sum_{m=1}^\infty (\partial^m f)\left(\frac{1}{m}\right).$
Then $S$ is a distribution of infinite order on $U$. Moreover, $S$ can not be extended to a distribution on $\R$; that is, there exists no distribution $T$ on $\R$ such that the restriction of $T$ to $U$ is equal to $S.$
Tempered distributions and Fourier transform
Defined below are the tempered distributions, which form a subspace of $\mathcal{D}'(\R^n),$ the space of distributions on $\R^n.$ This is a proper subspace: while every tempered distribution is a distribution and an element of $\mathcal{D}'(\R^n),$ the converse is not true. Tempered distributions are useful if one studies the Fourier transform, since all tempered distributions have a Fourier transform, which is not true for an arbitrary distribution in $\mathcal{D}'(\R^n).$
Schwartz space
The Schwartz space, $\mathcal{S}(\R^n),$ is the space of all smooth functions that are rapidly decreasing at infinity along with all partial derivatives. Thus $\phi : \R^n \to \R$ is in the Schwartz space provided that any derivative of $\phi,$ multiplied with any power of $|x|,$ converges to 0 as $|x| \to \infty.$ These functions form a complete TVS with a suitably defined family of seminorms. More precisely, for any multi-indices $\alpha$ and $\beta$ define:$p_{\alpha, \beta}(\phi) ~=~ \sup_{x \in \R^n} \left|x^\alpha \partial^\beta \phi(x) \right|.$
Then $\phi$ is in the Schwartz space if all the values satisfy:$p_{\alpha, \beta}(\phi) < \infty.$
The family of seminorms $p_{\alpha, \beta}$ defines a locally convex topology on the Schwartz space. For $n \geq 1,$ the seminorms are, in fact, norms on the Schwartz space. One can also use the following family of seminorms to define the topology:$|f|_{m, k} = \sup_{|p| \le m} \left(\sup_{x \in \R^n} \left\{(1 + |x|)^k \left|(\partial^p f)(x)\right|\right\}\right), \qquad k, m \in \N.$
Otherwise, one can define a norm on $\mathcal{S}(\R^n)$ via$\|\phi\|_{k} ~=~ \max_{|\alpha| + |\beta| \leq k} \sup_{x \in \R^n} \left| x^\alpha \partial^\beta \phi(x)\right|, \qquad k \ge 1.$
The Schwartz space is a Fréchet space (that is, a complete metrizable locally convex space). Because the Fourier transform changes $\partial^\alpha$ into multiplication by $x^\alpha$ and vice versa, this symmetry implies that the Fourier transform of a Schwartz function is also a Schwartz function.
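A standard worked illustration of this symmetry (using the normalized, ordinary-frequency convention that appears later in this article): the Gaussian $\phi(x) = e^{-\pi |x|^2}$ belongs to $\mathcal{S}(\R^n),$ since every derivative is a polynomial times $e^{-\pi |x|^2}$ and hence every seminorm $p_{\alpha,\beta}(\phi)$ is finite, and it is its own Fourier transform:$F\phi(\xi) = \int_{\R^n} e^{-\pi |x|^2} e^{-2\pi i x \cdot \xi}\, dx = e^{-\pi |\xi|^2}.$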
A sequence $\{f_i\}$ in $\mathcal{S}(\R^n)$ converges to 0 in $\mathcal{S}(\R^n)$ if and only if the functions $(1 + |x|)^k (\partial^p f_i)(x)$ converge to 0 uniformly in the whole of $\R^n,$ which implies that such a sequence must converge to zero in $C^{\infty}(\R^n).$
$\mathcal{D}(\R^n)$ is dense in $\mathcal{S}(\R^n).$ The subset of all analytic Schwartz functions is dense in $\mathcal{S}(\R^n)$ as well.
The Schwartz space is nuclear and the tensor product of two maps induces a canonical surjective TVS-isomorphism$\mathcal{S}(\R^m) \ \widehat{\otimes}\ \mathcal{S}(\R^n) \to \mathcal{S}(\R^{m+n}),$where $\widehat{\otimes}$ represents the completion of the injective tensor product (which in this case is identical to the completion of the projective tensor product).
Tempered distributions
The inclusion map $\operatorname{In} : \mathcal{D}(\R^n) \to \mathcal{S}(\R^n)$ is a continuous injection whose image is dense in its codomain, so the transpose ${}^{t}\operatorname{In} : (\mathcal{S}(\R^n))'_b \to \mathcal{D}'(\R^n)$ is also a continuous injection. Thus, the image of the transpose map, denoted by $\mathcal{S}'(\R^n),$ forms a space of distributions.
The space $\mathcal{S}'(\R^n)$ is called the space of tempered distributions. It is the continuous dual space of the Schwartz space. Equivalently, a distribution $T$ is a tempered distribution if and only if$\left(\text{for all } \alpha, \beta \in \N^n: \lim_{m \to \infty} p_{\alpha, \beta}(\phi_m) = 0 \right) \Longrightarrow \lim_{m \to \infty} T(\phi_m)=0.$
Functions in $L^p(\R^n)$ for $p \geq 1$ are tempered distributions.
The tempered distributions can also be characterized as slowly growing, meaning that each derivative of $T$ grows at most as fast as some polynomial. This characterization is dual to the behaviour of the derivatives of a function in the Schwartz space, where each derivative of $\phi$ decays faster than every inverse power of $|x|.$ An example of a rapidly falling function is $|x|^n \exp(-\lambda |x|^\beta)$ for any positive $n, \lambda, \beta.$
Fourier transform
The ordinary continuous Fourier transform $F : \mathcal{S}(\R^n) \to \mathcal{S}(\R^n)$ is a TVS-automorphism of the Schwartz space, and the Fourier transform of a tempered distribution is defined to be its transpose ${}^{t}F : \mathcal{S}'(\R^n) \to \mathcal{S}'(\R^n),$ which (abusing notation) will again be denoted by $F.$ So the Fourier transform of the tempered distribution $T$ is defined by $(FT)(\psi) = T(F\psi)$ for every Schwartz function $\psi.$ $FT$ is thus again a tempered distribution. The Fourier transform is a TVS isomorphism from the space of tempered distributions onto itself. This operation is compatible with differentiation in the sense that$F \dfrac{dT}{dx} = ixFT$and also with convolution: if $T$ is a tempered distribution and $\psi$ is a slowly increasing smooth function on $\R^n,$ then $\psi T$ is again a tempered distribution and$F(\psi T) = F \psi * FT$is the convolution of $FT$ and $F\psi.$ In particular, the Fourier transform of the constant function equal to 1 is the $\delta$ distribution.
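A discrete analogue of this last statement can be seen with the DFT (a numerical sketch, not the distributional statement itself): the transform of the constant sequence is concentrated entirely at frequency zero.

```python
# DFT of the constant sequence 1: a single spike at frequency 0, the discrete
# counterpart of "the Fourier transform of 1 is (a multiple of) delta".
import numpy as np

N = 16
spectrum = np.fft.fft(np.ones(N))
print(np.round(spectrum.real, 6))  # [16. 0. 0. ... 0.]
```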
Expressing tempered distributions as sums of derivatives
If $T \in \mathcal{S}'(\R^n)$ is a tempered distribution, then there exists a constant $C > 0$ and positive integers $M$ and $N$ such that for all Schwartz functions $\phi \in \mathcal{S}(\R^n)$$|\langle T, \phi \rangle| \le C\sum_{|\alpha| \le N,\, |\beta| \le M} \sup_{x \in \R^n} \left|x^\alpha \partial^\beta \phi(x) \right| = C\sum_{|\alpha| \le N,\, |\beta| \le M} p_{\alpha, \beta}(\phi).$
This estimate, along with some techniques from functional analysis, can be used to show that there is a continuous slowly increasing function $F$ and a multi-index $\alpha$ such that$T = \partial^\alpha F.$
Restriction of distributions to compact sets
If $T \in \mathcal{D}'(\R^n),$ then for any compact set $K \subseteq \R^n,$ there exists a continuous function $F$ compactly supported in $\R^n$ (possibly on a larger set than $K$ itself) and a multi-index $\alpha$ such that $T = \partial^\alpha F$ on $C_c^{\infty}(K).$
Using holomorphic functions as test functions
The success of the theory led to investigation of the idea of hyperfunction, in which spaces of holomorphic functions are used as test functions. A refined theory has been developed, in particular Mikio Sato's algebraic analysis, using sheaf theory and several complex variables. This extends the range of symbolic methods that can be made into rigorous mathematics, for example Feynman integrals.
Bibliography
• Book: Barros-Neto, José. An Introduction to the Theory of Distributions. Dekker. New York, NY. 1973.
• Book: Folland, G.B.. Harmonic Analysis in Phase Space. Princeton University Press. Princeton, NJ. 1989.
• Book: Friedlander. F.G.. Joshi. M.S.. Introduction to the Theory of Distributions. Cambridge University Press. Cambridge, UK. 1998. .
• Book: Petersen, Bent E.. Introduction to the Fourier Transform and Pseudo-Differential Operators. Pitman Publishing. Boston, MA. 1983. .
• Book: Woodward, P.M.. Philip_Woodward. Probability and Information Theory with Applications to Radar. Pergamon Press. Oxford, UK. 1953.
• M. J. Lighthill (1959). Introduction to Fourier Analysis and Generalised Functions. Cambridge University Press. (requires very little knowledge of analysis; defines distributions as limits of sequences of functions under integrals)
• V.S. Vladimirov (2002). Methods of the theory of generalized functions. Taylor & Francis.
Notes and References
1. The image of the compact set $K$ under a continuous $\R$-valued map (for example, $x \mapsto \left|\partial^p f(x)\right|$ for $x \in U$) is itself a compact, and thus bounded, subset of $\R.$ If $K \neq \varnothing$ then this implies that each of the functions defined above is $\R$-valued (that is, none of the supremums above are ever equal to $\infty$).
2. Exactly as with $C^k(K; U),$ the space $C^k(K; U')$ is defined to be the vector subspace of $C^k(U')$ consisting of maps with support contained in $K,$ endowed with the subspace topology it inherits from $C^k(U').$
3. Even though the topology of $C_c^{\infty}(U)$ is not metrizable, a linear functional on $C_c^{\infty}(U)$ is continuous if and only if it is sequentially continuous.
4. A null sequence is a sequence that converges to the origin.
5. If $\mathcal{P}$ is also directed under the usual function comparison then we can take the finite collection to consist of a single element.
6. See for example .
7. This approach works for non-linear mappings as well, provided they are assumed to be uniformly continuous.
8. .
9. For example, let $U = \R$ and take $P$ to be the ordinary derivative for functions of one real variable and assume the support of $\phi$ to be contained in the finite interval $(a, b);$ then since $\operatorname{supp}(\phi) \subseteq (a, b),$$\begin{aligned}\int_\R \phi'(x)f(x)\,dx &= \int_a^b \phi'(x)f(x) \,dx \\&= \phi(x)f(x)\big\vert_a^b - \int_a^b f'(x) \phi(x) \,d x \\&= \phi(b)f(b) - \phi(a)f(a) - \int_a^b f'(x) \phi(x) \,d x \\&= -\int_a^b f'(x) \phi(x) \,d x\end{aligned}$$where the last equality is because $\phi(a) = \phi(b) = 0.$
10. Lyons, T. (1998). "Differential equations driven by rough signals". Revista Matemática Iberoamericana, 215–310. doi:10.4171/RMI/240.
11. Hairer, Martin (2014). "A theory of regularity structures". Inventiones Mathematicae 198 (2): 269–504. doi:10.1007/s00222-014-0505-4. arXiv:1303.5113.
12. See for example .
13. See .
14. See for example .
15. proves the uniqueness of such an extension.
16. See for instance and .
17. Book: Folland, G.B.. Harmonic Analysis in Phase Space. Princeton University Press. Princeton, NJ. 1989.
18. Book: Horváth, John. John Horvath (mathematician). Topological Vector Spaces and Distributions. Addison-Wesley Publishing Company. Reading, MA. 1966.
19. Book: Barros-Neto, José. An Introduction to the Theory of Distributions. Dekker. New York, NY. 1973.
20. Book: Petersen, Bent E.. Introduction to the Fourier Transform and Pseudo-Differential Operators. Pitman Publishing. Boston, MA. 1983.
21. Book: Woodward, P.M.. Probability and Information Theory with Applications to Radar. Pergamon Press. Oxford, UK. 1953.
22. Book: Friedlander. F.G.. Joshi. M.S.. Introduction to the Theory of Distributions. Cambridge University Press. Cambridge, UK. 1998.
https://wiki.formulae.org/mediawiki/index.php?title=Quantum_computing&diff=prev&oldid=3966
# Quantum computing
This page is under construction
This page is a tutorial on performing (simulated) quantum computing in Fōrmulæ.
Fōrmulæ has a quantum computing package; in this section you will learn how to use it.
## Introduction
### Deterministic algorithms
Most algorithms are deterministic. It means that they follow a precise and well defined set of steps. If a given input is introduced to a deterministic program multiple times, it will always perform exactly the same set of instructions, and it will always generate the same output.
### Probabilistic algorithms
There are several kinds of non-deterministic algorithms. The flavor we are interested in is named probabilistic algorithms.
Suppose we have to calculate the area of the following shape:
Since it is an irregular shape, we cannot use the well known formulae for regular shapes, such as circles, polygons, etc.
One form of calculation consists in drawing a square around the shape, and selecting a number of randomly chosen points inside the square, as follows:
Several points lie in the blue area, while several others do not. Let us calculate the ratio of the points that lie in the blue area to the total number of points. The area of the blue shape is approximately that ratio multiplied by the area of the enclosing square.
Of course it is an approximation, but the approximation gets closer to the real area as we use more points.
It is an example of a probabilistic algorithm. Because it uses random numbers, results can differ even if we use the same input (the same shape, the same number of points, the same enclosing square) several times.
There are several probabilistic algorithms; the Monte Carlo method is a good example. A minimal sketch of the idea is shown below.
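The following Python sketch (not Fōrmulæ; the shape is assumed here to be a disk of radius 1, so the exact area is π) illustrates the point-counting estimate:

```python
import random

def estimate_area(num_points: int) -> float:
    """Estimate the area of the unit disk inside the square [-1, 1] x [-1, 1]."""
    inside = 0
    for _ in range(num_points):
        x, y = random.uniform(-1, 1), random.uniform(-1, 1)
        if x * x + y * y <= 1:      # the point lies in the "blue" shape
            inside += 1
    square_area = 4.0                # area of the enclosing square
    return square_area * inside / num_points

print(estimate_area(100_000))        # close to pi; improves with more points
```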
In XXXX, XXX showed a probabilistic algorithm that runs faster than any known deterministic algorithm.
### Quantum mechanics
We do not intend to explain all the theory of quantum mechanics; we only discuss the concepts that we are going to use, with (hopefully) simple examples.
You have surely heard about the Schrödinger's cat paradox: there is a closed box, containing a cat and a mechanism that, after a specific period of time (let us say, an hour), with a 50% probability, will release a poison, killing the cat. Quantum mechanics states that after an hour, if we do not open the box to see the result, inside the box there is a ghostly, mixed state of both an alive and a dead cat. The name of this phenomenon is quantum superposition.
Once we decide to open the box and observe, the quantum superposition collapses to a state of an alive cat or a dead cat.
A superposition state can also become linked to another system. For example, suppose that before the box experiment we have opened a hole in the box and attached a video camera to record the activity inside the box. Then we perform the experiment. After an hour, we take the cassette or memory card from the video camera (without detaching it from the box, and without opening the box). According to quantum mechanics, the cassette or memory does not contain a recording of an alive or a dead cat, but a superposition of both. The superposition of the box is linked to the superposition of the recording. This phenomenon is called quantum entanglement.
If we perform an observation on a superposition state, it will not only collapse to a defined state, it will also collapse all its entangled states to a consistent state. In our example, if we decide to watch the cassette or memory, it will collapse and we will see a recording of either an alive or a dead cat. If we later open the box, we will see the same result as in the recording.
### Quantum computing
In order to differentiate terms, we use the word classical to refer to the traditional theories, so it is common to say classical computer for a traditional computer, or quantum algorithm for an algorithm that uses quantum elements.
In classical computers, the minimal unit of information is a bit. It is able to store a value of either 0 (zero) or 1.
In quantum computers, the minimal unit of information is a qubit (a quantum bit). It is able to store a zero value, usually represented as $|0\rangle$, a one value, usually represented as $|1\rangle$, or a superposition of both states. More specifically, a qubit has an associated probability. What is this probability referred to? It is the probability of the qubit collapsing to 1 once it is observed (or measured). So, a qubit with a probability of 0% will always collapse to 0 when observed, a qubit with a probability of 100% will always collapse to 1 when observed, and a qubit with a probability of 25% will collapse to 1 a quarter of the time and to 0 three quarters of the time when observed (a small simulation sketch is shown below).
So, when qubits collapse, they become classical bits, in the same way that once we perform the Schrödinger's cat experiment and make an observation, it collapses to a well defined state (an alive cat, or a dead cat) and the superposition disappears forever.
We can also create entangled qubits. Once we measure a qubit it collapses to a classical bit, and its entangled qubits also collapse to classical bits, in a consistent state.
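A minimal Python sketch of this measurement behaviour (an illustration, not Fōrmulæ; a qubit is reduced here to its probability of collapsing to 1):

```python
import random

def measure(prob_of_one: float) -> int:
    """Collapse a qubit with the given probability of yielding the bit 1."""
    return 1 if random.random() < prob_of_one else 0

# A qubit with probability 25%: roughly a quarter of the measurements give 1.
results = [measure(0.25) for _ in range(10_000)]
print(sum(results) / len(results))   # close to 0.25
```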
### Construction of actual bits and qubits
Bits and qubits are abstractions, in order to perform actual classical/quantum computation, physical realization of bits/qubits need to be physically built.
Physical realizations of bits are usually done with semiconductors.
Realization of qubits is not easy and it is currently in development; several materials are being tested. The main problems are the isolation of the elements, in order to prevent interaction with other materials, and the effect of spontaneous collapse of superposition quantum states. At the time of writing this article, quantum computers are very expensive and they exist only in very specialized laboratories.
### What can quantum computing be used for ?
On XXXX, XXX proved that quantum computers can perform any task that classical computers can. Further, it was proved that using a quantum computer to perform a deterministic algorithm will not be faster than with a classical computer.
However, probabilistic algorithms are the land where quantum computing can grow and flourish. After all, qubits work probabilistically, as probabilistic algorithms do.
### Quantum oracles
Because quantum computing has no advantage for deterministic algorithms, the usual way to solve a problem is to separate its deterministic part and run it on a classical computer, cheap and accessible, and leave the probabilistic part to a quantum computer. This scheme is called a hybrid architecture. In this architecture, the quantum computer is usually named a quantum oracle.
Since the probabilistic part is called multiple times (as in the area-calculation example), the classical part usually performs the preparation and the loop of quantum-oracle invocations, so the invocation is performed many times.
### Current model of quantum computing
How is the task to be performed by a quantum oracle specified?
It is defined as required by the current model of quantum computation: by a quantum circuit.
The following is an example of a quantum circuit:
We can observe:
• The circuit is like a digital circuit; it consists of one or several wires. In quantum circuits they are not physical wires; each one represents the timeline, from left to right, of a qubit. This is also called the qubit's evolution.
• The initial values of the qubits are at the left.
• The circuit contains quantum gates, which alter the state of a qubit, or let two or more qubits interact with each other.
• There can be measurement operations.
• After (to the right of) a measurement operation on a given qubit there cannot be any quantum gate operating on it, because after the measurement the qubit will have collapsed to a bit.
### Simulation of quantum computers
Quantum computers are expensive at the moment of writing this, so simulating them is a good option.
Simulation of a quantum computer requires much calculation. The amount of computation grows exponentially with the size of the quantum circuit (the number of qubits) and with the number of quantum gates on it.
On the other hand, simulating (relatively small) quantum circuits is good for learning, teaching and experimentation.
## Quantum computing in Fōrmulæ
### Creation of quantum circuits and gates
#### Creation of a quantum circuit
To create a quantum circuit, select the expression Programming.Quantum.Circuit. Because the number of qubits is required, it will be asked for:
After entering the number, the circuit is like the following:
The vertical rectangle is a placeholder for quantum gates. You can add more placeholders as you wish by using the INS key:
#### Addition of quantum gates
Note. This article does not provide any explanation of how quantum gates work; if you are interested, please consult specialized literature.
The following quantum gates can be added to a quantum circuit:
| Quantum gate | Expression | Parameters |
|---|---|---|
| Pauli X (or NOT) | Programming.Quantum.gate.PauliX | The index of the qubit in the circuit[note 1] |
| Pauli Y | Programming.Quantum.Gate.PauliY | The index of the qubit in the circuit[note 1] |
| Pauli Z | Programming.Quantum.Gate.PauliZ | The index of the qubit in the circuit[note 1] |
| Hadamard | Programming.Quantum.Gate.Hadamard | The index of the qubit in the circuit[note 1] |
| Square root of NOT | Programming.Quantum.Gate.SqrtNot | The index of the qubit in the circuit[note 1] |
| Phase shift | Programming.Quantum.Gate.PhaseShift | The index of the qubit in the circuit[note 1]; the value of the numerator[note 2]; the value of the denominator[note 2] |
| S | Programming.Quantum.Gate.S | The index of the qubit in the circuit[note 1] |
| T | Programming.Quantum.Gate.T | The index of the qubit in the circuit[note 1] |
| Swap | Programming.Quantum.Gate.Swap | The indexes of the swapping qubits in the circuit[note 1] |
| Controlling | Programming.Quantum.Gate.Controlling | The index of the controlling qubit in the circuit[note 1]; the controlled gate[note 3] |
| Measurement[note 4] | Programming.Quantum.Measurement | The index of the qubit in the circuit[note 1] |
1. Qubits are numbered downwards, so the first qubit is the topmost, and its index is 1, not 0.
2. The result is a phase shift of .
3. The controlled gate is given as the unique child expression of the controlling expression.
4. The measurement operation is not a quantum gate, but structurally it is taken as it were.
Speaking in expressions, a quantum circuit is an expression containing quantum gates as its subexpressions. For example, the following quantum circuit:
In expressions is:
##### Controlled gates
There are several other quantum gates, but they can be created using a special kind of gate, a controlled quantum gate. A controlled gate is a quantum gate (such as the ones we have discussed so far), called the controlled gate, that works depending on the state of a specific qubit, called the controlling qubit.
As expressions, a quantum controlling gate is an expression that contains one subexpression: the controlled gate.
Because a controlling gate can contain any kind of gate (excluding a measurement), it can also contain another controlling gate, so it is possible to create a gate consisting of multiple controlling gates in cascade.
See the following examples:
Gate Visual representation Expression representation
Controlled NOT (also CNOT)
Toffoli (also CCNOT)
Fredkin (also CSWAP)
### Execution of a quantum circuit
To invoke the quantum oracle, we use the Quantum.Programming.ExecuteCircuit expression. It has the following characteristics:
• It takes as parameters the quantum circuit and an array of the input qubits.
• If the circuit contains any qubit wire with no measurement operator, it is treated as if it had an implicit one at the end (right). It means that every qubit will eventually be (explicitly or implicitly) measured.
• It returns an array of bits (not qubits, because of the last point: all the qubits will be measured).
The simplest quantum circuit is a 1-qubit circuit with no gates. Let us use as input the qubit $|0\rangle$:
Remember that if a qubit wire does not contain a measurement operator, it contains an implicit one at the end, so, from the point of view of the ExecuteCircuit expression, the circuit is equivalent to:
So the qubit is immediately measured. Because any qubit, when measured, will collapse to a 0-bit or a 1-bit, and this qubit has an associated probability of 0% to collapse to a 1-bit, it will necessarily collapse to the 0-bit, which is the result retrieved.
It is common to invoke the quantum oracle many times.
Let us create a function that takes a circuit, the input qubits, and a number of calls. It will return a chart of the results:
Let us use our function with the previous example, using 100 calls:
Even if we call the quantum oracle many times, we always get a 0-bit, because every time we have a qubit with a 0% probability of collapsing to a 1-bit.
Now, we can use more complex circuits. Let us start by making a simple change: we can introduce a NOT gate. This gate will change the probability of the input qubit from 0% to 100% (of collapsing to a 1-bit). The result, of course, will always be a 1-bit.
#### Creating a superposition
The Hadamard quantum gate is the most commonly used gate to create a superposition. It changes the 0% probability of a qubit $|0\rangle$, or the 100% probability of a qubit $|1\rangle$, to 50%. The result is a qubit in a superposition of the states $|0\rangle$ and $|1\rangle$. It means that if we had several qubits in such a state, and they were all measured, approximately half of them would collapse to a 0-bit, and the other half would collapse to a 1-bit (a small simulation is sketched below):
This is an example. If the exercise were run again we might get 50-50 or 51-49, because it is now a probabilistic program.
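A minimal Python sketch of this experiment (an illustration, not Fōrmulæ; the qubit is represented by its two amplitudes):

```python
import math
import random

H = [[1 / math.sqrt(2),  1 / math.sqrt(2)],
     [1 / math.sqrt(2), -1 / math.sqrt(2)]]   # Hadamard gate

def apply(gate, state):
    return [gate[0][0] * state[0] + gate[0][1] * state[1],
            gate[1][0] * state[0] + gate[1][1] * state[1]]

def measure(state):
    prob_one = abs(state[1]) ** 2             # probability of collapsing to 1
    return 1 if random.random() < prob_one else 0

counts = {0: 0, 1: 0}
for _ in range(1000):
    state = apply(H, [1.0, 0.0])              # Hadamard applied to |0>
    counts[measure(state)] += 1

print(counts)                                 # roughly {0: 500, 1: 500}
```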
#### Creating independent qubits
Let us examine the following exercise, which uses a two-qubit circuit:
### A very simple quantum program
Input bits are A and B, giving the sum S with carry out C.
https://www.yourdictionary.com/dissolved
#### Sentence Examples
• Dr Maitland (essay on" The Universal Ordinary ") thinks, but without very much foundation, that great numbers especially of the more important causes were tried before these delegates; although the records have largely perished, since they were the records of courts ' which were dissolved as soon as their single cause had been decided.
• Iodophenol is obtained by the action of iodine and iodic acid on phenol dissolved in a dilute solution of caustic potash.
• The residue is dissolved in alcohol and to the cold saturated solution a cold alcoholic solution of picric acid is added.
• The cobaltous salts are formed when the metal, cobaltous oxide, hydroxide or carbonate, are dissolved in acids, or, in the case of the insoluble salts, by precipitation.
• Submerged leaves are usually filamentous or narrowly ribbonshaped, thus exposing a large amount of surface to the water, some of the dissolved gases of which they must absorb, and into which they must also excrete certain gases.
https://socratic.org/questions/how-do-you-factor-15x-3-21x-2-20x-28
# How do you factor 15x^3-21x^2+20x-28?
$\left(5 x - 7\right) \left(3 {x}^{2} + 4\right)$
Factor by grouping: $15 {x}^{3} - 21 {x}^{2} + 20 x - 28 = 3 {x}^{2} \left(5 x - 7\right) + 4 \left(5 x - 7\right) = \left(5 x - 7\right) \left(3 {x}^{2} + 4\right).$
https://astronomy.stackexchange.com/questions/39851/did-mercury-clear-its-neighborhood?noredirect=1
# Did Mercury clear its neighborhood?
For a body to qualify as a planet according to the IAU definition it must have "cleared its neighborhood". What evidence is there Mercury indeed cleared its neighborhood? Perhaps it migrated there afterwards, when the neighborhood had already been cleared. Does the Grand Tack hypothesis impact our definition of the inner planets as planets?
https://en.m.wikipedia.org/wiki/IAU_definition_of_planet
The present definition of a planet is vulnerable, as it seems tied to a model of the formation of the solar system. The answer below states that in practice an operational definition is used, which I believe is adequate.
• Can you explain that the "Grand Tack Hypothesis" is? – fasterthanlight Nov 15 '20 at 13:17
• If we take the 2006 definition literally (which seemingly noone does) no planet 'cleared its neighbourhood'. en.wikipedia.org/wiki/List_of_Mercury-crossing_minor_planets – John Nov 15 '20 at 13:59
• That's because the IAU are scientists not lawyers. – James K Nov 15 '20 at 16:38
• @JamesK Scientists could have easily come up with a definition that would exclude Pluto, include 8 planets, and not state anything different or contradictory. – John Nov 15 '20 at 17:02
• I know. They did. – James K Nov 15 '20 at 17:23
• -1 because all the "we call" and "we don't look" language is unsupported with authoritative sources. In Stack Exchange we supply information, we do not generate it ourselves. We are not the authority here. We are merely a service provider. Currently there is no way to tell if this is authoritative and correct or purely guesswork because you don't cite any supporting sources. This is Stack Exchange not Quora. – uhoh Nov 17 '20 at 3:41
https://socratic.org/questions/how-do-you-divide-2x-3-9x-2-9x-1-2x-3-using-polynomial-long-division
# How do you divide (2x^3+9x^2-9x+1) / (2x-3) using polynomial long division?
$\frac{2 {x}^{3} + 9 {x}^{2} - 9 x + 1}{2 x - 3} = {x}^{2} + 6 x + \frac{9}{2} + \frac{\frac{29}{2}}{2 x - 3}$
#### Explanation:
We divide by the long division method:
Divide $2 {x}^{3}$ by $2 x$ to get ${x}^{2}$; multiply ${x}^{2} \left(2 x - 3\right) = 2 {x}^{3} - 3 {x}^{2}$ and subtract, leaving $12 {x}^{2} - 9 x + 1$.
Divide $12 {x}^{2}$ by $2 x$ to get $6 x$; multiply $6 x \left(2 x - 3\right) = 12 {x}^{2} - 18 x$ and subtract, leaving $9 x + 1$.
Divide $9 x$ by $2 x$ to get $\frac{9}{2}$; multiply $\frac{9}{2} \left(2 x - 3\right) = 9 x - \frac{27}{2}$ and subtract, leaving the remainder $\frac{29}{2}$.
So the quotient is ${x}^{2} + 6 x + \frac{9}{2}$ with remainder $\frac{29}{2}$.
The result is
$\frac{2 {x}^{3} + 9 {x}^{2} - 9 x + 1}{2 x - 3} = {x}^{2} + 6 x + \frac{9}{2} + \frac{\frac{29}{2}}{2 x - 3}$
Checking:
$\text{Divisor} \times \text{Quotient} + \text{Remainder} = \text{Dividend}$
$\left(2 x - 3\right) \left({x}^{2} + 6 x + \frac{9}{2}\right) + \frac{29}{2} = 2 {x}^{3} + 12 {x}^{2} + 9 x - 3 {x}^{2} - 18 x - \frac{27}{2} + \frac{29}{2}$
$\left(2 x - 3\right) \left({x}^{2} + 6 x + \frac{9}{2}\right) + \frac{29}{2} = 2 {x}^{3} + 9 {x}^{2} - 9 x - \frac{27}{2} + \frac{29}{2}$
$\left(2 x - 3\right) \left({x}^{2} + 6 x + \frac{9}{2}\right) + \frac{29}{2} = 2 {x}^{3} + 9 {x}^{2} - 9 x + \frac{2}{2}$
$\left(2 x - 3\right) \left({x}^{2} + 6 x + \frac{9}{2}\right) + \frac{29}{2} = 2 {x}^{3} + 9 {x}^{2} - 9 x + 1$
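For readers who want to verify such divisions mechanically, here is a small Python sketch (my own addition, not part of the original answer) that carries out the same long-division steps on coefficient lists; the function name and coefficient ordering are arbitrary choices.

```python
# Polynomial long division on coefficient lists (highest degree first).
def poly_divmod(dividend, divisor):
    """Return (quotient, remainder) coefficient lists, highest degree first."""
    work = list(dividend)
    q_len = len(dividend) - len(divisor) + 1
    for i in range(q_len):
        coef = work[i] / divisor[0]          # next quotient coefficient
        work[i] = coef
        for j in range(1, len(divisor)):     # subtract coef * divisor from the tail
            work[i + j] -= coef * divisor[j]
    return work[:q_len], work[q_len:]

quotient, remainder = poly_divmod([2, 9, -9, 1], [2, -3])
print(quotient)    # [1.0, 6.0, 4.5]  ->  x^2 + 6x + 9/2
print(remainder)   # [14.5]           ->  29/2
```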
God bless....I hope the explanation is useful.
|
2020-03-30 09:22:02
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 15, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.46917009353637695, "perplexity": 3521.4909254778154}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-16/segments/1585370496901.28/warc/CC-MAIN-20200330085157-20200330115157-00545.warc.gz"}
|
http://tex.stackexchange.com/questions/169579/big-square-parentheses-with-subscript
|
# big square parentheses with subscript
what I want to do is:
what I have done is:
\begin{align}
\tag*{(1)} & v_t(\mathrm{\textbf{K}})= \mathbb{E}\left[_{0\leq x_j\leq D_{jt},j \in J} \max_{\mathbf x \in \mathcal J(\mathbf K)}\right]
\end{align}
Output
Does anyone have an idea how to code the first part? I guess I am having trouble with the subscript of the square parenthesis. I would really appreciate any help. Thanks a lot.
I would second the advice in daleif's answer: specifically, using \substack and using \Biggl, \Biggr, \biggl, \biggr, etc. when appropriate to help make the expression easier to read. (I would add: using the spacing commands \!, \,, \:, and \; to improve the space to keep things from being too cluttered or too offset.) The following example is meant to suggest useful practises in (a) typesetting such expressions, and (b) formatting mathematics to keep it easy to read.
\documentclass{article}
\usepackage{amsmath, amssymb, amsthm, amsfonts}
\renewcommand\vec[1]{\mathbf{#1}}
\newcommand\cJ{\mathcal{J}}
\newcommand\bE{\mathbb{E}}
\renewcommand\le{\leqslant}
\begin{document}
$$v_t(\vec{K}) = \bE\Biggl[\, \max_{\substack{ \vec{x} \in \cJ(\vec{K}) \\ 0 \le x_j \le D_{jt} \forall j \in J }} \;\, \sum_{j \in J} r_j x_j + v_{t-1} \Biggl( \biggl\{ K_f - \sum_{j \in J_f} x_j \biggr\}_{\!\!f \in F} \Biggr) \Biggr]$$
\end{document}
Thanks a lot, my apologies about the wrong title and description. – stefy buri Apr 4 at 16:52
Did you try with \biggl (or \biggr) all around? Possibly adding some space after the opening bracket. – egreg Apr 4 at 16:53
@egreg: I didn't actually. With the \max there I would prefer myself to have the huge brackets; though I would personally choose to avoid having an expectation of a maximum with so many conditions. – Niel de Beaudrap Apr 4 at 16:57
@NieldeBeaudrap I tried; there's no reason for the brackets to fully enclose the subscript to \max. The white space at the top is surely worse. – egreg Apr 4 at 17:04
@egreg: After some consideration I've edited the answer, because while I'm not convinced that the result with \Biggl and \Biggr in the outer parens looks much better, it certainly does better illustrate the point of the answer: how to pick and choose your delimiter sizes. – Niel de Beaudrap Apr 7 at 17:54
Several problems
1. No need for \tag, one should never manually number equations; let LaTeX do its thing
2. Don't use \left...\right excessively as in the example, it makes it much harder to read; use manual scaling, i.e. \big, \Big or \bigg (there is one level more)
3. That is not a subscript to the [; that is a two level limit to max, typeset via \max_{\substack{limit 1 \\ limit 2}}
4. Next time please post a full minimal example including preamble, easier for us to copy'n'paste, much more likely you will get help
It might be an idea to read the manual for amsmath, you will find many interesting things.
I agree with daleif's answer that this is not a subscript to the left square bracket but a second subscript line for \max.
The following example also plays with the sizes of the fences until the size of the subscripts are ignored for the fences in the last equation:
\documentclass{article}
\usepackage{amsmath}
\usepackage{amssymb}
\usepackage{mleftright}
\begin{document}
\begin{gather}
v_t(\mathbf K) = \mathbb E
\left[\,
\max_{\substack{\mathbf x \in \mathcal J (\mathbf K)\\
0 \leq x_j \leq D_{jt},\,
j \in J}}
\,\sum_{j \in J}
r_j x_j + v_{t-1}
\mleft(
\Biggl\{ K_f - \sum_{j \in J_f} x_j \Biggr\}_{\!j \in F}
\mright)
\right]
\\
v_t(\mathbf K) = \mathbb E
\left[
\,
\max_{\substack{\mathbf x \in \mathcal J (\mathbf K)\\
0 \leq x_j \leq D_{jt},\,
j \in J}}
\;
\sum_{j \in J}
r_j x_j + v_{t-1}
\Biggl(
\biggl\{ K_f - \sum_{j \in J_f} x_j \biggr\}_{\!j \in F}
\Biggr)
\right]
\\
v_t(\mathbf K) = \mathbb E
\left[
\,
\smash{
\max_{\substack{\mathbf x \in \mathcal J (\mathbf K)\\
0 \leq x_j \leq D_{jt},\,
j \in J}}
\;
\sum_{j \in J}
r_j x_j + v_{t-1}
\mleft(
\smash{
\mleft\{
\smash{K_f - \sum_{j \in J_f} x_j}
\vphantom{\sum}
\mright\}
_{\!j \in F}
}
\vphantom{\mleft\{\sum\mright\}}
\mright)
}
\vphantom{\mleft\{\sum\mright\}}
\right]
\end{gather}
\end{document}
• I have added some spaces \, and \; for clarity.
• \mleft and \mright of package mleftright avoid additional horizontal spacing that is not needed for being a argument of v.
• \Biggl and \Biggr use a smaller set of braces than \left and \right would do. IMHO the formula looks nicer, because the braces do not need to cover the full subscript of the sum symbol.
• \! moves the subscript a little to the left of the curly closing brace.
• \smash sets the contents, but tells TeX that the height and depth are zero.
• \vphantom does not set its contents, but occupies the vertical space that would be needed by the contents.
• The vertical line, especially its height in the question's image is unclear to me.
• I would make the fences even smaller; there is no need for them to be in different sizes when they are of a different type. At the least, the curly and round ones ought to be smaller – daleif Apr 4 at 18:01
why use gather when there's only one line? equation would be more appropriate. – barbara beeton Apr 4 at 18:05
@barbarabeeton: Easier to extend it to three equations as in the last edit. – Heiko Oberdiek Apr 4 at 18:18
@daleif: I have added two more variants. The latest even ignores the sizes of the subscripts for the fence sizes. – Heiko Oberdiek Apr 4 at 18:19
Nice one, this also show quite clearly how much space is wasted by excessive fence scaling – daleif Apr 4 at 18:27
I'd be for avoiding \left and \right here, using, as others have shown, \substack:
\documentclass{article}
\usepackage{amsmath, amssymb, amsthm, amsfonts}
\renewcommand\vec[1]{\mathbf{#1}}
\newcommand\cJ{\mathcal{J}}
\newcommand\bE{\mathbb{E}}
\begin{document}
$$v_t(\vec{K}) = \bE\biggl[\, \max_{\substack{ \vec{x} \in \cJ(\vec{K}) \\ 0 \le x_j \le D_{jt},\, j \in J }} \, \sum_{j \in J} r_j x_j + v_{t-1} \biggl( \biggl\{ K_f - \sum_{j \in J_f} x_j \biggr\}_{\!f \in F} \biggr) \biggr]$$
\end{document}
There's no real reason for the outer bracket to encompass the big subscript, taking into account the big white space that would result at the top.
Probably the problem with the subscript could be solved in another way, by extending the notation, say by setting
$\cJ(\vec{K},\vec{D}_t)=\{\,\vec{x}\in\cJ(\vec{K}):0 \le x_j \le D_{jt},\, j \in J\,\}$
so that the big formula becomes possibly clearer:
$$v_t(\vec{K}) = \bE\biggl[\, \max_{\vec{x} \in \cJ(\vec{K},\vec{D}_t)}\, \sum_{j \in J} r_j x_j + v_{t-1} \biggl( \biggl\{ K_f - \sum_{j \in J_f} x_j \biggr\}_{\!f \in F} \biggr) \biggr]$$
|
2014-09-22 10:33:54
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 1, "equation": 3, "x-ck12": 0, "texerror": 0, "math_score": 0.9987719058990479, "perplexity": 2770.3191853217377}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-41/segments/1410657136966.6/warc/CC-MAIN-20140914011216-00255-ip-10-234-18-248.ec2.internal.warc.gz"}
|
http://www.chegg.com/homework-help/questions-and-answers/introductionyou-will-assume-that-you-still-work-as-a-financial-analyst-for-airjet-best-par-q1735879
|
## need by tomorrow
Introduction
You will assume that you still work as a financial analyst for AirJet Best Parts, Inc. The company is considering a capital investment in a new machine and you are in charge of making a recommendation on the purchase based on (1) a given rate of return of 15% (Task 4) and (2) the firm’s cost of capital (Task 5).
Task 4. Capital Budgeting for a New Machine
A few months have now passed and AirJet Best Parts, Inc. is considering the purchase on a new machine that will increase the production of a special component significantly. The anticipated cash flows for the project are as follows:
Year 1: $1,100,000
Year 2: $1,450,000
Year 3: $1,300,000
Year 4: $950,000
You have now been tasked with providing a recommendation for the project based on the results of a Net Present Value Analysis, assuming that the required rate of return is 15% and the initial cost of the machine is $3,000,000.

1. What is the project's IRR? (10 pts)
2. What is the project's NPV? (15 pts)
3. Should the company accept this project and why (or why not)? (5 pts)
4. Explain how depreciation will affect the present value of the project. (10 pts)
5. Provide examples of at least one of the following as it relates to the project: (5 pts each)
   a. Sunk cost
   b. Opportunity cost
   c. Erosion
6. Explain how you would conduct a scenario and sensitivity analysis of the project. What would be some project-specific risks and market risks related to this project? (20 pts)

Task 5: Cost of Capital

AirJet Best Parts Inc. is now considering that the appropriate discount rate for the new machine should be the cost of capital, and would like to determine it. You will assist in the process of obtaining this rate.

1. Compute the cost of debt. Assume AirJet Best Parts Inc. is considering issuing new bonds. Select current bonds from one of the main competitors as a benchmark. Key competitors include Raytheon, Boeing, Lockheed Martin, and the Northrop Grumman Corporation.
   a. What is the YTM of the competitor's bond? You may use a number of sources, but we recommend Morningstar. Find the YTM of one 15 or 20 year bond with the highest possible creditworthiness. You may assume that new bonds issued by AirJet Best Parts, Inc. are of similar risk and will require the same return. (5 pts)
   b. What is the after-tax cost of debt if the tax rate is 34%? (5 pts)
   c. Explain what other methods you could have used to find the cost of debt for AirJet Best Parts Inc. (10 pts)
   d. Explain why you should use the YTM and not the coupon rate as the required return for debt. (5 pts)
2. Compute the cost of common equity using the CAPM model. For beta, use the average beta of three selected competitors. You may obtain the betas from Yahoo Finance. Assume the risk-free rate to be 3% and the market risk premium to be 4%.
   a. What is the cost of common equity? (5 pts)
   b. Explain the advantages and disadvantages of using the CAPM model as the method to compute the cost of common equity. Compare and contrast this method with the dividend growth model approach. (10 pts)
3. Compute the cost of preferred equity assuming the dividend paid for preferred stock is $2.93 and the current value of the stock is $50 per share.
a. What is the cost of preferred equity? (5 pts)
b. Is there any other method to compute this cost? Explain. (5 pts)
4. Assuming that the market value weights of these capital sources are 30% bonds, 60% common equity and 10% preferred equity, what is the weighted cost of capital of the firm? (10 pts)
5. Should the firm use this WACC for all projects? Explain and provide examples as appropriate. (10 pts)
6. Recompute the net present value of the project based on the cost of capital you found. Do you still believe that your earlier recommendation for accepting or rejecting the project was adequate? Why or why not? (5 pts)
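Not part of the assignment text, but as a quick illustration of the NPV and IRR calculations Task 4 asks for, here is a small Python sketch using the stated cash flows, the $3,000,000 initial cost, and the 15% required return; the bisection bracket and tolerance are arbitrary choices.

```python
# NPV and IRR for the Task 4 cash flows (illustrative sketch, not a graded answer).
cash_flows = [-3_000_000, 1_100_000, 1_450_000, 1_300_000, 950_000]  # years 0..4

def npv(rate, flows):
    """Discount each cash flow back to year 0 and sum."""
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(flows))

def irr(flows, lo=0.0, hi=1.0, tol=1e-7):
    """Rate at which NPV crosses zero, found by bisection on [lo, hi]."""
    while hi - lo > tol:
        mid = (lo + hi) / 2
        lo, hi = (mid, hi) if npv(mid, flows) > 0 else (lo, mid)
    return (lo + hi) / 2

print(round(npv(0.15, cash_flows), 2))  # positive NPV at the 15% required return
print(round(irr(cash_flows), 4))        # internal rate of return
```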
• Anonymous commented
Pl make separate post for each question
• Anonymous commented
Multiple question are not allowed. Pl post separately
|
2013-05-20 16:14:59
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.17792493104934692, "perplexity": 1526.9443757754814}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368699068791/warc/CC-MAIN-20130516101108-00080-ip-10-60-113-184.ec2.internal.warc.gz"}
|
http://math.stackexchange.com/questions/221561/euler-product-of-dirichlet-series
|
# Euler product of Dirichlet Series
For $n$ a positive integer, let $f(n)$ be the squarefree part of $n$.
Find the Euler product for $\mathfrak D_{f}(s)$ where $\mathfrak D_{f}(s)$ is the Dirichlet Series of $f$.
-
$$D_{f}(s)=\prod_{p}\sum_{k\geq 0}{\frac{f(p^{k})}{p^{ks}}}$$
Throughout this, I'm going to assume convergence of the various sums - I'll leave you to work out where everything is defined. Then, for a prime $p$, the squarefree part of $p^{k}$ is $f(p^{k})=1$ if $k$ is even, and $p$ if $k$ is odd. So, putting this into the formula, we get
$$D_{f}(s)=\prod_{p}\sum_{k\geq 0}\left(\frac{1}{p^{2ks}}+\frac{p}{p^{(2k+1)s}}\right)=\prod_{p}\left(\frac{1}{1-p^{-2s}}+\frac{p\,p^{-s}}{1-p^{-2s}}\right)=\prod_{p}\frac{p^{s}+p}{p^{s}(1-p^{-2s})}.$$
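As a numerical sanity check (my own addition, not part of the original answer), one can compare a truncated version of the Dirichlet series $\sum_n f(n)n^{-s}$ against the Euler product over small primes at a real value of $s$ large enough for rapid convergence; the cutoffs below are arbitrary.

```python
# Compare the Dirichlet series of the squarefree part f(n) with its Euler product.
from math import prod
from sympy import factorint, primerange

def squarefree_part(n):
    """Product of the primes that divide n to an odd power."""
    return prod(p for p, e in factorint(n).items() if e % 2 == 1)

s = 3.0
series = sum(squarefree_part(n) / n**s for n in range(1, 100_000))
euler = prod((p**s + p) / (p**s * (1 - p**(-2 * s))) for p in primerange(2, 10_000))
print(series, euler)  # the two values should agree to several decimal places
```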
|
2014-11-28 20:56:32
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9635509848594666, "perplexity": 58.684451995038906}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-49/segments/1416931010938.65/warc/CC-MAIN-20141125155650-00014-ip-10-235-23-156.ec2.internal.warc.gz"}
|
http://www.lancaster.ac.uk/sci-tech/about-us/people/yemon-choi
|
# Dr Yemon Choi
Lecturer in Pure Mathematics
## Research Interests
### The keyword version
Banach and operator algebras. Noncommutative harmonic analysis. Categorical and homological perspectives on functional analysis.
### The longer version
I am interested in a range of topics and problems on the interface between algebra and functional analysis. This tends to be driven by particular families of examples, but my preference is for finding general frameworks that unify common features of these examples, once one has investigated individual examples in depth. "Hacking out lots of estimates and then trying to exploit functorial behaviour" is not the most accurate description, nor the most catchy one, but it gives some idea of what I spend my research time doing.

In recent years I have worked intensively on various aspects of the Fourier algebras of locally compact groups: studying these objects involves a blend of (non-commutative) harmonic analysis and representation theory. Many of the developments over the last 20 years or so use the tools, or are guided by the philosophy, of the theory of operator spaces and completely bounded maps -- these may be thought of as enriched versions of Banach spaces and linear maps between them, where one allows matricial coefficients and not just scalar ones.
I am also interested in algebras of convolution operators arising from the canonical representation(s) of a group $G$ on the Banach space $L^p(G)$. When $p=2$ there is a rich theory available, from the world of C*-algebras and von Neumann algebras; for other values of $p$ many basic structural questions remain unresolved, even for very explicit examples such as $G=\mathrm{SL}_n(\mathbb{R})$ or $\mathrm{SL}_n(\mathbb{Z})$.
A gap theorem for the ZL-amenability constant of a finite group
Choi, Y. 12/2016 In: International Journal of Group Theory. 5, 4, p. 27-46. 20 p.
Journal article
Triviality of the generalized Lau product associated to a Banach algebra homomorphism
Choi, Y. 10/2016 In: Bulletin of the Australian Mathematical Society. 94, p. 286-289. 4 p.
Journal article
Realization of compact spaces as cb-Helson sets
Choi, Y. 02/2016 In: Annals of Functional Analysis. 7, 1, p. 158-169. 12 p.
Journal article
Extension of derivations, and Connes-amenability of the enveloping dual Banach algebra
Choi, Y., Samei, E., Stokke, R. 14/12/2015 In: Mathematica Scandinavica. 117, 2, p. 258-303. 46 p.
Journal article
Directly finite algebras of pseudofunctions on locally compact groups
Choi, Y. 09/2015 In: Glasgow Mathematical Journal. 57, 3, p. 693-707. 15 p.
Journal article
Weak amenability for Fourier algebras of 1-connected nilpotent Lie groups
Choi, Y., Ghandehari, M. 15/04/2015 In: Journal of Functional Analysis. 268, 8, p. 2440-2463. 24 p.
Journal article
Weak and cyclic amenability for Fourier algebras of connected Lie groups
Choi, Y., Ghandehari, M. 1/06/2014 In: Journal of Functional Analysis. 266, 11, p. 6501-6530. 30 p.
Journal article
A nonseparable amenable operator algebra which is not isomorphic to a {$C^*$}-algebra
Choi, Y., Farah, I., Ozawa, N. 10/03/2014 In: Forum of Mathematics, Sigma. 2, 12 p.
Journal article
ZL-amenability and characters for the group algebras of restricted direct products of finite groups
Alaghmandan, M., Choi, Y., Samei, E. 1/03/2014 In: Journal of Mathematical Analysis and Applications. 411, 1, p. 314-328. 15 p.
Journal article
Singly generated operator algebras satisfying weakened versions of amenability
Choi, Y. 2014 In: Algebraic methods in functional analysis. Basel : Springer Verlag p. 33-44. 12 p. ISBN: 9783034805018. Electronic ISBN: 9783034805025.
Chapter
ZL-amenability constants of finite groups with two character degrees
Alaghmandan, M., Choi, Y., Samei, E. 2014 In: Canadian Mathematical Bulletin. 57, p. 449-462. 14 p.
Journal article
Approximately multiplicative maps from weighted semilattice algebras
Choi, Y. 08/2013 In: Journal of the Australian Mathematical Society. 95, 1, p. 36-67. 32 p.
Journal article
On commutative, operator amenable subalgebras of finite von Neumann algebras
Choi, Y. 05/2013 In: Journal für die reine und angewandte Mathematik (Crelle's Journal). 678, p. 201-222. 22 p.
Journal article
Quotients of Fourier algebras, and representations which are not completely bounded
Choi, Y., Samei, E. 20/03/2013 In: Proceedings of the American Mathematical Society. 141, 7, p. 2379-2388. 10 p.
Journal article
Simplicial cohomology of band semigroup algebras
Choi, Y., Gourdeau, F., White, M.C. 08/2012 In: Proceedings of the Royal Society of Edinburgh: Section A Mathematics. 142, 4, p. 715-744. 30 p.
Journal article
Characterizing derivations from the disk algebra to its dual
Choi, Y., Heath, M.J. 3/08/2011 In: Proceedings of the American Mathematical Society. 139, 3, p. 1073-1080. 8 p.
Journal article
Approximate amenability of Schatten classes, Lipschitz algebras and second duals of Fourier algebras
Choi, Y., Ghahramani, F. 2011 In: The Quarterly Journal of Mathematics. 62, 1, p. 39-58. 20 p.
Journal article
Group representations with empty residual spectrum
Choi, Y. 05/2010 In: Integral Equations and Operator Theory. 67, 1, p. 95-107. 13 p.
Journal article
Simplicial cohomology of augmentation ideals in $\ell^1(G)$
Choi, Y. 02/2010 In: Proceedings of the Edinburgh Mathematical Society. 53, 1, p. 97-109. 13 p.
Journal article
Hochschild homology and cohomology of $\ell^1(\mathbb Z_+^k)$
Choi, Y. 2010 In: The Quarterly Journal of Mathematics. 61, 1, p. 1-28. 28 p.
Journal article
Injective convolution operators on $\ell^\infty(\Gamma)$ are surjective
Choi, Y. 2010 In: Canadian Mathematical Bulletin. 53, 3, p. 447-452. 6 p.
Journal article
Simplicial homology of strong semilattices of Banach algebras
Choi, Y. 2010 In: Houston Journal of Mathematics. 36, 1, p. 237-260. 24 p.
Journal article
Splitting maps and norm bounds for the cyclic cohomology of biflat Banach algebras
Choi, Y. 2010 In: Banach Algebras 2009. Warsaw : Polish Academy of Sciences p. 105-121. 17 p. ISBN: 9788386806102.
Chapter
Translation-finite sets and weakly compact derivations from $\ell^1(\mathbb Z_+)$ to its dual
Choi, Y., Heath, M.J. 2010 In: Bulletin of the London Mathematical Society. 42, 3, p. 429-440. 12 p.
Journal article
Uniform bounds for point cohomology of $\ell^1(\mathbb Z_+)$ and related algebras
Choi, Y. 15/10/2009 In: Journal of Mathematical Analysis and Applications. 358, 2, p. 249-260. 12 p.
Journal article
Approximate and pseudo-amenability of various classes of Banach algebras
Choi, Y., Ghahramani, F., Zhang, Y. 15/05/2009 In: Journal of Functional Analysis. 256, 10, p. 3158-3191. 34 p.
Journal article
Biflatness of $\ell^1$-semilattice algebras
Choi, Y. 10/2007 In: Semigroup Forum. 75, 2, p. 253-271. 19 p.
Journal article
Simplicial homology and Hochschild cohomology of Banach semilattice algebras
Choi, Y. 05/2006 In: Glasgow Mathematical Journal. 48, 2, p. 231-245. 15 p.
Journal article
|
2018-03-17 12:51:31
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6436204314231873, "perplexity": 3747.2860667219625}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-13/segments/1521257645069.15/warc/CC-MAIN-20180317120247-20180317140247-00194.warc.gz"}
|
https://deepai.org/publication/stochastic-blockmodels-with-growing-number-of-classes
|
# Stochastic blockmodels with growing number of classes
We present asymptotic and finite-sample results on the use of stochastic blockmodels for the analysis of network data. We show that the fraction of misclassified network nodes converges in probability to zero under maximum likelihood fitting when the number of classes is allowed to grow as the root of the network size and the average network degree grows at least poly-logarithmically in this size. We also establish finite-sample confidence bounds on maximum-likelihood blockmodel parameter estimates from data comprising independent Bernoulli random variates; these results hold uniformly over class assignment. We provide simulations verifying the conditions sufficient for our results, and conclude by fitting a logit parameterization of a stochastic blockmodel with covariates to a network data example comprising a collection of Facebook profiles, resulting in block estimates that reveal residual structure.
## Code Repositories

### StochasticBlockmodel

Exploring inference in variants of a stochastic blockmodel for (directed) network data
## 1 Introduction
The global structure of social, biological, and information networks is sometimes envisioned as the aggregate of many local interactions whose effects propagate in ways that are not yet well understood. There is increasing opportunity to collect data on an appropriate scale for such systems, but their analysis remains challenging (Goldenberg et al., 2009). Here we analyze a statistical model for network data known as the (single-membership) stochastic blockmodel. Its salient feature is that it partitions the nodes of a network into distinct classes whose members all interact similarly with the network. Blockmodels were first associated with the deterministic concept of structural equivalence in social network analysis (Lorrain & White, 1971), where two nodes were considered interchangeable if their connections were equivalent in a formal sense. This concept was adapted to stochastic settings and gave rise to the stochastic blockmodel in work by Holland et al. (1983) and Fienberg et al. (1985). The model and extensions thereof have since been applied in a variety of disciplines (Wang & Wong, 1987; Nowicki & Snijders, 2001; Girvan & Newman, 2002; Airoldi et al., 2005; Doreian et al., 2005; Newman, 2006; Handcock et al., 2007; Hoff, 2008; Airoldi et al., 2008; Copic et al., 2009; Mariadassou et al., 2010; Karrer & Newman, 2011).
In this work we provide a finite-sample confidence bound that can be used when estimating network structure from data modeled by independent Bernoulli random variates, and also show that under maximum likelihood fitting of a correctly specified $K$-class blockmodel, the fraction of misclassified network nodes converges in probability to zero even when the number of classes $K$ grows with the network size $N$. As noted by Rohe et al. (2011), this is advantageous if we expect class sizes to remain relatively constant even as $N$ increases. Related results for fixed $K$ have been shown by Snijders & Nowicki (1997) for networks with linearly increasing degree, and in a stronger sense for sparse graphs with poly-logarithmically increasing degree by Bickel & Chen (2009).
Our results can be related to those of Rohe et al. (2011), who use spectral methods to bound the number of misclassified nodes in the stochastic blockmodel with increasing $K$, although with the more restrictive requirement of nearly linearly increasing degree. As noted by those authors, this assumption may not hold in many practical settings. Our manner of proof requires only poly-logarithmically increasing degree, and is more closely related to the fixed-$K$ proof of Bickel & Chen (2009), although we note that spectral clustering as suggested by Rohe et al. (2011) provides a computationally appealing alternative to maximum likelihood fitting in practice.
As discussed by Bickel & Chen (2009), one may assume exchangeability in lieu of a generative $K$-class blockmodel: an analogue to de Finetti's theorem for exchangeable sequences states that the probability distribution of an infinite exchangeable random graph is expressible as a mixture of distributions whose components can be approximated by blockmodels (Kallenberg, 2005; Bickel & Chen, 2009). An observed network can then be viewed as a sample drawn from this infinite conceptual population, and so in this case the fitted blockmodel describes one mixture component thereof.
## 2 Statement of results
### 2.1 Problem formulation and definitions
We consider likelihood-based inference for independent Bernoulli data $\{A_{ij}\}$, both when no structure linking the success probabilities $\{P_{ij}\}$ is assumed, as well as the special case when a stochastic blockmodel of known order $K$ is assumed to apply. To this end, let $A$ denote the symmetric adjacency matrix of a simple, undirected graph on $N$ nodes whose entries $A_{ij}$ for $i<j$ are assumed independent $\mathrm{Bernoulli}(P_{ij})$ random variates, and whose main diagonal is fixed to zero. The average degree of this graph is $2M/N$, where $M$ is its expected number of edges. Under a $K$-class stochastic blockmodel, these edge probabilities are further restricted to satisfy
$$P_{ij}=\theta_{z_i z_j}\qquad(i=1,\dots,N;\ j=i+1,\dots,N) \qquad (1)$$
for some symmetric matrix $\theta\in[0,1]^{K\times K}$ and membership vector $z\in\{1,\dots,K\}^N$. Thus the probability of an edge between two nodes is assumed to depend only on the class of each node.
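To make the model concrete, here is a brief simulation sketch (my own illustration, not from the paper): the upper-triangular entries of the adjacency matrix are drawn as independent Bernoulli variates with success probabilities given by (1); the particular values of N, K, and theta are arbitrary.

```python
import numpy as np

# Sample an undirected network from a K-class stochastic blockmodel (illustrative).
rng = np.random.default_rng(0)
N, K = 60, 3
theta = np.array([[0.30, 0.05, 0.05],
                  [0.05, 0.30, 0.05],
                  [0.05, 0.05, 0.30]])        # symmetric K x K edge probabilities
z = rng.integers(0, K, size=N)                # class membership vector

P = theta[z[:, None], z[None, :]]             # P_ij = theta_{z_i z_j}
upper = np.triu(rng.random((N, N)) < P, k=1)  # independent Bernoulli(P_ij) for i < j
A = (upper | upper.T).astype(int)             # symmetric adjacency with zero diagonal
```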
Let $L(A;z,\theta)$ denote the log-likelihood of observing data matrix $A$ under a $K$-class blockmodel with parameters $(z,\theta)$, and $\bar L_P(z,\theta)$ its expectation:

$$L(A;z,\theta)=\sum_{i<j}\bigl\{A_{ij}\log\theta_{z_iz_j}+(1-A_{ij})\log(1-\theta_{z_iz_j})\bigr\},\qquad \bar L_P(z,\theta)=\sum_{i<j}\bigl\{P_{ij}\log\theta_{z_iz_j}+(1-P_{ij})\log(1-\theta_{z_iz_j})\bigr\}.$$
For fixed class assignment $z$, let $N_a$ denote the number of nodes assigned to class $a$, and let $n_{ab}$ denote the maximum number of possible edges between classes $a$ and $b$; i.e., $n_{ab}=N_aN_b$ if $a\neq b$ and $n_{aa}=\binom{N_a}{2}$. Further, let $\hat\theta^{(z)}$ and $\bar\theta^{(z)}$ be symmetric matrices in $[0,1]^{K\times K}$, with

$$\hat\theta^{(z)}_{ab}=\frac{1}{n_{ab}}\sum_{i<j}A_{ij}\,\mathbb 1\{z_i=a,z_j=b\ \text{or}\ z_i=b,z_j=a\},\qquad \bar\theta^{(z)}_{ab}=\frac{1}{n_{ab}}\sum_{i<j}P_{ij}\,\mathbb 1\{z_i=a,z_j=b\ \text{or}\ z_i=b,z_j=a\},$$

defined whenever $n_{ab}\geq 1$. Observe that $\hat\theta^{(z)}$ comprises sample proportion estimators as a function of $z$, whereas $\bar\theta^{(z)}$ is its expectation under the independent $\{P_{ij}\}$ model. Taken over all class assignments $z$, the sets $\{\hat\theta^{(z)}\}$ comprise a sufficient statistic for the family of $K$-class stochastic blockmodels, and for each $z$, $\hat\theta^{(z)}$ maximizes $L(A;z,\cdot)$. Analogously, the sets $\{\bar\theta^{(z)}\}$ are functions of the model parameters $\{P_{ij}\}$, and maximize $\bar L_P(z,\cdot)$. We write $\hat\theta$ and $\bar\theta$ when the choice of $z$ is understood, and $L(A;z)$ and $\bar L_P(z)$ to abbreviate $L(A;z,\hat\theta)$ and $\bar L_P(z,\bar\theta)$ respectively.
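Continuing the simulation sketch above (again my own illustration), the block-level sample proportions and their expectations can be computed for any candidate assignment by averaging $A_{ij}$ and $P_{ij}$ over the node pairs that each pair of classes induces.

```python
import numpy as np

def block_averages(M, z, K):
    """Average the upper-triangular entries of M over each class pair (a, b), a <= b."""
    sums = np.zeros((K, K))
    counts = np.zeros((K, K))
    N = len(z)
    for i in range(N):
        for j in range(i + 1, N):
            a, b = sorted((z[i], z[j]))
            sums[a, b] += M[i, j]
            counts[a, b] += 1
    return np.where(counts > 0, sums / np.maximum(counts, 1), 0.0)

theta_hat = block_averages(A, z, K)  # sample proportions for assignment z
theta_bar = block_averages(P, z, K)  # their expectations under {P_ij}
```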
Finally, observe that when a blockmodel with parameters $(\bar z,\bar\theta)$ is in force, then $P_{ij}=\bar\theta_{\bar z_i\bar z_j}$ in accordance with (1), and consequently $\bar L_P(z,\theta)$ is maximized by the true parameter values $(\bar z,\bar\theta)$:

$$\bar L_P(\bar z,\bar\theta)-\bar L_P(z,\theta)=\sum_{i<j}D(P_{ij}\,\|\,\theta_{z_iz_j})\geq 0,$$

where $D(p\,\|\,q)$ denotes the Kullback–Leibler divergence of a $\mathrm{Bernoulli}(p)$ distribution from a $\mathrm{Bernoulli}(q)$ one.
### 2.2 Fitting a K-class stochastic blockmodel to independent Bernoulli trials
Fitting a $K$-class stochastic blockmodel to independent $\mathrm{Bernoulli}(P_{ij})$ trials yields estimates $\hat\theta$ of averages of subsets of the parameter set $\{P_{ij}\}$, with each class assignment $z$ inducing a partition of that set. We begin with a basic lemma that expresses the difference $L(A;z)-\bar L_P(z)$ in terms of $\hat\theta$ and $\bar\theta$, and follows directly from their respective maximizing properties.
###### Lemma 1
Let $A$ comprise independent $\mathrm{Bernoulli}(P_{ij})$ trials. Then the difference $L(A;z)-\bar L_P(z)$ can be expressed, for $X=\sum_{i<j}\{A_{ij}\log\bar\theta_{z_iz_j}+(1-A_{ij})\log(1-\bar\theta_{z_iz_j})\}$, as

$$L(A;z)-\bar L_P(z)=\sum_{a\leq b}n_{ab}\,D(\hat\theta_{ab}\,\|\,\bar\theta_{ab})+X-E(X).$$
We first bound the former quantity in this expression, which provides a measure of the distance between $\hat\theta$ and its estimand $\bar\theta$ under the setting of Lemma 1. The bound is used in subsequent asymptotic results, and also yields a kind of confidence measure on $\hat\theta$ in the finite-sample regime.
###### Theorem 1
Suppose that a $K$-class stochastic blockmodel is fitted to data comprising independent $\mathrm{Bernoulli}(P_{ij})$ trials, where, for any class assignment $z$, estimate $\hat\theta$ maximizes the blockmodel log-likelihood $L(A;z,\cdot)$. Then with probability at least $1-\delta$,
maxz{∑a≤bnabD(^θab∣∣¯θab)}
Theorem 1 is proved in the Appendix via the method of types: for fixed $z$, the probability of any realization of $\hat\theta$ is first bounded by $\exp\{-\sum_{a\leq b}n_{ab}D(\hat\theta_{ab}\,\|\,\bar\theta_{ab})\}$. A counting argument then yields a deviation result in terms of the number of possible values of $\hat\theta$, and finally a union bound is applied so that the result holds uniformly over all possible choices of assignment vector $z$.
Our second result is asymptotic, and combines Theorem 1 with a Bernstein inequality for bounded random variables, applied to the latter terms $X-E(X)$ in Lemma 1. To ensure boundedness we assume minimal restrictions on each $P_{ij}$; this Bernstein inequality, coupled with a union bound to ensure that the result holds uniformly over all $z$, dictates growth restrictions on $K$ and $M$.
###### Theorem 2
Assume the setting of Theorem 1, whereby a -class blockmodel is fitted to independent random variates , and further assume that for all and . Then if and for some ,
$$\max_z\,|L(A;z)-\bar L_P(z)|=o_P(M).$$
Thus whenever each is bounded away from 0 and 1 in the manner above, the maximized log-likelihood function is asymptotically well behaved in network size as long as the network’s average degree grows faster than and the number of classes fitted to it grows no faster than .
### 2.3 Fitting a correctly specified K-class stochastic blockmodel
The above results apply to the general case of independent Bernoulli data $\{A_{ij}\}$, with no additional structure assumed amongst the set of success probabilities $\{P_{ij}\}$; if we further assume the data to be generated by a $K$-class stochastic blockmodel whose parameters are subject to suitable identifiability conditions, it is possible to characterize the behavior of the class assignment estimator $\hat z$ under maximum likelihood fitting of a correctly specified $K$-class blockmodel.
###### Theorem 3
If the conclusion of Theorem 2 holds, and data are generated according to a $K$-class blockmodel with membership vector $\bar z$, then

$$\bar L_P(\bar z)-\bar L_P(\hat z)=o_P(M) \qquad (3)$$

with respect to the maximum-likelihood $K$-class blockmodel class assignment estimator $\hat z$.

Let $N_e(\hat z)$ be the number of incorrect class assignments under $\hat z$, counted for every node whose true class under $\bar z$ is not in the majority within its estimated class under $\hat z$. If furthermore the following identifiability conditions hold with respect to the model sequence:

(i) for all blockmodel classes $a$, the class size $N_a$ grows as $\Omega(N/K)$;

(ii) the following holds over all distinct class pairs $(a,b)$ and all classes $c$:

$$\min_{(a,b)}\max_c\left\{D\!\left(\bar\theta_{ac}\,\Big\|\,\frac{\bar\theta_{ac}+\bar\theta_{bc}}{2}\right)+D\!\left(\bar\theta_{bc}\,\Big\|\,\frac{\bar\theta_{ac}+\bar\theta_{bc}}{2}\right)\right\}=\Omega\!\left(\frac{MK}{N^2}\right),$$

then it follows from (3) that $N_e(\hat z)/N$ goes to zero in probability.

Thus the conclusion of Theorem 3 is that under suitable conditions the fraction of misclassified nodes goes to zero in probability, yielding a convergence result for stochastic blockmodels with growing number of classes. Condition (i) stipulates that all class sizes grow at a rate that is eventually bounded below by a single constant times $N/K$, while condition (ii) ensures that any two rows of $\bar\theta$ differ in at least one entry by an amount that is eventually bounded below by a single constant times $MK/N^2$. Observe that if the conditions on $K$ and $M$ sufficient for Theorem 2 are eventually met, then it follows that $N_e(\hat z)/N$ goes to zero in probability.
## 3 Numerical results
We now present results of a small simulation study undertaken to investigate the assumptions and conditions of Theorems 1–3 above, in which $K$-class blockmodels were fitted to various networks generated at random from models corresponding to each of the three theorems. Because exact maximization in $z$ of the blockmodel log-likelihood $L(A;z)$ is computationally intractable even for moderate $N$, we instead employed Gibbs sampling to explore the function and recorded the best value of $L(A;z)$ visited by the sampler. As the results of Theorems 1 and 2 hold uniformly in $z$, however, we expect $\bar\theta$ and $\bar L_P(z)$ to be close to their empirical estimates whenever the network is sufficiently large, regardless of the approach employed to select $z$. This fact also suggests that a single-class (Erdös-Rényi) blockmodel may come closest to achieving equality in Theorems 1 and 2, as many class assignments are equally likely a priori to have high likelihood. By similar reasoning, a weakly identifiable model should come closest to achieving the error bound in Theorem 3, such as one with nearly identical within- and between-class edge probabilities. We describe each of these cases empirically in the remainder of this section.
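The paper explores $L(A;z)$ with Gibbs sampling; as a much simpler stand-in (my own illustration, not the authors' procedure), the sketch below does greedy single-node relabeling against the profile likelihood, i.e., the blockmodel log-likelihood with $\theta$ replaced by the block sample proportions. The sweep count and seed are arbitrary.

```python
import numpy as np

def profile_loglik(A, z, K):
    """Blockmodel log-likelihood with theta replaced by block sample proportions."""
    iu, ju = np.triu_indices(len(z), k=1)
    a = np.minimum(z[iu], z[ju])
    b = np.maximum(z[iu], z[ju])
    edges = A[iu, ju]
    sums = np.zeros((K, K))
    counts = np.zeros((K, K))
    np.add.at(sums, (a, b), edges)
    np.add.at(counts, (a, b), 1)
    theta_hat = np.where(counts > 0, sums / np.maximum(counts, 1), 0.5)
    p = np.clip(theta_hat[a, b], 1e-12, 1 - 1e-12)
    return float(np.sum(edges * np.log(p) + (1 - edges) * np.log(1 - p)))

def greedy_fit(A, K, n_sweeps=10, seed=0):
    """Move one node at a time to the class that most improves the profile likelihood."""
    rng = np.random.default_rng(seed)
    z = rng.integers(0, K, size=A.shape[0])
    for _ in range(n_sweeps):
        for i in rng.permutation(len(z)):
            scores = []
            for c in range(K):
                cand = z.copy()
                cand[i] = c
                scores.append(profile_loglik(A, cand, K))
            z[i] = int(np.argmax(scores))
    return z
```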
First, the tightness of the confidence bound of (2) from Theorem 1 was investigated by fitting $K$-class blockmodels to Erdös-Rényi networks comprising independent trials, with the number of nodes and edge probability 0.075 chosen to match the data analysis example in the sequel. For each $K$, the error terms were recorded for each of 100 trials and compared to the respective 95% confidence bounds (0.05) derived from Theorem 1. The bounds overestimated the respective errors by a factor of 3 to 7 on average, with small standard deviation. In this worst-case scenario the bound is loose, but not unusable; the errors never exceeded the 95% confidence bounds in any of the trials.
To test whether the assumptions of Theorem 2 are necessary as well as sufficient to obtain convergence of to , blockmodels were next fitted to Erdös-Rényi networks of increasing size, for in the range 50–1050. The corresponding normalized log-likelihood error for different rates of growth in the expected number of edges and the number of fitted classes is shown in Fig. 1. Observe from the leftmost panel that when and , as prescribed by the theorem, this error decreases in . If the edge density is reduced to , we observe in the center panel convergence when and divergence when . This suggests that the error as a function of follows Theorem 2 closely, but that the network can be somewhat more sparse than it requires.
To test the conditions of Theorem 3, blockmodels with parameters and increasing class size were used to generate data, and corresponding node misclassification error rates were recorded as a function of correctly specified -class blockmodel fitting. Model parameter was chosen to yield equally-sized blocks, so as to meet identifiability condition (i) of Theorem 3. Parameter was chosen to yield within-class and between-class success probabilities with the property that for any class pair , the condition was satisfied, with ; identifiability condition (ii) was thus met only in the case. The rightmost panel of Fig. 1 shows the fraction of misclassified nodes when and , corresponding to the setting in which convergence of to was observed above; this fraction is seen to decay when or , but to increase when . This behavior conforms with Theorem 3 and suggests that its identifiability conditions are close to being necessary as well as sufficient.
## 4 Network data example
### 4.1 Facebook social network dataset
To illustrate the use of our results in the fitting of -class stochastic blockmodels to network data, we employed a publicly available social network dataset containing undergraduate Facebook profiles from the California Institute of Technology (people.maths.ox.ac.uk/porterm/data/facebook5.zip). These profiles indicate whenever a pair of students have identified one another as friends, yielding a network of edges and accompanying covariate information including gender, class year, and hall of residence.
Traud et al. (2011) applied community detection algorithms to this network, and compared their output to partitions based on categorical covariates such as those identified above. They concluded that a grouping of students by residence hall was most similar to the best algorithmic grouping obtained, and thus that shared residence hall membership was the best predictor for the formation of community structure. This structure is reflected in the leftmost panel of Fig. 2, which shows the network adjacency structure under an ordering of students by residence hall.
### 4.2 Logit blockmodel parameterization and fitting procedure
Here we build on the results of Traud et al. (2011) by taking covariate information explicitly into account when fitting the Facebook dataset described above. Specifically, by assuming only that links are independent Bernoulli variates and then employing confidence bounds to assess fitted blocks by way of parameter , we examine these data for residual community structure beyond that well explained by the covariates themselves.
Since the results of Theorems 1 and 2 hold uniformly over all choices of blockmodel membership vector $z$, we may select $z$ in any manner, including those that depend on covariates. For this example, we determined an approximate maximum likelihood estimate $\hat z$ under a logit blockmodel that allows the direct incorporation of covariates. The model is parameterized such that the log-odds ratio of an edge occurrence between nodes $i$ and $j$ is given by
$$\log\frac{P_{ij}}{1-P_{ij}}=\tilde\theta_{z_iz_j}+x(i,j)^{\mathrm T}\beta\qquad(i=1,\dots,N;\ j=i+1,\dots,N), \qquad (4)$$
where $x(i,j)$ denotes a vector of covariates indicating shared group membership, and model parameters are estimated from the data. Four categorical covariates were used: the three indicated above, plus an eight-category covariate indicating the range of the observed degree of each node; see Karrer & Newman (2011) for related discussion on this point. Matrix $\tilde\theta$ is analogous to blockmodel parameter $\theta$, vector $z$ specifies the blockmodel class assignment, and vector $\beta$ was implemented here with sum-to-zero identifiability constraints.
Because exact maximization of the log-likelihood function corresponding to (4) is computationally intractable, we instead employed an approach that alternated between Markov chain Monte Carlo exploration of $z$ while holding $(\tilde\theta,\beta)$ constant, and optimization of $(\tilde\theta,\beta)$ while holding $z$ constant. We tested different initialization methods and observed that highest likelihoods were consistently produced by first fitting class assignment vector $z$. This fitting procedure provides a means of estimating averages over subsets of the set $\{P_{ij}\}$, under the assumption that the network data comprise independent $\mathrm{Bernoulli}(P_{ij})$ trials.
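For concreteness, here is a minimal sketch (my own, with made-up covariates and parameter values, not the fitted model from the paper) of how an edge probability follows from the logit parameterization in (4): the log-odds add a block effect and a covariate effect, and the inverse logit maps back to a probability.

```python
import numpy as np

def edge_probability(theta_tilde, z, beta, x, i, j):
    """Inverse logit of the block effect plus shared-covariate effects, as in (4)."""
    log_odds = theta_tilde[z[i], z[j]] + x(i, j) @ beta
    return 1.0 / (1.0 + np.exp(-log_odds))

# Hypothetical example: two blocks and one indicator covariate (same residence hall).
theta_tilde = np.array([[-1.0, -3.0],
                        [-3.0, -1.0]])       # block-level log-odds
beta = np.array([2.0])                       # effect of sharing a residence hall
hall = np.array([0, 0, 1, 1])                # made-up residence-hall labels
z = np.array([0, 1, 0, 1])                   # made-up block assignments
x = lambda i, j: np.array([1.0 if hall[i] == hall[j] else 0.0])

print(edge_probability(theta_tilde, z, beta, x, 0, 1))  # different block, same hall
print(edge_probability(theta_tilde, z, beta, x, 0, 2))  # same block, different hall
```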
### 4.3 Data analysis
We fitted the logit blockmodel of (4) for values of ranging from to using the stochastic maximization procedure described in the preceding paragraph, and gauged model order by the Bayesian information criterion and out-of-sample prediction using five-fold cross validation, shown respectively in the center and rightmost panels of Fig. 2. These plots suggest a relatively low model order, beginning around . The corresponding 95% confidence bounds on the divergence of from provided by Theorem 1 also yield small values for in the range 4–7: for example, when , the normalized sum of Kullback–Leibler divergences is bounded by 0.0067. Corresponding normalized root-mean-square error bounds over this range of are approximately one order of magnitude larger.
We then examined approximate maximum likelihood estimates of for in the range 4–7, as shown in the top two rows of Fig. 3; larger values of also reveal block structure, but exhibit correspondingly larger confidence bound evaluations. The permuted adjacency structures under each estimated class assignment are shown in the top row, along with the corresponding values of below in the second row. The structure of over this range of suggests that after covariates are taken into account, it is possible to identify a subset of students who divide naturally into two residual “meta-groups” that interact less frequently with one another in comparison to the remaining subjects in the dataset; the precision of the corresponding estimates can be quantified by Theorem 1, as in the caption of Fig. 3.
As increases, these groups become more tightly concentrated, as extra blocks absorb students whose connections are more evenly distributed. While the exact membership of each group varied over , in part due to stochasticity in the fitting algorithm employed, we observed 199 students whose meta-group membership remained constant. The bottom row of Fig. 3 shows the 8 residence halls identified for these sets of students, with the ninth category indicating unreported; observe that the effect of residence hall is still visible in that the left-hand grouping has more students in halls 4–7, while the right-hand grouping has more students in halls 1, 2, and 8.
## Acknowledgement
Work supported in part by the National Science Foundation, National Institute of Health, Army Research Office and the Office of Naval Research, U.S.A. Additional funding provided by the Harvard Medical School’s Milton Fund.
## Appendix
### Proofs of Theorems 1 and 2
###### Proof (of Theorem 1)
To begin, observe that for any fixed class assignment , every is a sum of independent Bernoulli random variables, with corresponding mean . A Chernoff bound (Dubhashi & Panconesi, 2009) shows
pr(^θab ≥¯θab+t)≤e−nabD(¯θab+t∣∣¯θab),0
Since these bounds also hold respectively for , we may bound the probability of any given realization of in terms of the Kullback–Leibler divergence of from :
$$\mathrm{pr}(\hat\theta_{ab}=\vartheta)\leq e^{-n_{ab}D(\vartheta\,\|\,\bar\theta_{ab})}.$$
By independence of the , this implies a corresponding bound on the probability of any :
$$\mathrm{pr}(\hat\theta)\leq\exp\Bigl\{-\sum_{a\leq b}n_{ab}D(\hat\theta_{ab}\,\|\,\bar\theta_{ab})\Bigr\}. \qquad (5)$$
Now, let denote the range of for fixed , and observe that since each of the lower-diagonal entries of can independently take on distinct values, we have that . Subject to the constraint that , we see that this quantity is maximized when for all , and hence
|ˆΘ|≤[(N2)/(K+12)+1](K+12)<(N2/K2+1)K2+K2<(N/K+1)K2+K. (6)
Now consider the event that is at least as large as some ; the probability of this event is given by for
$$\hat\Theta_\epsilon=\Bigl\{\hat\theta\in\hat\Theta:\sum_{a\leq b}n_{ab}D(\hat\theta_{ab}\,\|\,\bar\theta_{ab})\geq\epsilon\Bigr\}. \qquad (7)$$
Since for all , we have from (5) and (7) that
$$\mathrm{pr}(\hat\Theta_\epsilon)=\sum_{\hat\theta\in\hat\Theta_\epsilon}\mathrm{pr}(\hat\theta)\leq\sum_{\hat\theta\in\hat\Theta_\epsilon}e^{-\sum_{a\leq b}n_{ab}D(\hat\theta_{ab}\,\|\,\bar\theta_{ab})}\leq\sum_{\hat\theta\in\hat\Theta_\epsilon}e^{-\epsilon}=|\hat\Theta_\epsilon|\,e^{-\epsilon},$$
and since , we may use (6) to obtain, for fixed class assignment ,
$$\mathrm{pr}\Bigl\{\sum_{a\leq b}n_{ab}D(\hat\theta_{ab}\,\|\,\bar\theta_{ab})\geq\epsilon\Bigr\}<(N/K+1)^{K^2+K}e^{-\epsilon}. \qquad (8)$$
Appealing to a union bound over all possible class assignments and setting then yields the claimed result.
###### Proof (of Theorem 2)
By Lemma 1, the difference can be expressed for any fixed class assignment as , where the first term satisfies the deviation bound of (8), and comprises a weighted sum of independent random variables.
To bound the quantity , observe that since by assumption , the same is true for each corresponding average . As a result, the random variables comprising are each bounded in magnitude by . This allows us to apply a Bernstein inequality for sums of bounded independent random variables due to Chung & Lu (2006, Theorems 2.8 and 2.9, p. 27), which states that for any ,
pr{|X−E(X)|≥ϵ}≤2exp{−ϵ22∑i
Finally, observe that since the event implies either the event or the event , we have for fixed assignment that
pr{|L(A;z)−¯LP(z) ≥2ϵM}≤pr[{∑a≤bnabD(^θab∣∣¯θab)≥ϵM}∪{|X−E(X)|≥ϵM}].
Summing the right-hand sides of (8) and (9), and then over all possible assignments, yields
pr{maxz|L(A;z)−¯LP(z)|≥2ϵM}≤exp{KlogN+(K2+K)log(N/K+1)−ϵM}+2exp{KlogN−ϵ2M8log2N+(4/3)ϵlogN},
where we have used the fact that in (9). It follows directly that if and , then for every fixed as claimed.
### Proof of Theorem 3
###### Proof (of Theorem 3)
To begin, note that Theorem 2 holds uniformly in , and thus implies that
$$|\bar L_P(\bar z)-L(A;\bar z)|+|\bar L_P(\hat z)-L(A;\hat z)|=o_P(M).$$
Since is the maximum-likelihood estimate of class assignment , we know that , implying that for some . Thus, by the triangle inequality,
$$|\bar L_P(\bar z)-\bar L_P(\hat z)+\delta|\leq|\bar L_P(\bar z)-L(A;\bar z)|+|\bar L_P(\hat z)-(L(A;\bar z)+\delta)|=o_P(M),$$
and since under any blockmodel with parameter , we have .
Under conditions (i) and (ii) of Theorem 3, we will now show that also
$$\bar L_P(\bar z)-\bar L_P(\hat z)=\frac{N_e(\hat z)}{N}\,\Omega(M), \qquad (10)$$
holds for every realization of , thus implying that and proving the theorem.
To show (10), first observe that any blockmodel class assignment vector induces a corresponding partition of the set according to . Formally, partitions into subsets via the mapping
ζij:(i=1,…,N;j=i+1,…,N)→(l=1,…,L).
This partition is separable in the sense that there exists a bijection between and the upper triangular portion of blockmodel parameter , such that we write for membership vector . More generally, for any partition of , we may define as the arithmetic average over all in the subset indexed by . Thus we may also define
¯L∗P(Π)=∑i
so that and coincide on partitions corresponding to admissible blockmodel assignments .
The establishment of (10) proceeds in three steps: first, we construct and analyze a refinement of the partition induced by any blockmodel assignment vector in terms of its error ; then, we show that refinements increase ; finally, we apply these results to the maximum-likelihood estimate .
###### Lemma 2
Consider a -class stochastic blockmodel with membership vector , and let denote the partition of its associated induced by any . For every , there exists a partition that refines and with the property that, if conditions (i) and (ii) of Theorem 3 hold,
$$\bar L_P(\bar z)-\bar L^*_P(\Pi^*)=\frac{N_e(\hat z)}{N}\,\Omega(M), \qquad (11)$$
where counts the number of nodes whose true class assignments under are not in the majority within their respective class assignments under .
###### Lemma 3
Let be a refinement of any partition of the set ; then .
Since Lemma 2 applies to any admissible blockmodel assignment vector , it also applies to the maximum-likelihood estimate for any realization of the data; each in turn induces a partition of blockmodel edge probabilities , and (11) holds with respect to its refinement . By Lemma 3, . Finally, observe that by the definition of , and so , thereby establishing (10).
###### Proof (of Lemma 2)
The construction of will take several steps. For a given membership class under , partition the corresponding set of nodes into subclasses according to the true class assignment of each node. Then remove one node from each of the two largest subclasses so obtained, and group them together as a pair; continue this pairing process until no more than one nonempty subclass remains, then terminate. Observe that if we denote pairs by their node indices as , then by construction but .
Repeat the above procedure for each class under , and let denote the total number of pairs thus formed. For each of the pairs , find all other distinct indices for which the following holds:
$$D\!\left(P_{ik}\,\Big\|\,\frac{P_{ik}+P_{jk}}{2}\right)+D\!\left(P_{jk}\,\Big\|\,\frac{P_{ik}+P_{jk}}{2}\right)\geq\frac{CMK}{N^2}, \qquad (12)$$
where is the constant from condition (ii) of Theorem 3, and indices and in (12) are to be interpreted respectively as whenever , and whenever . Let denote the total number of distinct triples that can be formed in this manner.
We are now ready to construct the partition of the probabilities as follows: For each of the triples , remove (or if ) and (or ) from their previous subset assignment under , and place them both in a new, distinct two-element subset. We observe the following:
(i) The partition is a refinement of the partition induced by : Since nodes and have the same class label under in that , it follows that for any , and are in the same subset under .
(ii) Since for each class at most one nonempty subclass remains after the pairing process, the number of pairs is at least half the number of misclassifications in that class. Therefore we conclude .
(iii) Condition (ii) of Theorem 3 implies that for every pair of classes , there exists at least one class for which (12) holds eventually. Thus eventually, for any of the pairs , we obtain a number of triples at least as large as the cardinality of class . Condition (i) in turn implies that the cardinality of the smallest class grows as , and thus we may write .
We can now express the difference as a sum of nonnegative divergences , where is the assignment mapping associated to , and use (12) to lower-bound this difference:
¯LP(¯z)−¯L∗P(Π∗)=∑i
###### Proof (of Lemma 3)
Let be a refinement of any partition of the set , and given indexing , let
|
2021-04-12 18:32:24
|
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8740214705467224, "perplexity": 819.9215206886653}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-17/segments/1618038069133.25/warc/CC-MAIN-20210412175257-20210412205257-00433.warc.gz"}
|
https://www.physicsforums.com/threads/equal-area-question.163658/
|
# Equal area question
## Homework Statement
"Find a horizontal line y=k that divides the area between y=x^2 and y=9 into two parts"
## The Attempt at a Solution
Found intersection at (-3,9), (3,9)
Found total area to be 36, half(the area needed for each portion) to be 18. Don't know where to go from here.
Find the two areas as a function of k.
Can you elaborate more on that? I'm not quite sure what you mean.
y=k divides the total area into two parts, A1=A1(k) and A2=A2(k). You need to find an expression for each area as a function of k and then find the value of k for which A1(k)=A2(k)
Hurkyl
Don't be intimidated by the variable k -- the fact it's there changes nothing. You know how to compute areas, so compute the area of one of the portions.
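Following those hints, here is a quick symbolic check (my own sketch, not from the thread): the lower portion, between y = x^2 and y = k, has area (4/3)k^(3/2); setting that equal to 18 pins down k.

```python
# Symbolic check of the equal-area value of k (illustrative sketch using sympy).
import sympy as sp

x, k = sp.symbols("x k", positive=True)
lower_area = sp.integrate(k - x**2, (x, -sp.sqrt(k), sp.sqrt(k)))  # region below y = k
print(sp.simplify(lower_area))             # 4*k**(3/2)/3
print(sp.solve(sp.Eq(lower_area, 18), k))  # k = (27/2)**(2/3), about 5.67
```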
https://codeforces.com/blog/entry/52099
### hloya_ygrt's blog
By hloya_ygrt, history, 5 years ago, translation,
First accepted Zhigan.
First accepted polygonia.
First accepted div2: RaidenEi.
First accepted div1: Lewin.
First accepted div2: polygonia.
First accepted div1: V--o_o--V.
First accepted div2: xsup.
First accepted div1: anta.
First accepted: ksun48.
» 5 years ago, # | +3 O(1) solution is possible for 810A. Link
• » » 5 years ago, # ^ | +11 My shorter, haha. http://codeforces.com/contest/810/submission/27240056
• » » » 5 years ago, # ^ | 0 python is savage.
» 5 years ago, # | ← Rev. 2 → +18 it's good to mention the First accepted
» 5 years ago, # | ← Rev. 2 → +1 In "809 A — Do you want a date?", the last line of explanation says 2**(i-1) * 2**(n-i-1). I believe it should be (2**i - 1) * (2**(n-i) - 1), i.e. make sure at least one of [1, i] gets picked, and at least one of [i+1,n] gets picked.
• » » 5 years ago, # ^ | 0 Those subsets are allowed to be empty.
• » » » 5 years ago, # ^ | 0 x[i+1] - x[i] is added to final answer for all those subsets where there is at least one integer picked from [1,i] and at least one integer picked from [i+1,n]. Note the closed intervals, and the fact that the implementation of the solution is doing what I described.
• » » » » 5 years ago, # ^ | 0 Yea, you are right, I will fix editorial soon.
• » » » » 5 years ago, # ^ | 0 Oh, OK. I misinterpreted the ranges you were considering.
• » » 5 years ago, # ^ | ← Rev. 2 → +5 "They are adding to the answer xi — xi+1 in all the subsets, in which there is at least one point a ≤ i and at least one point b ≥ i + 1." I really don't understand the explanation: if a subset s contains x_i and x_{i+1} and also contains two other points x_a with a ≤ i and x_b with b ≥ i+1, then, since the points are sorted, x_a < x_i < x_{i+1} < x_b, so by definition F(s) = |x_b - x_a|. Can someone help me here?
• » » » 5 years ago, # ^ | +7 I meant that I'm dividing the whole range a, a + 1, ..., i, i + 1, ..., b into edges between neighbours, so the answer is x_b - x_a = (x_{a+1} - x_a) + (x_{a+2} - x_{a+1}) + ... + (x_{i+1} - x_i) + ... + (x_b - x_{b-1}). We add each part separately.
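A minimal Python sketch of that neighbour-gap decomposition (an illustration only, not the setter's code; the function name subset_range_sum is made up). Each gap x_{i+1} - x_i is counted once for every subset that picks at least one point on each side of it:

MOD = 10**9 + 7

def subset_range_sum(xs):
    # Sum of (max - min) over all non-empty subsets, via the neighbour-gap decomposition.
    xs = sorted(xs)
    n = len(xs)
    total = 0
    for i in range(n - 1):
        left = pow(2, i + 1, MOD) - 1        # subsets containing some point with index <= i
        right = pow(2, n - i - 1, MOD) - 1   # subsets containing some point with index >= i + 1
        total = (total + (xs[i + 1] - xs[i]) * left % MOD * right) % MOD
    return total

print(subset_range_sum([4, 7]))   # {4}, {7}, {4, 7} contribute 0 + 0 + 3 = 3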
• » » » » 5 years ago, # ^ | 0 An anti-quicksort test??? Seriously??? Can someone explain the point of that test case? Java uses different algorithms just for memory reasons, not performance.. :(
• » » » » » 5 years ago, # ^ | 0 This test was added automatically when someone hacked someone's solution, so you should always be careful using sort on java.
• » » 5 years ago, # ^ | 0 Why (i-1) in "2**(i-1) * 2**(n-i-1)"? Is it for the empty subset or for single elements?
» 5 years ago, # | ← Rev. 2 → 0 This is my solution to 810B but it's giving the wrong answer. I don't know why. Can somebody explain?(https://code.hackerearth.com/ad4508S)
• » » 5 years ago, # ^ | 0 Your code fails for the case below (expected: 21, actual: 19). Basically, you need to sort based on the extra value you get by doubling, not just on the doubled value.
5 2
4 4
2 4
3 6
6 6
1 2
» 5 years ago, # | 0 It seems that Div1B has weak tests. I made some mistakes on edge cases, so my solution can't handle the data "8 2 4 7", but I didn't fail the system test. How can I add this data?
• » » 5 years ago, # ^ | 0 While the event is in progress, you can challenge, and I am pretty sure that successful challenges are added to final tests (even though I could not find this stated anywhere). I don't think this can be done after the competition ended. One option could be creating a mock competition with that problem, but I am not sure how much you can change the data.
» 5 years ago, # | 0 In Div2E / Div1C, can anybody prove to me that cell(i, j) is equal to ((i - 1) xor (j - 1)) + 1?
• » » 5 years ago, # ^ | +45 The formulation is just an obfuscated form of Nim with two piles.
• » » » 5 years ago, # ^ | -10 so ... No prove ?
• » » » 5 years ago, # ^ | 0 Can someone link an article about that or care to explain to help us understand that?
• » » » » 5 years ago, # ^ | +7 Found proof: link
• » » 5 years ago, # ^ | +6 I am unable to understand the solution to Div2E / Div1C mentioned above. Can anybody please explain the solution?
• » » » 5 years ago, # ^ | 0 I want to join your demand. Can someone, please, give a reference on any valuable information about this type of problems?
• » » » » 5 years ago, # ^ | +13 I will briefly go through probably the most confusing part of the code. `if(a==1&&A[i]==0&&x==1)continue;` (and the two lines that follow): this line ensures that the X values considered for the dp are less than or equal to X; another way of putting it is `if(prefix_is_equal && x > X[corresponding_bit]) then skip this state` (if the prefix is smaller, then there are no restrictions on the less significant bits). The state transition below makes use of the same idea: `dp[i+1][a&(A[i]==x)][b&(B[i]==y)][c&(C[i]==z)]` (and the two lines that follow); `a&(A[i] == x)` means that the equality flag remains true iff the prefix is equal AND the current bit is equal. `mul(z<<(30-i),dp[i][a][b][c])` just adds the values to the sum according to the XOR value and the significance of the bit. `add(res,solve(x2,y2,k)); sub(res,solve(x2,y-1,k)); sub(res,solve(x-1,y2,k)); add(res,solve(x-1,y-1,k));` is, just in case, the usual way of querying a 2D plane. If you are interested in trying out similar questions of similar difficulty, I'd recommend 747F. (These types of DP questions mostly appear as the last Div2 question; it pretty much decides who will be first in that round, as you can see here.)
• » » » » » 5 years ago, # ^ | ← Rev. 2 → +3 what does the dp and cnt refer to ? same goes for the dp parameters i failed to understand even tho i read the tutorial many times :( can u please clarify :)
• » » » » » » 5 years ago, # ^ | +3 dp and sum have the same parameters, namely: dp[bitmask position][equality flag for x coord][eq flag for y][eq flag for k]. Bitmask position (i): we are currently evaluating the value contributed by the bit at that position.
• » » » » » » » 5 years ago, # ^ | 0 That explains it :D Thx
• » » » » 5 years ago, # ^ | ← Rev. 2 → +37 I have linked a proof for why cell (i, j) has value (i - 1) xor (j - 1) + 1 above. Let's make a function sum(i, j, k), which gives the sum of all integers less than or equal to k that are in the upper-left i * j fragment of the matrix. We can use that to calculate the answer for any fragment: the sum for fragment (x1, y1, x2, y2) is equal to sum(x2, y2, k) - sum(x1 - 1, y2, k) - sum(x2, y1 - 1, k) + sum(x1 - 1, y1 - 1, k). If you don't know why, research "2D prefix sums".
How do we compute that function efficiently? Let's come up with an algorithm for computing the sum of all x xor y, over each x and y such that x ≤ a, y ≤ b and (x xor y) ≤ k. Using that we can easily compute the value of sum(). Let's define a dp function bt() which will compute that value for us (we'll add arguments to the function one by one as we need them). For computing the value, let's add binary digits to the possible values of x xor y one by one, starting from the most significant bit. Let's add an argument i to bt(), which is the index of the digit we're currently adding. Now, for every digit i, there are 4 possibilities: we make the ith bit of x 0 and of y 0, resulting in 0 for (x xor y); or 1 and 0, resulting in 1; or 0 and 1, resulting in 1; or 1 and 1, resulting in 0. But how do we know which of these possibilities is valid (i.e. doesn't result in exceeding the limit for x, y, or (x xor y))?
Suppose we're adding bit 1 at index i in x (while adding bits one by one from most significant to less significant), and x so far doesn't exceed a. x would exceed a if and only if we're adding bit 1 at index i while the ith bit of a is 0, and the suffix starting from bit i + 1 of x in binary representation is equal to the suffix starting from bit i + 1 of a. UPD: by suffix, I mean bits with index i+1 through MAXLOG, which can also be viewed as a prefix depending on how you visualize it; I'll call it suffix in this post. By ith bit, I mean bit (1 << i). If it's unclear to you why that's true, here's an example:
a = 110110000110
x = 11011xxxxxxx (so far)
In the above example, we can't add 1, because x would become bigger than a. Note that in this case the suffix in a (11011) is equal to the suffix in x (11011). Another example:
a = 110110110000
x = 11001xxxxxxx (so far)
Here, we can safely add 1 without making x exceed a. That is because the suffix in a (11011) doesn't equal the suffix in x (11001). The same goes for y with b and for (x xor y) with k. Thus, we only need to keep track of whether the suffix of x currently equals the suffix of a, and the same for y and for k. Thus we add 3 boolean arguments to bt, which becomes bt(i, sufA, sufB, sufK).
Back to the four possibilities: pairs (1, 0) and (0, 1) result in 1, thus add to the answer (1 << i)*numberOfCellsIn(i - 1, newSufA, newSufB, newSufK)*bt(i - 1, newSufA, newSufB, newSufK). Pairs (0, 0) and (1, 1) result in 0, thus just add to the answer numberOfCellsIn(i, newSufA, newSufB, newSufK)*bt(i - 1, newSufA, newSufB, newSufK). We can compute numberOfCellsIn in a fashion similar to bt(). You can check my code if it's unclear how to compute the newSufs. Also check out my code for how to apply this to compute the answer of sum(), since I'm too lazy to finish this tutorial and I think I already explained the confusing parts and the rest is easy. Note that the dp does about lg(max(a, b, k)) * 2 * 2 * 2 operations, which is sufficient to solve the problem as there are only 10^4 queries.
My code is too long because I made functions numberOfCellsIn and bt separately, but I think that only makes it clearer. Hope that helps
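For what it's worth, here is a tiny sketch of just the 2D prefix-sum step described above; sum_rect is a hypothetical stand-in for the digit-DP function sum(i, j, k) and is not implemented here:

def query(x1, y1, x2, y2, k, sum_rect):
    # Standard 2D inclusion-exclusion over the prefix function sum_rect(i, j, k).
    return (sum_rect(x2, y2, k)
            - sum_rect(x1 - 1, y2, k)
            - sum_rect(x2, y1 - 1, k)
            + sum_rect(x1 - 1, y1 - 1, k))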
• » » » » » 5 years ago, # ^ | 0 I am very grateful to you. It really helps. However, regarding the example a = 000011011011, x = xxxxxxx11011 (so far): "we can't add 1, because x would become bigger than a" — why can't I add 1 at this place and follow it up with 0 afterwards?
• » » » » » » 5 years ago, # ^ | ← Rev. 2 → 0 I'm sorry, that should have been reversed, and it should be called prefix instead of suffix. The most significant bit is on the left instead of the right. I'll fix this EDIT: Updated
• » » » » » 5 years ago, # ^ | 0 Really thanks a lot. :)
• » » » » » 5 years ago, # ^ | 0 last problem of div 2 explained so easily. thanks
» 5 years ago, # | 0 27270104 This is my submission to div1 A and it gets wrong answer can someone help please
• » » 5 years ago, # ^ | +11 When you write ans += (arr[i] * ( (b-c) % mod ) ) % mod; modulo operation is performed only for (arr[i] * ( (b-c) % mod ) ) part. So ans is still without modulo
• » » » 5 years ago, # ^ | +10 Thanks
• » » 5 years ago, # ^ | +1 Here is the modified version of your code. 27270623
• » » » 5 years ago, # ^ | 0 Thanks
» 5 years ago, # | ← Rev. 2 → -10 Hey! I tried the problem 809A but got a wrong answer. Please can someone suggest what I did wrong? Code: #include #define MAXN 100010 #define pb push_back #define mp make_pair #define ll long long #define mod 1000000007 using namespace std; int cmpfunc (const void * a, const void * b) { return ( (int)a - (int)b ); } int main(){ int n,sum=0,power=1; cin>>n; int arr[n]; //set arr:: iterator it,it1; for(int i=0;i>arr[i]; } sort(arr,arr+n); for(int i=0;i
• » » 5 years ago, # ^ | +13 It looks like an overflow of int
• » » » 5 years ago, # ^ | 0 but i tried with long long int too but still got wrong ans
• » » » » 5 years ago, # ^ | ← Rev. 2 → +10 You need to do modulo operations each time you do any other operation, cause it can lead to long long overflow otherwise. The real answer for this problem can be as big as 2^300000 is big, so the problem requires to print it modulo 10^9 + 7. Furthermore, your solution is not fast enough for passing all tests.
• » » » » » 5 years ago, # ^ | +10 Thnx for the help! :)
» 5 years ago, # | 0 Didn't get the tutorial for 809A/810C. Can someone help ?
• » » 5 years ago, # ^ | 0 Tutorial is very confusing. Here is a much simpler solution: http://codeforces.com/contest/809/submission/27245918 What we are doing here is adding up, for each point, the contribution of all the pairs it can make to the left. For any point i, when we transition to point i + 1, we must do dp[i + 1] = dp[i] * 2 + (2^i - 1) * (difference between i + 1 and i), where dp[i] represents the answer if i is the right endpoint (^ denotes exponentiation). This is because we can view this as adding the difference between i + 1 and i to all the intervals, and the 2^i - 1 is because we want to extend the leftmost interval by diff, so we must multiply diff by 2^(i - 1), the leftmost + 1 interval by diff * 2^(i - 2)... and so on, which adds up to 2^i - 1.
• » » 5 years ago, # ^ | 0 If we sort points in ascending order, then we can made following thing. Make all possible subsets from all points from i-th to j-th (i and j are always present). Then for all those subsets maximal abs difference is X[j] — X[i]. We can count how many such subsets are there with elementary combinatorics. http://codeforces.com/contest/810/submission/27251202
» 5 years ago, # | 0 I don't quite understand how does the randomly generated prior value of a node helps maintaining the treap while merging in the implementation of Div1D, would someone mind explaining it to me?
• » » 5 years ago, # ^ | 0 If you use a classic binary tree, its depth can be O(n) if the values come in sorted order. When you assign random priorities and use them while merging, the depth of the tree will be O(log n) in expectation.
• » » » 5 years ago, # ^ | 0 Ah, I see. That's much faster to code compared to the spin (rotation) method.
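For readers unfamiliar with the idea, a generic random-priority merge looks roughly like this in Python (a sketch of the general treap idea only, not the Div1D solution; Node and merge are made-up names):

import random

class Node:
    def __init__(self, key):
        self.key = key
        self.prio = random.random()   # random priority; we keep a max-heap on prio
        self.left = None
        self.right = None

def merge(a, b):
    # Merge two treaps assuming every key in a is <= every key in b.
    # The node with the higher priority stays on top, which keeps the
    # expected depth O(log n) regardless of the order keys arrived in.
    if a is None:
        return b
    if b is None:
        return a
    if a.prio > b.prio:
        a.right = merge(a.right, b)
        return a
    b.left = merge(a, b.left)
    return b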
» 5 years ago, # | 0 In 809C , can someone help to prove that number in (i,j) is [(i-1)XOR(j-1)]+1 ?
• » » 5 years ago, # ^ | 0 The position (i, j) can be regarded as two nim piles of sizes (i-1, j-1); thus the mex value of the set {(a, j) | a < i} ∪ {(i, b) | b < j} is equivalent to the Grundy value of the nim state (i-1, j-1), i.e. (i-1) ^ (j-1), plus 1. (Search for the Sprague-Grundy theorem to learn more about the "Grundy value".)
• » » » 5 years ago, # ^ | 0 Perfect ! Thanks so much.
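A brute-force sanity check of that claim (a sketch, not from the editorial): fill the table directly from the mex rule and compare it with ((i - 1) xor (j - 1)) + 1:

N = 64
cell = [[0] * (N + 1) for _ in range(N + 1)]
for i in range(1, N + 1):
    for j in range(1, N + 1):
        seen = {cell[a][j] for a in range(1, i)} | {cell[i][b] for b in range(1, j)}
        v = 1
        while v in seen:          # smallest positive integer not seen above or to the left
            v += 1
        cell[i][j] = v
        assert v == ((i - 1) ^ (j - 1)) + 1
print("closed form matches the mex definition for all cells up to", N)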
» 5 years ago, # | 0 In Problem Div2C/Div1A, we can also count the number of times xi is added and is subtracted from the result. Consequently, the answer becomes the sum of xi * (2^i - 2^(n-1-i)) for i=0,..,n-1.
• » » 5 years ago, # ^ | 0 That's right . I solve it this way , too.
» 5 years ago, # | 0 Can someone help me debug this code: This code is for "Do you want a Date?" problem. I am getting the wrong answer for test case 6. This will be a great help.
• » » 5 years ago, # ^ | 0 In line 13, 1 << var overflows for large var.
• » » » 5 years ago, # ^ | 0 So should I take the modulus with 1000000007 while calculating 2^var, or use fast exponentiation?
• » » » 5 years ago, # ^ | 0 It's still not working after taking care of the 1 << var issue.
• » » » » 5 years ago, # ^ | 0 `i < pow(2, var)`: brute-forcing every possible subset is not feasible, as you have to consider all 2^(3*10^5) sets, which is going to take a whole bunch of time. You should rethink your strategy a bit. =)
» 5 years ago, # | ← Rev. 2 → 0 For 809B, the editorial states: "So we always know in which of the halves [l, mid], [mid + 1, r] there exists at least one point." Why is this statement true?
• » » 5 years ago, # ^ | 0 Because if a half doesn't contain a point, it can't give us the information that the closest point is closer than in the half that does contain one. Why? Because any point inside the interval is closer than any point outside it.
» 5 years ago, # | 0 In the solution of Div1 E.Can you tell me how to prove the formula about the coefficients of C ??? Thanks :)
» 5 years ago, # | ← Rev. 4 → +13 Hi guys! I made a video editorial on the problem 'Glad to meet you'. I couldn't solve it during the contest, so I had to get back at it somehow. :-) We use binary search to find the two points. Here is the link: Div 415 — Glad to Meet You
» 5 years ago, # | +3 Why did 2B disappear? The problem seems to be removed, and tutorial is not available now.
https://physics.stackexchange.com/questions/468050/is-this-air-water-thrust-drag-comparison-correct
# Is this air/water thrust/drag comparison correct?
On another list, someone asked regarding a blimp animal (hot-air balloon type object) propelling itself by using basically a bellows - and comparison to a nautilus (which bellows water to move around).
So,
"but seeing as air density is very low compared to that of water, you would need huge amounts of air expulsion pressure, so much so that I doubt a biological organism would be able to generate unlike one that lives in water..."
but then ..
"Air density being much lower than water means that the air jet produces less thrust, but also the blimp has to overcome less drag. I'd assume any potential thrust/drag ratio is the same for air as it is for water, since in both cases the same fluids are producing the thrust and drag."
The second comment seems wrong to me.
Is it?
How do "submarines" compare to "aircraft" in this? They both use propellors to push the fluid around. And for that matter does it make any difference if you're "squirting" the fluid?
Intuitively it seems to me easy for a sea being to move around by squirting water; intuitively it would seem all-but impossible for the Goodyear Blimp to move around by squirting air? (There seems to be "less" of the air; the drag doesn't seem relevant?)
• I would imagine it has more to do with the relative densities. The acceleration comes from the force pair between the submarine expelling the water and the water pusing back on the submarine. Since water is much denser than air you would need to accelerate a much higher volume of air to create the same pushback force. – cal Mar 22 '19 at 16:52
• got you @cal . Are you essentially saying the drag doesn't matter much ? (Which makes sense to me.) – Fattie Mar 22 '19 at 17:08
I believe I misunderstood the problem; after some thought the second comment does make sense. Suppose these blimp animals are spewing out air at a speed $$s$$ (in m/s) continuously (although not necessarily at a constant rate). For this to happen the creature must be putting power into the air: $$P= \frac{1}{2}\rho s^{2} \frac{dV}{dt}$$ As I mentioned, since this is a pair of forces, the creature experiences a driving force in return: $$F = \frac{1}{2}\rho s \frac{dV}{dt}$$ Since the driving force and the drag both involve the density of the medium, the terminal velocity of the creature will not: $$v_{t}^{2} = \frac{s}{C_{d}A}\left(\frac{dV}{dt}\right)$$ This "formulation" is admittedly making a lot of assumptions, but I think it helps illustrate the point in the second comment that the maximum velocity of this animal isn't dependent on the density of the fluid it is moving through. It is, however, a function of the shape of the animal and the way in which it spits out the fluid.
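Spelling out the last step under the same assumptions (quadratic drag with drag coefficient $$C_d$$ and frontal area $$A$$; this balance is only implicit in the answer above): at terminal velocity the thrust equals the drag,
$$\frac{1}{2}\rho C_{d} A\, v_{t}^{2} = \frac{1}{2}\rho\, s\,\frac{dV}{dt} \quad\Rightarrow\quad v_{t}^{2}=\frac{s}{C_{d}A}\left(\frac{dV}{dt}\right),$$
so the fluid density cancels from both sides.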
https://codereview.stackexchange.com/questions/149787/generic-sliding-average-in-c?noredirect=1
# Generic sliding average in C++
I tried hard to come up with a function template computing a sliding average. A sliding average over data $x_1, x_2, \dots, x_n$ with window length $k \leq n$ is a sequence $y_1, y_2, \dots, y_{n - k + 1}$, where $$y_i = \frac{1}{k}\sum_{j = 0}^{k - 1} x_{i + j}.$$
Here is my attempt:
mymath.h
#ifndef MYMATH_H
#define MYMATH_H
#include <iterator>
#include <sstream>
#include <stdexcept>
template<typename ForwardIterator, typename OutputIterator>
void sliding_average(ForwardIterator begin,
ForwardIterator end,
OutputIterator output,
size_t window_length)
{
if (window_length == 0)
{
std::stringstream ss;
ss << window_length;
throw std::runtime_error{ss.str()};
}
using T = typename std::iterator_traits<ForwardIterator>::value_type;
ForwardIterator finger = begin;
T sum {};
size_t count = 0;
while (finger != end and count < window_length)
{
sum += *finger++;
count++;
}
if (count < window_length)
{
std::stringstream ss;
ss << "The length of the range (";
ss << count;
ss << ") is too short. Must be at least ";
ss << window_length;
throw std::runtime_error{ss.str()};
}
*output++ = sum / window_length;
ForwardIterator window_tail = begin;
while (finger != end)
{
sum -= *window_tail++;
sum += *finger++;
*output++ = sum / window_length;
}
}
#endif // MYMATH_H
main.cpp
#include "mymath.h"
#include <iostream>
#include <iterator>
using std::cout;
using std::endl;
using std::begin;
using std::end;
int main(int argc, const char * argv[]) {
float input[15];
for (size_t i = 0; i < 15; ++i)
{
input[i] = i + 1;
}
float output[11];
sliding_average(begin(input), end(input), begin(output), 5);
for (auto& a : output)
{
cout << a << " ";
}
cout << endl;
return 0;
}
Critique request
I would like to receive comments regarding how to make my implementation more generic, and how to make it more idiomatic. Other comments are welcome as well.
Very good. There are small things that could be improved.
It is possible to give window_length type std::iterator_traits<ForwardIterator>::difference_type. I think std::size_t is fine for most cases.
ForwardIterator begin,
ForwardIterator end
Usually those are called first and last.
if (window_length == 0)
{
std::stringstream ss;
ss << window_length;
throw std::runtime_error{ss.str()};
}
Well, the result of ss.str() is obvious :) It is possible to write constexpr there, or throw the string right into the constructor. Also, runtime_error is a good fit, but it has child called invalid_argument, which perfectly matches the case.
if (count < window_length)
{
std::stringstream ss;
ss << "The length of the range (";
ss << count;
ss << ") is too short. Must be at least ";
ss << window_length;
throw std::runtime_error{ss.str()};
}
I think that using std::stringstream is an overkill here. Throwing just "The length of the range is too short. It must be at least of length window_length" is pretty good by itself, since most IDEs will probably stop execution, so that programmers could have a look. Even if they had a catch for this, they would need to parse a string to be actually able to do something. I don't think it worth the troubles it brings.
Some caveats:
Currently if T = int the algorithm is going to produce somewhat incorrect results. May be you could write something like warning mechanism that will warn when integer type is used. I would consider #pragma message("your warning message here"). It might get portability problems but the code will still compile since unrecognized #pragmas are ignored.
• I would define two types
using input_type = typename std::iterator_traits<ForwardIterator>::value_type;
using output_type = typename std::iterator_traits<OutputIterator>::value_type;
You then perform the summation with input_type and cast to output_type before division.
input_type sum {};
...
*output++ = static_cast<output_type>(sum) / window_length;
This way you get floating-point division if you want floating-point output and integer division if you want integer output.
• Your exceptions are too verbose for my liking. I would change them to the following:
throw std::runtime_error{"window_length must be greater than 0"};
and
throw std::runtime_error{"Input size must be greater than or equal to window_length"};
• Naming
I would change finger to window_end and window_tail to window_begin. Also possibly change window_length to window_size.
• I believe that overload resolution of operator/ should make it work without explicit cast, if there is no 2 step casting (one explicit + one implicit) being implied. – Incomputable Dec 14 '16 at 0:53
• @Incomputable Do not let the template fool you. If input is an array of int and output is an array of double then sum is of type int. Therefore, the expression sum / window_length is computed in integer arithmetic even though output holds double. My suggestion is that if the output holds double or float then use floating-point division. On the other hand, if output holds an integral type then use integer division irrespective of what input holds. – twohundredping Dec 14 '16 at 7:33
• If you keep output as float, change input to int, and put in a more interesting sequence like say the Fibonacci numbers then you will see the difference between the two approaches. My approach boils down to using as much precision as is available to you. – twohundredping Dec 14 '16 at 7:36
• Thanks, didn't consider that. Though I believe there should be some way to opt in for that cast, because otherwise I think it might be hiding something dangerous. – Incomputable Dec 14 '16 at 16:05
• @Incomputable I wouldn't muddy up the interface with such details. In my opinion the user is implicitly asking for floating-precision if they pass a floating-precision type in as output. This may not always be true, but you cannot please everyone. The user can post-process to remove decimal precision if they want. Regardless, I will edit the answer to address this cast in more detail. – twohundredping Dec 14 '16 at 18:36
https://www.shaalaa.com/question-bank-solutions/p-two-blocks-b-mass-ma-mb-respectively-are-kept-contact-frictionless-table-experimenter-pushes-block-newton-s-second-law-motion_66476
# Two Blocks A and B of Mass mA and mB, Respectively, Are Kept in Contact on a Frictionless Table. The Experimenter Pushes Block A from Behind - Physics
Two blocks A and B of mass mA and mB , respectively, are kept in contact on a frictionless table. The experimenter pushes block A from behind, so that the blocks accelerate. If block A exerts force F on block B, what is the force exerted by the experimenter on block A?
#### Solution
Let F' = force exerted by the experimenter on block A and F be the force exerted by block A on block B.
Let a be the acceleration produced in the system.
For block A,
$F' - F = m_A a$ ...(1)
For block B,
F = mBa ...(2)
Dividing equation (1) by (2), we get:
$\frac{F'}{F} - 1 = \frac{m_A}{m_B}$
$\Rightarrow F' = F\left( 1 + \frac{m_A}{m_B} \right)$
∴ Force exerted by the experimenter on block A is
$F\left( 1 + \frac{m_A}{m_B} \right)$
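As a quick cross-check (not part of the original solution), treating the two blocks as a single system of mass $m_A + m_B$ gives the same result:
$F' = (m_A + m_B)a$ and $F = m_B a$, so $F' = F\,\frac{m_A+m_B}{m_B} = F\left(1 + \frac{m_A}{m_B}\right)$.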
Concept: Newton’s Second Law of Motion
#### APPEARS IN
HC Verma Class 11, Class 12 Concepts of Physics Vol. 1
Chapter 5 Newton's Laws of Motion
Q 7 | Page 79
https://aakashdigitalsrv1.meritnation.com/ask-answer/question/find-the-distance-between-the-point-3-4-6-and-its-image-in-t/introduction-to-three-dimensional-geometry/13174089
# Find the distance between the point (-3,4,-6) and its image in the XY plane. Pls answer fast.. don't give any links.. pls give explanation
If a point of an object has coordinates (x, y, z) then the image of this point (as reflected by a mirror in the xy plane) has coordinates (x, y, -z).
Hence the coordinate of the image of the point(-3,4,-6) in xy-plane will be (-3,4,6).
So the distance between them will be
$$d = \sqrt{(x_2-x_1)^2 + (y_2-y_1)^2 + (z_2-z_1)^2} = \sqrt{(-3+3)^2 + (4-4)^2 + (6+6)^2} = \sqrt{144} = 12.$$
https://www.physicsforums.com/threads/a-differential-equation-x-2-u-k-u.254487/
A differential equation : x^2 * u'' = k*u
1. Sep 7, 2008
obomov2
Other than guessing, what is the formal way of solving the following types of DE:
x^2 * u'' = k*u
or more generally :
x^n * u'' + x^m * u' = k*u
u is a function of x.
thanks.
2. Sep 8, 2008
HallsofIvy
Staff Emeritus
The first is an "Euler type" or "equidimensional" equation. The change of variable t = ln(x) turns it into an equation with constant coefficients:
$$\frac{du}{dx}= \frac{du}{dt}\frac{dt}{dx}= \frac{1}{x}\frac{du}{dt}$$
$$\frac{d^2u}{dx^2}= \frac{d }{dx}(\frac{1}{x}\frac{du}{dt})$$
$$= -\frac{1}{x^2}\frac{du}{dt}+ \frac{1}{x}\frac{d}{dx}\frac{du}{dt}$$
and rewriting that last d/dx as (1/x) d/dt introduces another 1/x:
$$= -\frac{1}{x^2}\frac{du}{dt}+ \frac{1}{x^2}\frac{d^2u}{dt^2}$$
Thus
$$x^2\frac{d^2u}{dx^2}= x^2\left(\frac{1}{x^2}\frac{d^2u}{dt^2}- \frac{1}{x^2}\frac{du}{dt}\right)$$
$$= \frac{d^2u}{dt^2}- \frac{du}{dt}$$
So the first equation is just
$$\frac{d^2u}{dt^2}- \frac{du}{dt}= ku$$
a linear equation with constant coefficients which has characteristic equation
$$r^2- r- k= 0$$
That has roots
$$r= \frac{1\pm\sqrt{1+4k}}{2}$$
so the general solution of the equation in terms of t is
$$u(t)= e^{t/2}\left(C_1e^{\frac{\sqrt{1+4k}}{2}t}+ C_2e^{-\frac{\sqrt{1+4k}}{2}t}\right)$$
In terms of x,
$$u(x)= e^{(\ln x)/2}\left(C_1e^{\frac{\sqrt{1+4k}}{2}\ln x}+ C_2e^{-\frac{\sqrt{1+4k}}{2}\ln x}\right)$$
$$= \sqrt{x}\left(C_1x^{\frac{\sqrt{1+4k}}{2}}+ C_2x^{-\frac{\sqrt{1+4k}}{2}}\right)$$
There is no general method for the general equation.
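As a quick consistency check, one can also substitute $$u = x^r$$ directly into the original equation:
$$x^{2}u'' = r(r-1)x^{r} = kx^{r} \;\Rightarrow\; r^{2}-r-k=0,$$
the same characteristic equation, hence the same exponents $$r=\frac{1\pm\sqrt{1+4k}}{2}$$ and the same general solution as above.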
3. Sep 8, 2008
obomov2
That was nice. Thanks.
https://www.physicsforums.com/threads/3rd-isomorphism-theory.118252/
# 3rd Isomorphism Theory
1. Apr 21, 2006
### moo5003
Problem:
" Prove (Third Isomorphism THeorem) If M and N are normal subgroups of G and N < or = to M, that (G/N)/(M/N) is isomorphic to G/M."
Work done so far:
Using just the definitions, I have simplified (G/N)/(M/N) to (GM/N). Now, using the First Isomorphism Theorem, I want to show that a homomorphism Phi from GM to G/M exists such that the kernel of Phi is N.
I constructed phi such that GM -> G/M
where it sends all x |----> xN.
My problem is as follows: How do I know xN is actually in the set G/M? It may just be that I'm going about the proof in a way that is more complicated than it should be. Any help would be greatly appreciated.
2. Apr 21, 2006
### moo5003
Alright, I've been looking at some online proofs and I can see where I went wrong. I should have constructed a phi from G/N to G/M.
My only question is how to show that the phi sending gN to gM is onto G/M. I was looking at the proofs online and they didn't seem to make any sense on this part.
3. Apr 21, 2006
### matt grime
The map is, I presume, the one induced by sending g to [g], its coset in G/M. This is surjective. N is in the kernel, so it factors as G --> G/N --> G/M, and the second map must also be surjective.
Thinking more concretely, each and every coset of M is a union of cosets of N, so your map from G/N to G/M just identifies these cosets of N.
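Spelling the induced map out (a standard argument): define phi: G/N -> G/M by phi(gN) = gM. This is well defined because N ≤ M, it is clearly onto, and its kernel is {gN : g in M} = M/N, so the First Isomorphism Theorem gives (G/N)/(M/N) isomorphic to G/M.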
https://mathematica.stackexchange.com/questions/224729/laplace-equation-with-mixed-boundary-conditions
# Laplace equation with mixed boundary conditions
I am trying to solve the Laplace equation in 2D on the square [2,3]x[2,3] with mixed boundary conditions. I did:
ClearAll[y, x1, x2];
pde = Laplacian[y[x1, x2], {x1, x2}];
bc = {y[x1, 2] == 2 + x1, y[x1, 3] == 3 + x1};
sol = NDSolve[{pde ==
NeumannValue[-1, x1 == 2] + NeumannValue[1, x1 == 3], bc},
y, {x1, 2, 3}, {x2, 2, 3}]
Plot3D[Evaluate[y[x1, x2] /. sol], {x1, 2, 3}, {x2, 2, 3},
PlotRange -> All, AxesLabel -> {"x1", "X2", "y[x1,x2]"},
BaseStyle -> 12]
The exact solution is y=x1+x2; the problem is that the result is not very accurate when I evaluate the error.
• The exact solution is y=x1+x2 Are you sure about this? How does this solution satisfy the Neumann boundary conditions? – Nasser Jun 25 at 20:09
• @Nasser Erm. The function does satisfy the Neumann boundary condition: Its derivative in x1-direction is 1 and the sign flops stems from the fact that Neumann conditions are phrased in terms of outward normals... No? – Henrik Schumacher Jun 25 at 23:34
• @user62716 Using NeumannValue requires one to do integration by parts and one has to be careful about the signs. Try switching the sign of the Laplacian to pde = -Laplacian[y[x1, x2], {x1, x2}];. Then it should work. – Henrik Schumacher Jun 25 at 23:40
• @HenrikSchumacher is NeumannValue[-1, x1 == 2] different from saying that $\frac{\partial y}{\partial x_1}$ evaluated at $x_1=2$ is $-1$? And since the claim is that the solution is $y=x_1+x_2$ then $\frac{\partial y}{\partial x_1}=1$ this is evaluated at $x=2$ is $1$ and not $-1$?. How do you translate NeumannValue[-1, x1 == 2] to normal derivative then? I just did direct translation. May be we need a whole new topic on this. On top of all of this, moving NeumannValue from RHS to LHS changes the solution. I never liked NeumannValue and prefer to use normal derivatives... – Nasser Jun 26 at 0:46
• @Nasser $\frac{\partial y}{\partial \nu} (2,x_2) = - \frac{\partial y}{\partial x_1} (2,x_2)$ because the outward normal at the point $(2,x_2)$ is $\nu = (-1 , 0)$. But I agree that NeumannValue is a bit counter intuitive, but it makes perfect sense in regard of the weak formulation that is used in FEM. – Henrik Schumacher Jun 26 at 4:59
Relatively recently, Wolfram has created a nice Heat Transfer Tutorial and a Heat Transfer Verification Manual. I model with many codes and I usually start the Verification and Validation manual and build complexity from there. It is always embarrassing to build a complex model and find that your setup does not pass verification.
The Laplace equation is special case of the heat equation so we should be able to use a verified example as a template for a properly constructed model.
For NeumannValue's, if the flux is into the domain, it is positive. If the flux is out of the domain, it is negative.
At the tutorial link, they define a function HeatTransferModel to create operators for a variety of heat transfer cases that I shall reproduce here:
ClearAll[HeatTransferModel]
HeatTransferModel[T_, X_List, k_, ρ_, Cp_, Velocity_, Source_] :=
Module[{V, Q, a = k},
V = If[Velocity === "NoFlow", 0, ρ*Cp*Velocity.Inactive[Grad][T, X]]; (* convective term; form as in the Wolfram heat-transfer tutorial *)
Q = If[Source === "NoSource", 0, Source];
If[FreeQ[a, _?VectorQ], a = a*IdentityMatrix[Length[X]]];
If[VectorQ[a], a = DiagonalMatrix[a]];
a = PiecewiseExpand[Piecewise[{{-a, True}}]];
Inactive[Div][a.Inactive[Grad][T, X], X] + V - Q]
If we follow the recipe of tutorial, we should be able to construct and solve a PDE system free of sign errors as I show in the following workflow.
(* Create a Domain *)
Ω2D = Rectangle[{2, 2}, {3, 3}];
(* Create parametric PDE operator *)
pop = HeatTransferModel[y[x1, x2], {x1, x2}, k, ρ, Cp, "NoFlow",
"NoSource"];
(* Replace k parameter *)
op = pop /. {k -> 1};
(* Setup flux conditions *)
nv2 = NeumannValue[-1, x1 == 2];
nv3 = NeumannValue[1, x1 == 3];
(* Setup Dirichlet Conditions *)
dc2 = DirichletCondition[y[x1, x2] == 2 + x1, x2 == 2];
dc3 = DirichletCondition[y[x1, x2] == 3 + x1, x2 == 3];
(* Create PDE system *)
pde = {op == nv2 + nv3, dc2, dc3};
(* Solve and Plot *)
yfun = NDSolveValue[pde, y, {x1, x2} ∈ Ω2D]
Plot3D[Evaluate[yfun[x1, x2]], {x1, x2} ∈ Ω2D,
PlotRange -> All, AxesLabel -> {"x1", "x2", "y[x1,x2]"},
BaseStyle -> 12]
You can test that the solution matches that exact solution over the entire range:
Manipulate[
Plot[{x1 + x2, yfun[x1, x2]}, {x1, 2, 3}, PlotRange -> All,
AxesLabel -> {"x1", "y[x1,x2]"}, BaseStyle -> 12,
PlotStyle -> {Red,
Directive[Green, Opacity[0.75], Thickness[0.015], Dashed]}], {x2,
2, 3}, ControlPlacement -> Top]
• Dear Tim Laska, thank you for your great help, can we evaluate the error and plot it? – user62716 Jun 26 at 9:47
• I did it plot = Plot3D[ Abs[yfun[x1, x2] - (x1 + x2)], {x1, x2} [Element] [CapitalOmega]2D, PlotRange -> All, AxesLabel -> {"x1", "x2", "y[x1,x2]"}, PlotLabel -> err] – user62716 Jun 26 at 10:03
• Dear Tim Laska, I have other problem, Poisson equation with variable coefficients,shall post it in new question or here? – user62716 Jun 26 at 11:48
• @user62716 You should open a new question as it appears that you have. I will try to take a look at your other question when I can. – Tim Laska Jun 26 at 13:45
• Thank you Tim, I will be waiting. Best regards – user62716 Jun 26 at 13:50
By reversing the sign of the derivative on the left side from that given in NeumannValue, this can be solved by Mathematica analytically as well.
ClearAll[y, x1, x2];
pde = Laplacian[y[x1, x2], {x1, x2}] == 0;
bc = {y[x1, 2] == 2 + x1,
y[x1, 3] == 3 + x1,
Derivative[1, 0][y][2, x2] == 1,
Derivative[1, 0][y][3, x2] == 1};
solA = DSolve[{pde, bc}, y[x1, x2], {x1, x2}];
solA = solA /. {K[1] -> n,Infinity -> 20};
solA = Activate[solA];
Plot3D[y[x1, x2] /. solA, {x1, 2, 3}, {x2, 2, 3}, PlotRange -> All,
AxesLabel -> {"x1", "X2", "y[x1,x2]"}, BaseStyle -> 12]
The BC as given above are correct, and Mathematica's analytical solution is correct also, but I agree it can be simpler.
There might be a way to simplify the infinite Fourier sum given, but I could not find it.
To show the above formulation is correct, here is Maple's solution, using the same BC. Maple is able to give the simpler form of the solution, which is $$y=x_1+x_2$$.
restart;
pde:=VectorCalculus:-Laplacian(y(x1,x2),[x1,x2])=0;
bc:=y(x1,2)=2+x1,y(x1,3)=3+x1,D[1](y)(2,x2)=1,D[1](y)(3,x2)=1;
sol:=pdsolve([pde,bc],y(x1,x2))
We just have to remember, that negative NeumannValue on left edge, means positive derivative on that edge.
• Dear Nasser, thank you for your comments, the normal derivative at left side is -1 not 1, the above analytic solution is complicated since the exact is just y=x1+x2....thanks – user62716 Jun 26 at 9:38
• the normal derivative at left side is -1 not 1 no. It is +1. you set NeumannValue to be -1. Since NeumannValue points outwards, then this means the deivative is +1. Since -1 outwards, means +1 inwards. In addition, if you change the derivative (not NeumannValue) in the code I posted from +1 to -1 you will see the solution is no longer y=x1+x2 but becomes non-linear. You can compare this solution with the numerical solution. Do you see any difference? I agree the solution has complicated Fourier series sum, but this is what Mathemtica gave for the analytical solution. – Nasser Jun 26 at 10:01
• Dear Nasser, I still cannot understand you. @Nasser: $\frac{\partial y}{\partial \nu}(2,x_2)=-\frac{\partial y}{\partial x_1}(2,x_2)$ because the outward normal at the point $(2,x_2)$ is $\nu=(-1,0)$, so it is $-1$ on the left, pointing outwards. – user62716 Jun 26 at 11:16
• Dear Nasser, the code of Tim Laska is working, I highly appreciate you and you always help me and provide perfect answer. – user62716 Jun 26 at 11:19
• @user62716 you can see from the solution itself, i.e. from just looking at the plot, that the derivative is positive on the left edge. No math is needed if we look at the solution. The slope is moving upwards. So positive slope. You can also see from Maple solution I posted, that I used positive derivative to get same solution $x_1+x_2$ right there. NeumannValue is not the same as derivative. That is what the whole confusion was about. – Nasser Jun 26 at 11:33
https://manual.q-chem.com/5.4/subsec_dual-dyn.html
# 4.7.2 Dual-Basis Dynamics
(May 16, 2021)
The ability to compute SCF and MP2 energies and forces at reduced cost makes dual-basis calculations attractive for ab initio molecular dynamics simulations, which are described in Section 9.9. Dual-basis BOMD has demonstrated savings of 58%, even relative to state-of-the-art, Fock-extrapolated BOMD. Savings are further increased to 71% for dual-basis RI-MP2 dynamics. Notably, these timings outperform estimates of extended Lagrangian (“Car-Parrinello”) dynamics, without detrimental energy conservation artifacts that are sometimes observed in the latter.
Two algorithm improvements make modest but worthwhile improvements to dual-basis dynamics. First, the iterative, small-basis calculation can benefit from Fock matrix extrapolation. Second, extrapolation of the response equations (“$Z$-vector” equations) for nuclear forces further increases efficiency. (See Section 9.9.) Q-Chem automatically adjusts to extrapolate in the proper basis set when DUAL_BASIS_ENERGY is activated.
http://www.r-bloggers.com/2011/12/page/25/
# Monthly Archives: December 2011
## Comparing model selection methods
December 2, 2011
The standard textbook analysis of different model selection methods, like cross-validation or validation sample, focus on their ability to estimate in-sample, conditional or expected test error. However, the other interesting question is to compare the...
## O'Reilly's Data Science Kit – Books
December 2, 2011
It is not as if I don't have enough books (and material on the web) to read. But this list compiled by the O'Reilly team should make any data analyst salivate. http://shop.oreilly.com/category/deals/data-science-kit.do The Books and Video included in the...
## Easy cell statistics for factorial designs
December 2, 2011
A common task when analyzing multi-group designs is obtaining descriptive statistics for various cells and cell combinations. There are many functions that can help you accomplish this, including aggregate() and by() in the base installation, summaryBy() in the doBy package, and … Continue reading →
## Wasting away again in Martingaleville
December 1, 2011
Alright, I better start with an apology for the title of this post. I know, it's really bad. But let's get on to the good stuff, or, perhaps more accurately, the really frightening stuff. The plot shown at the top of this post is a simulation of the martingale betting strategy. You'll find code for
## Backtesting with Short positions
December 1, 2011
I want to illustrate Backtesting with Short positions using an interesting strategy introduced by Woodshedder in the Simple, Long-Term Indicator Near to Giving Short Signal post. This strategy was also analyzed in details by MarketSci in Woodshedder's Long-Term Indicator post. The strategy uses the 5 day rate of change (ROC5) and the 252 day rate
## Interviews on Revolution R Enterprise 5.0
December 1, 2011
For those looking for more background behind the updates in Revolution R Enterprise 5.0, there are now a couple of interviews online where I talk about the new release. At IT Business Edge ("Revolution Analytics' Goal: Make R Analysis Enterprise-Friendly"), I had a chat with Loraine Lawson about how Revolution R Enterprise fits within the analytics stack, its big-data...
## A Friday round-up
December 1, 2011
Just a brief selection of items that caught my eye this week. Note that this is a Friday as opposed to Friday, lest you mistake this for a new, regular feature. 1. R/statistics ggbio A new Bioconductor package which builds on the excellent ggplot graphics library, for the visualization of biological data. R development master
https://math.eretrandre.org/tetrationforum/showthread.php?tid=396&pid=4356&mode=threaded
Transseries, nest-series, and other exotic series representations for tetration
mike3 (Long Time Fellow, Posts: 368, Threads: 44, Joined: Sep 2009), 11/28/2009, 06:36 AM (This post was last modified: 11/28/2009, 06:37 AM by mike3.)
(11/28/2009, 04:56 AM) andydude Wrote: What are "magic" coefficients?
On this page: http://eom.springer.de/s/s087230.htm there's a formula for the "Mittag-Leffler expansion in a star", which is not a Taylor series, but a different type of series that is a sum of polynomials that converges over a whole star (it's explained on the page -- and contrast this with a Taylor series, which only converges in a circle when the function is not entire). It looks like two nested sums: $f(z) = \sum_{n=0}^{\infty} \sum_{\nu=0}^{k_n} c_{\nu}^{(n)} \frac{f^{(\nu)}(a)}{\nu!} (z - a)^{\nu}$ (and is a special case of the "nested series" and "transseries" I mention in the thread title). The "magic" numbers are the polynomial degrees $k_n$ and the coefficients $c_{\nu}^{(n)}$ on the terms. According to the site these are "independent of the form of $f(z)$ and can be evaluated once and for all", yet how to do this is not explained.
Messages In This Thread Transseries, nest-series, and other exotic series representations for tetration - by mike3 - 11/26/2009, 09:46 AM RE: Transseries, nest-series, and other exotic series representations for tetration - by Daniel - 11/26/2009, 03:57 PM RE: Transseries, nest-series, and other exotic series representations for tetration - by bo198214 - 11/26/2009, 04:42 PM RE: Transseries, nest-series, and other exotic series representations for tetration - by Daniel - 11/29/2009, 09:09 AM RE: Transseries, nest-series, and other exotic series representations for tetration - by bo198214 - 11/29/2009, 09:38 AM RE: Transseries, nest-series, and other exotic series representations for tetration - by Daniel - 12/01/2009, 02:56 AM RE: Transseries, nest-series, and other exotic series representations for tetration - by bo198214 - 12/01/2009, 09:08 AM RE: Transseries, nest-series, and other exotic series representations for tetration - by tommy1729 - 12/01/2009, 10:22 PM RE: Transseries, nest-series, and other exotic series representations for tetration - by mike3 - 11/27/2009, 01:29 AM RE: Transseries, nest-series, and other exotic series representations for tetration - by andydude - 11/28/2009, 04:56 AM RE: Transseries, nest-series, and other exotic series representations for tetration - by mike3 - 11/28/2009, 06:36 AM RE: Transseries, nest-series, and other exotic series representations for tetration - by mike3 - 11/28/2009, 06:50 AM RE: Transseries, nest-series, and other exotic series representations for tetration - by kobi_78 - 12/14/2009, 07:17 PM
Possibly Related Threads... Thread Author Replies Views Last Post Calculating the residues of $$\beta$$; Laurent series; and Mittag-Leffler JmsNxn 0 129 10/29/2021, 11:44 PM Last Post: JmsNxn Trying to find a fast converging series of normalization constants; plus a recap JmsNxn 0 139 10/26/2021, 02:12 AM Last Post: JmsNxn Reducing beta tetration to an asymptotic series, and a pull back JmsNxn 2 697 07/22/2021, 03:37 AM Last Post: JmsNxn Perhaps a new series for log^0.5(x) Gottfried 3 4,383 03/21/2020, 08:28 AM Last Post: Daniel Taylor series of i[x] Xorter 12 22,954 02/20/2018, 09:55 PM Last Post: Xorter An explicit series for the tetration of a complex height Vladimir Reshetnikov 13 24,089 01/14/2017, 09:09 PM Last Post: Vladimir Reshetnikov Complaining about MSE ; attitude against tetration and iteration series ! tommy1729 0 3,279 12/26/2016, 03:01 AM Last Post: tommy1729 2 fixpoints , 1 period --> method of iteration series tommy1729 0 3,335 12/21/2016, 01:27 PM Last Post: tommy1729 Taylor series of cheta Xorter 13 25,147 08/28/2016, 08:52 PM Last Post: sheldonison Tetration series for integer exponent. Can you find the pattern? marraco 20 29,803 02/21/2016, 03:27 PM Last Post: marraco
|
2021-12-01 04:18:13
|
https://www.physicsforums.com/threads/klein-gordan-eqn.250120/
|
# Klein-Gordon eqn
#### captain
I am having trouble understanding why the Klein-Gordon eqn is accepted to describe spin 0 particles since it gives the wrong statistical interpretation (such as negative probabilities).
#### Fra
I am having trouble understanding why the Klein-Gordon eqn is accepted to describe spin 0 particles since it gives the wrong statistical interpretation (such as negative probabilities).
Why have we not observed any supposedly fundamental spin-0 particles in nature? :)
(higgs hasn't been observed)
/Fredrik
#### clem
I am having trouble understanding why the Klein-Gordon eqn is accepted to describe spin 0 particles since it gives the wrong statistical interpretation (such as negative probabilities).
The Klein-Gordon equation has to be second quantized to get to a QM wave function.
As a one particle equation, $$\psi^*\psi$$ relates to the charge density.
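For context (standard textbook material, not part of the original posts): the conserved current of the Klein-Gordon equation gives the density

$$\rho = \frac{i\hbar}{2mc^2}\left(\psi^{*}\frac{\partial \psi}{\partial t} - \psi\frac{\partial \psi^{*}}{\partial t}\right),$$

which, unlike $$\psi^*\psi$$ in the Schrödinger case, is not positive definite (negative-frequency solutions make it negative), so it cannot serve as a probability density; after multiplication by the charge it is instead interpreted as a charge density, and a consistent probabilistic interpretation only emerges once the field is second quantized.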
|
2019-05-19 14:21:30
|
https://btechgeeks.com/du-sol-b-com-3rd-year-human-resource-management-notes-chapter-5/
|
# DU SOL B.Com 3rd Year Human Resource Management Notes Chapter 5 Human Resource Planning (Or Manpower Planning)
## DU SOL B.Com 3rd Year Human Resource Management Notes Chapter 5 Human Resource Planning (Or Manpower Planning)
Question 1.
What is manpower planning ? Discuss its objectives.
Or
What are the objects of human resource planning ?
Human Resource Planning Meaning and Objects :
In order to understand manpower planning, or human resource planning, we should understand the two terms: manpower and planning.
Manpower is the human resource employed in any enterprise. The manpower resource is the most vital factor for the survival and prosperity of a firm. An efficient management will always think of procuring or developing adequate talent for various positions in the organisation. Planning is the thinking process, the organised foresight, the vision based on facts and experience that is required for intelligent action. Planning means determining what to do, how to do it, when to do it and who is to do it.
Manpower planning may be defined as a strategy for the procurement, development, allocation and utilisation of an enterprise’s human resources. One of the functions of human resource management is the procurement of employees in sufficient number. The success of the organisation depends upon the right type of persons being placed on the job. It is the responsibility of human resource management to see whether qualified personnel have been placed on the job in sufficient number. This requires planning.
Manpower planning is the planning for manpower resources. Manpower planning ensures adequate supplies, proper quantity and quality, as well as effective utilisation of human resources. Thomas H. Patten defines manpower planning as “the process by which an organisation ensures that it has the right number of people and the right kind of people at the right place at the right time, doing things for which they are economically most useful”.
In the words of Flippo, “An executive manpower planning programme can be defined as an appraisal of an organisation’s ability to perpetuate itself with respect to its management as a determination of measures necessary to provide the essential executive talent.” According to Geisler, “Manpower planning is the process by which a firm ensures the right number of people and the right kind of people at the right place at the right time doing things for which they are economically most useful.”
Thus, manpower planning is the process of developing and determining objective, policies and programmes that will develop, utilize and distribute manpower so as to achieve the goals of the organisation.
The definitions of manpower planning suggest the following features:
1. It aims at ascertaining the manpower needs of the organisation both quantitatively and qualitatively.
2. It includes an inventory of present manpower to determine the status of the present supply of available personnel and to discover developed talent within the organisation.
3. Manpower planning, like other planning, is forward looking or future oriented. It forecasts the needs of the future manpower inventory.
4. Manpower planning must focus not only on the people involved but also on the working conditions and relationship in which they work.
5. The basic purpose of the human resource planning is to make optimum utilisation of organisation’s current and future human resources.
6. It is the primary responsibility of the management to ensure effective utilisation of human resources.
Thus manpower planning ensures that the required personnel of the required skill are available at the right time. Manpower planning helps in both the selection and developmental activities as it ensures that adequate persons are selected well in advance. This would ensure smooth growth of the organisation.
Need Or Objectives Of Manpower Planning :
The following are the main objectives of manpower planning in an organisation –
1. To ensure optimum use of available manpower – In the process of planning, the personnel department takes stock of the present work force and their characteristics. Vacancies should be filled in from among the existing manpower working at the lower level taking into account the characteristics of the persons concerned and the job requirements. Thus available personnel can be employed fruitfully on the job.
2. Forecasting of the future requirements – At the time of taking a decision for the expansion of the plant, it is necessary to assess the future need of manpower. The management should also take stock of the presently available manpower in the organisation and then decide whether the new responsibilities should be given to the existing persons or to new recruits who are well qualified in the field. Forecasting thus helps in filling the right type of job with the right type of man.
If expansion is not planned, changes in the organisation like discharge, retirement, lay-off, retrenchment, demotion, separation etc. all create the need for additional workers. They cannot be made available at once. This all requires manpower planning so that the right type of man may be made available at the right time.
3. Cope with changes – Manpower planning enables the enterprise to cope with changes in competitive forces, markets, technology, products and government regulations. Such changes often change the job contents, skill demands, and number and type of personnel in the organisation. To cope with the changes, manpower planning suggests new training programmes to enable the existing personnel to share the responsibility of the changed jobs.
4. Help in recruitment and Selection – Sound manpower planning helps in the recruitment and selection of the right type of man for the right job, at the right time. The personnel department, in the process of planning, comes to know what type of person is to be recruited. It may recruit the persons after proper scrutiny. The rate of labour turnover is also reduced by effective manpower planning.
5. Maintaining Production level – Manpower planning helps in maintaining the production level. Labour turnover, absenteeism, illness and leave of workers all reduce the strength of the workforce. Manpower planning estimates all these hazards beforehand and maintains the production level by arranging for the shortfall in the existing manpower. Two types of analysis are important in this connection – work load analysis and work force analysis.
6. Effective Employee development programme – An effective employee development programme cannot be worked out unless it is linked with the manpower requirement of the organisation. While developing a development programme, the talent, abilities and motives of the individuals as well as of the organisation should be taken into consideration. An effective manpower planning also aims at making the employee development programme effective.
7. Other objectives – Other objectives of manpower planning may be – (a) establishing good industrial relations, (b) reduction in labour costs, (c) coping with the national policy on employment, (d) linking human resource planing with organisation planning (e) identifying areas of surplus personnel so that corrective measures may be taken in time, (f) meeting needs of expansion and diversification.
Question 2.
Explain the importance of manpower planning.
Importance Of Human Resource Planning:
A sound personnel policy requires that there should be an adequate number of persons of the right type to attain its objectives. Personnel objectives cannot be achieved without proper manpower planning. The importance of manpower planning can be judged from the following benefits:
1. Increase in the size of Business – Manpower planning is very helpful when there is expansion of the plant. At the time of taking the decision for expansion of the plant, a large number of workers are required to be recruited. For this purpose a stock of the existing manpower should be taken and future need of the personnel should be assessed. It is very essential to know whether presonnel are to be recruited from outside or from inside and how the training facilities are to be arranged. For all this manpower planning is essential.
2. Effective recruitment and selection policy – Manpower planning helps in formulating effective recruitment and selection policy. Manpower planning is concerned with the right type of people from all sources to meet planned requirements. Manpower planning anticipates manpower needs to develop the existing manpower to fill the future gaps. Thus only right man on the right job at the right time may be recruited and selected.
3. Effective employee development programme – Manpower planning reveals the training needs of the working manpower with the result that training and development programmes become more effective. No effective employee development programme can be worked out unless it is linked with the manpower requirements of the organisation.
4. Reduction in labour cost – Manpower planning ensures recruitment and maintenance of a better developed manpower resource, which results in reduced manpower costs. Forecasting of long-term manpower needs helps the management to forecast the compensation costs involved.
5. Efficient work force – Manpower planning ensures on the one hand, development of personnel at work and on the other hand, high morale of the personnel. Manpower planning motivates the existing employees and creates favourable psychological climate for motivation. Management succession gets the best contribution from the workers.
6. Avoiding disruption in production – Manpower planning may help the organisation in procuring skilled and qualified workers because future needs of personnel may be estimated and they are selected and trained on the basis of a well developed selection and training policy, thus ensuring uninterrupted production.
7. Good Industrial Relations – Manpower planning helps the management in developing good industrial relations. With the help of manpower planning, management may plan to absorb the redundant workers into some new jobs after training, in case redundancies of workers are caused by automation or any other reason.
8. National policy on employment – National policy on employment does not allow any employer to oust a worker once employed by the organisation. It is very essential to recruit the workers carefully according to the needs of the enterprise. Only manpower planning can help the organisation in this regard.
9. Replacement of Employees. Employees who retire, die, resign or become incapacitated need immediate replacements to avoid disruption in production. Provision for replacement of personnel can be made only on the basis of human resource planning.
10. Technological Progress. Human resource planning is helpful in the effective utilisation of technological progress. To meet the challenges of new technology, existing employees are to be retrained or new employees are to be recruited.
Question 3.
What are the prerequisites for manpower planning ?
Prerequisites For Manpower Planning:
The implementation and development of manpower planning need following prerequisites:
1. Goals or Objectives of Business – Every business enterprise has some goals or objectives. The manpower planning must be integrated with business policies as regards profitability, production, sales and development of resources. Any change in business objectives would certainly affect the manpower planning. For example, a company decides to introduce a computer system in the enterprise. This change will affect the manpower planning, i.e., the company will have to recruit computer operators or train its existing employees in computer science. Thus, determination of business objectives clearly in advance is a prerequisite for effective manpower planning.
2. Support of Top-level management – Manpower planning must have the initiative and support of top level management. The personnel manager, as a staff authority, can only advise or guide the top management; he cannot implement decisions. Action on decisions or suggestions of the personnel manager is to be taken only at the initiative of top executives. Thus support of top management is a must for effective manpower planning.
3. Well organised human resource department – Manpower planning requires forecasting the requirements and development of the personnel. For this purpose, there is a need of a well organised human resource department. This department collects, records, analyses, interprets and maintains the facts and figures relating to all the personnel in the organisation.
4. Determination of related personnel policies – Determination of personnel policies regarding promotion, transfer, wages, fringe benefits, training, leaves etc. is a prerequisite for manpower planning. Without these policies manpower planning will be of little use.
5. Responsibility – The responsibility of manpower planning should be assigned to some responsible senior personnel. He should be provided all figures relating to the planning.
6. Fixing Planning Period – Planning is concerned with the problems of the future. The planning period is divided into short term and long term. The planning period depends on the nature of the business and the social, economic and political environment. Long-term planning is preferable for basic and heavy industries. Consumer goods industries may not resort to long term plans. The other important factors are the rate of population growth, education and training facilities, cost of training, etc.
7. Manpower standards – In order to avoid the problems of overstaffing and understaffing, the optimum manpower standards should be determined on the basis of prevailing standards in similar organisations, past experiences and work measurement. These factors will reduce the cost of production, will increase the quality of production and will help in the preparation of manpower plans.
Question 4.
Explain the process of human resource planning.
Or
Discuss the steps in the process of human resource planning.
Process Of Human Resource Planning:
The process of manpower planning consists of the following steps:
1. Analysing Organisational Objectives and Plans. The first step in the process of human resource planning is to analyse the organisational objectives and plans. The ultimate objective of manpower planning is one of matching employee abilities to enterprise requirements with an emphasis on future instead of present arrangements. Objectives may be short term or long term.
Further, organisational plans concerning technology, production, marketing, finance, expansion and diversification should be analysed in order to have an idea about the volume of future work activity. The plans may further be analysed into sub-plans and detailed programmes. The future organisation structure and job design should be made clear, and any change in the organisation structure should be examined so as to anticipate the manpower requirement. A company’s plans are based on the economic forecast, the company’s sales and expansion forecast, and the labour market forecasts.
2. Preparing Manpower Inventory. The main purpose of human resource planning is to avoid the situation of over-staffing and under-staffing, and for this purpose, a stock of existing manpower is to be assessed.
Manpower inventory refers to the assessment of the present and the potential capabilities of present employees, qualitatively and quantitatively. It reveals the degree to which these capabilities are employed optimally and helps to identify the gaps that exist or that are likely to arise in the firm’s human resources. Preparation of a manpower inventory involves determination of the personnel to be inventoried, cataloguing of factual background information on each individual, systematic appraisal of each individual and listing the present and potential abilities and aptitudes of each.
3. Forecasting Manpower Needs or Demands. Forecasting of the future manpower requirement is the most important part of manpower planning. The forecasting is made on the basis of corporate and functional plans, future activity levels, and future needs for human resources in the organisation. The number of people and the skill levels needed in future depend on production and sales budgets, workload analysis, work-force analysis, estimated absenteeism and labour turnover, etc. For a given level of operation, other factors like the technology used, make or buy decisions, job contents, behavioural patterns and control systems also matter. It is thus necessary to make projections for the new posts to be created and the vacancies arising in the current manpower. The forecasting should be done both qualitatively and quantitatively, depending upon business objectives.
The major determinants of future human resource demands are –
• Employment Trends. Trends in the company’s manpower can be judged by examining the changes in the payroll over the last five years within each group. By this examination, expansion or contraction may be measured.
• Replacement Needs. This need arises due to death, retirement, resignation, and termination of employees. The replacement needs may relate to specific manpower groups, e.g. supervisory, clerical, skilled, etc. This can be assessed on the basis of past experience and future retirement situations.
• Productivity. Improvements in productivity influence manpower planning. Gains in productivity will decrease the requirement of manpower, and vice-versa.
• Absenteeism. The demand for manpower depends upon the rate of absenteeism. If it is high, steps should be taken to reduce it to the minimum.
• Growth and Expansion. Company’s growth plans and expansion programme should be carefully analysed to judge their impact on human resource requirements in future. Steps must be taken for procuring or developing the talent required.
4. Expected Loss of Manpower. From the present stock of manpower, a discount should be given for the likely changes in manpower during the period of planning. Potential losses of human resources may be caused by death, disability, dismissals, resignations, promotions, transfers, retrenchments or lay-offs, terminations, ill health, absenteeism, deputation, etc. The potential loss of workers should be studied in order to make an estimate of the future needs of the work-force.
5. Estimating Manpower Gaps. A comparison between the existing work-force and the projected work-force or manpower demands should be made to identify the gap between the demand and supply of the work-force. It will reveal either a surplus or a deficit of work-force in future. A deficit suggests the number of persons to be recruited from outside, whereas a surplus implies redundant staff to be redeployed or terminated. Similarly, gaps may occur in terms of knowledge, skill and aptitudes. Employees who are deficient qualitatively can be trained, whereas employees with higher skills can be redeployed to other jobs requiring higher skills.
6. Action Planning. Once the manpower gaps are identified, action plans are developed to bridge the gaps. Action plan to meet the surplus manpower may be prepared. The surplus manpower can either be redeployed in other department/units or can be retrenched. However, retrenchment should be made only in consultation with trade unions. People may be persuaded to quit voluntarily through golden hand shake.
Deficit, on the other hand, can be met through recruitment, selection, transfer, promotions and training plans. Realistic plans for the procurement and development of manpower should be made to ensure a continuing supply of trained people to take over jobs as and when they fall vacant, either by promotion or recruitment or through training. In this way, redundancies and shortages of manpower can be avoided in the long run. Necessary modifications in the plans may be made if manpower market situation warrants.
7. Monitoring and Control. Once the action plans are implemented, the human resource system and structure need to be reviewed and regulated periodically. Zero base budgeting may be used to encourage managers to justify their plans. The monitoring and control phase involves allocation and utilisation of human resources over time. Review of manpower plans and programmes reveals the surpluses or deficiencies. Corrective actions may be taken immediately to remove the surplus or deficiency. Necessary modifications in action plans may be made in the light of the changing environment and needs of the organisation. An appraisal of manpower plans serves as a guide in future manpower planning.
Question 5.
Briefly discuss the quantitative and qualitative aspects of human resource planning.
Quantitative And Qualitative Aspects Of Human Resource Planning:
The analysis of manpower planning leads to two broad aspects of the subject, viz. the quantitative aspect and the qualitative aspect. The quantitative aspect is concerned with the determination of the right number of personnel required for each type of job in the organisation, and the qualitative aspect relates to specifying the quality of personnel on each job, laying down the educational and professional qualifications, work experience, psychological traits, etc. We shall now discuss these two aspects of manpower planning.
Quantitative Aspect of Manpower Planning: The quantitative aspect of manpower planning relates to forecasting the demand and supply of manpower and filling up the gap, if any, on the basis of manpower productivity, capacity utilisation and costs, so as to identify needs for improvement in productivity or reduction in costs. For this purpose various action plans are prepared and implemented. Manpower budgets are prepared for setting standards and monitoring the implementation of manpower plans.
A. Demand Forecasting Techniques. Demand forecasting is the process of estimating the number of personnel required in future, taking into account the corporate and functional plans and the future activity level of the personnel in the organisation. In a manufacturing concern, the sales budget is translated into a manufacturing plan giving the number and types of products to be made in each period. But the human resource requirements for a given level of operations vary depending upon the production technology, process, make or buy decisions of the management, job contents, behaviour patterns and control systems.
There are three basic demand forecasting techniques –
1. Managerial Judgement
2. Statistical Techniques
3. Work study techniques
In many cases, a combination of the above techniques may be used.
1. Managerial Judgement. Under this technique, the experienced managers at the top level estimate the future needs of different departments on the basis of their knowledge of expected future work-load and employee efficiency. The top management takes the advice of the different concerned departments. These forecasts are reviewed and agreed with the departmental managers. This may be known as the top-down approach.
Alternatively, a ‘bottom-up’ approach may also be considered. Under this approach the departmental managers estimate the workload of their respective departments and decide the number of people they need in future. They submit the proposals to the top management for approval. Both these techniques may sometimes be combined to get the best results. This is a very simple and time saving method but is not suitable for large concerns because of its subjective character.
2. Statistical Techniques. The most commonly used statistical manpower forecasting technique is ratio-trend analysis.
(i) Ratio-trend Analysis. Under this method certain ratios (e.g. total output to total number of workers, total sales to total sales persons, direct workers to indirect workers) are calculated, with allowance made for absenteeism, overtime, idle time, labour turnover, etc. The following example illustrates the procedure –
Illustration. Suppose a cement factory aims to produce 50,000 tons of cement during 2007-08. The standard manhours required to produce one tonne of cement are estimated to be 10. On the basis of past experience the factory estimates that on an average, the worker can contribute 2500 hours per year. The total work load and the number of workers required may be estimated as follows –
1. Budgeted production output – 50,000 tonnes
2. Standard manhours required per tonne – 10 hours
3. Total manhours required for producing 50,000 tonnes ((i) × (ii)) – 5,00,000 hours
4. Manhours available per worker during the year – 2,500 hours
5. Number of workers required ((iii) / (iv)) – 200 workers
Thus, 200 workers will be needed in 2007-08 to achieve the production
target of 50,000 tonnes of cement. However, this is not a reliable estimate, because a number of factors such as absenteeism, availability of raw materials, power breakdowns, strikes, lockouts, etc. influence the production schedule, and allowance should be given for these factors. Gaps in the existing workforce cannot be considered on the basis of the above estimate unless a work force analysis is made.
(ii) Work force Analysis. All the existing workers are not likely to be available every day throughout the year. So, an allowance is made for absenteeism, labour turnover, and other contingencies. If we assume that on an average 5 percent of the workforce will remain absent and another 5 percent is lost due to resignation, retirement, deaths, terminations etc., then 10 percent additional workforce should be provided for. In the above illustration, 10% more than the actual workforce required on the job should be recruited. In this way 220 workers are required during the year. This analysis involves a detailed study of the past performance, past behaviour and retirement date of each and every employee. Such an analysis is called work force analysis.
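A compact way to check the arithmetic of the illustration above — a rough sketch in Python, where the 10% figure is simply the combined allowance for absenteeism and turnover assumed in the text, not a general rule:

```python
# Figures taken from the illustration in the text.
budgeted_output_tonnes = 50_000
std_manhours_per_tonne = 10
manhours_per_worker_per_year = 2_500

total_manhours = budgeted_output_tonnes * std_manhours_per_tonne     # 500,000 hours
workers_on_the_job = total_manhours // manhours_per_worker_per_year  # 200 workers
allowance = 0.10                                                     # absenteeism + turnover margin
workers_to_recruit = round(workers_on_the_job * (1 + allowance))     # 220 workers
print(workers_on_the_job, workers_to_recruit)                        # 200 220
```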
B. Supply Forecasting Techniques. Supply forecasting measures the quantity of manpower that is likely to be available to fill up the vacant posts. Such sources of labour supply may be within or outside the organisation. The supply analysis covers –
1. Existing manpower resources,
2. Potential losses to existing resources through labour wastage.
3. Potential changes to existing resources through promotion, transfer, etc.
4. Effect of changes in conditions of work and absenteeism.
5. Sources of supply within the organisation.
1. Existing Manpower Resources. The first step in supply analysis is to identify the existing workforce by function, department, occupation, level of skill and status, just to identify the resource centres consisting of broadly homogeneous groups from which to make supply forecasts.
Such analysis reveals the number of employees internally available if needed in future having special abilities and skills. It is just to know how many people will be available for promotion internally and where they can be found.
An analysis by age is also important to avoid problems arising from a sudden rush of retirements, a block in promotion prospects or a preponderance of older employees. A length-of-service analysis is also important because it will provide evidence of survival rates, which are a necessary tool for use by planners in predicting future resources.
The study of existing ratios between different categories of staff is also important to know the areas where rapid changes are seen and which may result in manpower supply problem.
2. Labour wastage. Labour wastage due to labour leaving the organisation should be analysed in order to forecast future losses and to identify the reasons for leaving the organisation. Plans should be drawn up to replace uncontrollable losses. The following are certain techniques to measure such losses –
(i) Labour Turnover Index. The one traditional formula for measuring wastage is labour turnover index which is given below –
$$\frac{\text { No.of leavers in a specified period }}{\text { Average no. of employees during the same period }} \times 100$$
The method is in common use because it is easy to calculate and to understand. The formula can also be misleading. The main objection to this measurement of labour turnover is that the figure may be inflated by the high turnover of a relatively small proportion of the labour force.
The labour wastage percentage is suspect if the average number of workers employed, upon which this percentage is based, is unrepresentative of recent trends because of considerable increases or decreases during the period in the number employed.
(ii) Labour stability Index. It is an improvement over the labour turn-over index. Labour stability index is shown as below –
$$\frac{\text { No. with one year’s service or more }}{\text { No. employed within the year }} \times 100$$
The formula shows the tendency of workers to stay in the organisation and therefore shows the degree to which there is continuity of employment.
(iii) Length of service Analysis. The analysis is made to know the average length of service of people who leave the organisation. This also gives an index of labour turnover. It is also crude and not fair because it only deals with the total number of people who leave the organisation. A more refined analysis would be to calculate such an index for each category of employees and then compare them with previous figures.
(iv) Survival Rate. The survival rate of employees is the proportion of employees recruited within a certain period who remain with the firm after so many months or years of service. Thus, if the analysis finds that of the workers who completed their apprenticeship during the last two years only 50 percent are with the company, it means the survival rate is 50 percent, and the company has to train 100 workers during the next five years if its requirement is only 50 workers.
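The three wastage measures above are simple ratios. The sketch below (Python, with made-up illustrative figures) shows how each one is computed:

```python
# Hypothetical figures, chosen only to illustrate the three formulas.
leavers_in_period = 30
average_employed = 200
with_one_year_service_or_more = 170
employed_within_year = 200
recruited_two_years_ago = 40
of_those_still_with_firm = 20

labour_turnover_index = leavers_in_period / average_employed * 100                   # 15.0 %
labour_stability_index = with_one_year_service_or_more / employed_within_year * 100  # 85.0 %
survival_rate = of_those_still_with_firm / recruited_two_years_ago * 100             # 50.0 %
print(labour_turnover_index, labour_stability_index, survival_rate)
```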
3. Internal Promotions and Transfers. The supply forecast should indicate the number of vacancies that will have to be filled to meet the demand forecast. Vacancies in the organisation arise because people leave the organisation or due to expansion of the organisation. The vacancies are filled up by transfer or promotion within the department/organisation, which may produce a chain reaction of replacements. In a large organisation, persistent patterns of promotion and transfer may develop and it may be possible to predict the proportion of employees who are likely to be promoted or moved in future, starting a chain reaction. For this purpose, management succession planning should be worked out in the organisation by reference to known retirements and transfers.
4. Changing Conditions of Work and Absenteeism. A study should also be undertaken to assess the effect of changes in working conditions on work and absenteeism. This may cover factors like changes in weekly working hours, overtime policy, length and timing of holidays, retirement policy, policy for employing part-timers, and the shift system. The effect of absenteeism on the future supply of labour should also be studied, and trends in absenteeism should be analysed to trace causes and identify possible remedial actions.
5. Sources of Supply. Sources of labour supply, internal as well as external, should also be established. Internal sources include the output from internal training schemes or the management development programmes and the reservoirs of skills and potential that exist within the organisation. When developing expansion plans, outside sources should also be explored. Thus, inside and outside availability of labour supply should be used when preparing development plans. If skilled or desirable persons are not available internally or externally, actions may be taken to develop or redevelop training or retraining programmes to upgrade the available manpower to meet the company’s needs. There are many local and national factors which have a bearing on the supply of manpower.
Qualitative Aspect of Manpower Planning. Having done the exercise of determining the number of manpower for each job in the organisation, the next step is to determine the quality of the people required for each individual job. The quality of manpower required varies from job to job. Therefore, the quality of employees required for a job can be determined after determining the job requirements. The nature of the job helps in determining the minimum acceptable qualities of the person to be put on the job.
This aspect is the qualitative aspect of manpower planning. The process of determining the nature of the job together with the minimum acceptable qualities on the part of the personnel required for adequate performance of the job is termed job analysis. To quote Edwin B. Flippo – “Job analysis is the process of studying and collecting information relating to the operations and responsibilities of a specific job.” The immediate products of this analysis are the job description and the job specification. A job description is a summary of the tasks, duties and responsibilities in a job. Basically, the job description indicates what is done, why it is done, where it is done and, briefly, how it is done. In other words, it sets performance standards telling what performance the job demands. The employee must know what is expected and what is below or above standard so that he may perform better.
A job specification, on the other hand, is a statement of the minimum acceptable qualities (educational qualifications, mental abilities, special qualifications, physique, etc.) necessary to perform a job properly. It designates the qualities required for acceptable performance. The major use of the job specification is to guide the recruiting and selecting of people to fill jobs.
Question 6.
Discuss the problems in human resource planning. How can these problems be tackled successfully ?
Problems In Or Limitations Of Human Resource Planning:
The problems in the process of human resource planning are as follows –
1. Inaccuracy – Human resource planning forecasts the demand for and supply of manpower during the plan period. Forecasts can never be a cent per cent accurate projection. The longer the time horizon, the greater are the chances of inaccuracy. Inaccuracy may be higher when departmental forecasts are aggregated without critical review or where variables in the environment are ignored.
2. Time and costs involved – Manpower planning is a time consuming and expensive exercise. A good deal of time and costs are involved in data collection and their analysis to make forecasting. Being a costly affair, only large sized firms can resort to manpower planning.
3. Resistance by Employees and Employers – Employees and trade unions feel that manpower planning is a futile and useless exercise. They feel that due to large scale unemployment, people will be available as and when required. Moreover, they feel that the employer tries to increase their workload through manpower planning, which regulates them through productivity bargaining.
Employers also resist manpower planning because they feel that it increases the cost of labour. Managers and human resource planners do not fully understand the human resource planning process and lack a strong sense of purpose.
4. Inefficient Information System – In most of the Indian industries, the human resource information system is not satisfactory. In the absence of reliable human resource data, it is not possible to develop fully the human resource plans.
5. Uncertainties – There are certain uncertainties or constraints in the way of human resource planning. These are absenteeism, labour turnover, seasonal employment, technological changes, market fluctuations, etc. It is, therefore, risky to depend upon general estimates of manpower because of the rapid changes in the internal and external environment.
6. No Top Management Support – There is a lack of support and commitment from the top management. In the absence of support of the top management, the human resource experts find it difficult to carry out the manpower plans in their true spirit. Sometimes it happens that the process is started with great fanfare but is not sustained due to lack of support from the top. In some cases, sophisticated human resource technologies are adopted only because rivals have introduced them. These may not yield results unless matched with the needs and environment of the particular industry or enterprise.
7. Too much focus on Quantitative Aspect – In some enterprises, too much emphasis is laid on the quantitative aspect of human resource planning to ensure a smooth flow of people in and out of the organisation. This sometimes overlooks the more important qualitative aspect of manpower planning, i.e., the quality of human resources, career planning and development, skill development, morale, etc.
Thus, limitations of human resource planning arise both from inherent limitations of forecasting and from human weaknesses.
Making Human Resource Planning Effective:
We have just studied various problems faced by, a human resource expert in manpower planning. The following steps may be taken to make the human resource planning effective –
1. Proper Organisation of Human Resource Functions – The human resource planning functions should be well organised. A separate cell, section, division or committee may be constituted within the human resource department to provide adequate focus and to coordinate the planning efforts at various levels.
2. Support from the Top – Top management must support and be committed to human resource planning. Before starting any human resource planning process, top management must be consulted and its commitment should be ensured. Moreover, the exercise should be carried out within the budget allocation. Other constraints should also be considered in detail. It is really useless to formulate plans which cannot be implemented due to lack of financial and other support from the management.
3. Participation – For the successful human resource planning, active participation of operative executives is required. If possible, trade union support should also be sought. Such participation will help to improve understanding of the process and thereby reduce resistance.
4. Information System – A systematic information system or data base should be developed in order to facilitate the human resource planning.
5. Tailor made – Human resource plans should be balanced with the corporate plans of the enterprise. The methods and techniques of human resource planning should be commensurate with the corporate objectives, strategies and environment.
6. Balanced Focus – The quantity and quality aspects should be equally stressed. The stress in filling future vacancies should be to recruit the right people for the right jobs rather than to match the existing staff with existing jobs. Promotion of existing staff should be considered carefully.
|
2021-03-03 03:18:18
|
https://en.wikibooks.org/wiki/Geometry/Chapter_16
|
# Geometry/Chapter 16
A construction is when one constructs a figure (such as a square, an angle bisector, etc.) with a compass and a straightedge.
## Constructing a Congruent Segment to a Given Segment
To construct a segment congruent to a given segment, set your compass to the distance between the two endpoints of the given segment. Then, from a starting point on the second line, mark off the same distance using your compass. This segment is congruent to the given segment.
## Equilateral Triangle
An example of a construction is the construction of an equilateral triangle with two intersecting circles. Two circles are drawn with the same radius, each centered at an endpoint of a segment, so that each circle passes through the center of the other.
Then segments are drawn from each center to one intersection point of the two circles, and between the two centers.
Now we have an equilateral triangle.
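The construction can be checked with coordinates. The sketch below (Python with NumPy; the segment AB is placed on the x-axis purely for convenience) computes the upper intersection point of the two circles and confirms that all three sides come out equal:

```python
import numpy as np

# Circles of radius |AB| centered at A and at B meet at C; triangle ABC is equilateral.
A = np.array([0.0, 0.0])
B = np.array([1.0, 0.0])
r = np.linalg.norm(B - A)                             # both circles use radius |AB|
mid = (A + B) / 2
h = np.sqrt(r**2 - (np.linalg.norm(B - A) / 2)**2)    # height of the intersection above AB
C = mid + np.array([0.0, h])                          # upper intersection of the two circles
print(np.linalg.norm(B - A), np.linalg.norm(C - A), np.linalg.norm(C - B))  # all approximately 1.0
```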
## Angle Bisector
To construct an angle bisector: 1. Put the center of the compass on the vertex of the angle and mark an intersection on each side.
2. Put the center of the compass on one of the marked intersections and draw an arc in the interior of the angle. Repeat this step from the other intersection so that the two arcs cross.
3. Draw a line connecting the point where the arcs cross and the vertex of the angle.
## Perpendicular Bisector
To construct the perpendicular bisector of a segment, given the segment AB, first draw a circle centered at point A with radius more than half the length of AB. Next draw with your compass a circle with the same radius as the first circle but centered at point B. Lastly, connect the points, called C and D, where the two circles intersect. The segment CD is the perpendicular bisector. Reasoning: any point that is the same distance from the endpoints of a segment lies on the perpendicular bisector of that segment. Given two such points you can draw that line.
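The same reasoning can be verified numerically. This sketch (Python with NumPy; A and B are arbitrary example points) computes the two intersection points C and D and checks that C is equidistant from A and B and that CD is perpendicular to AB:

```python
import numpy as np

# Circles of equal radius centered at A and B intersect at C and D;
# the line CD is the perpendicular bisector of AB.
A, B = np.array([0.0, 0.0]), np.array([4.0, 2.0])
d = np.linalg.norm(B - A)
r = d                                    # any radius greater than |AB|/2 works; |AB| is a safe choice
mid = (A + B) / 2
u = (B - A) / d                          # unit vector along AB
n = np.array([-u[1], u[0]])              # unit normal to AB
h = np.sqrt(r**2 - (d / 2)**2)           # distance from the midpoint to each intersection point
C, D = mid + h * n, mid - h * n
print(np.isclose(np.linalg.norm(C - A), np.linalg.norm(C - B)))  # True: C is equidistant from A and B
print(np.isclose(np.dot(D - C, B - A), 0.0))                     # True: CD is perpendicular to AB
```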
## Constructing a Congruent Angle to a Given Angle
To construct a congruent angle to a given angle, first draw a segment on which you are going to construct the congruent angle. Place a point on this segment that will be the vertex point of the new congruent angle. Draw an arc (part of a circle) that crosses both sides of the original angle. Then, using the same setting as for this arc, draw an arc from the point on the segment so that this arc has the same radius as the previous arc. Next use your compass to measure the distance between the points where the first arc crosses the sides of the angle. Now, going to the second arc with the same setting, mark the same distance on this arc as on the first arc. Connect this point with the original point on the segment.
## Constructing a Perpendicular Line through a Point on a Line
Given a point on a line, first draw a circle using this point as its center. Using the two points where the circle intersects the line as centers, draw two congruent arcs that intersect each other. The point where the arcs intersect is the same distance from both of those intersection points and thus lies on their perpendicular bisector, as does the original point. Connect these two points to complete the construction.
## Constructing a Perpendicular to a Line From a Point Not on That Line
To construct a perpendicular to a line from a point not on that line, use the point as the center of an arc that intersects the line in two points. Your original point will lie on the perpendicular bisector of the segment formed by these two new points. Using these points as the centers of two congruent arcs that intersect one another you get another point on this same perpendicular bisector. Connect this new point with your original point to finish this construction.
## Constructing a Triangle congruent to a Given Triangle(SSS Method)
To construct a triangle congruent to a given triangle, first construct a base side in the same way as constructing a congruent segment. Measuring a second side of the given triangle with the compass draw an arc from one end of the constructed segment. Setting the compass to the length of the third side of the given triangle go to the second point of the constructed segment and draw another arc whose radius is the same as the third side and intersects the first arc. Connect this point of intersection with the endpoints of the constructed segment to finish the congruent triangle.
## Constructing a Triangle Congruent to a Given Triangle (the SAS Method)
To construct a triangle congruent to a given triangle using the Side Angle Side method, you must first construct an angle congruent to one of the given angles of the first triangle. Then use the compass to measure one of the sides of the first triangle that is next to that angle and mark off the same length along one side of the new angle. Then measure the side on the other side of the angle of the first triangle and use it to mark off the other side of the new triangle. Connect these two constructed points to make the third side of the new triangle.
|
2016-06-29 14:36:19
|
https://cs.stackexchange.com/questions/130288/what-is-the-complexity-of-k-clique-problem-with-a-predetermined-vertex-in-the-so
|
# What is the complexity of k-clique problem with a predetermined vertex in the solution?
Clique (from WikiPedia):
Clique is a subset of vertices of an undirected graph such that every two distinct vertices in the clique are adjacent; that is, its induced subgraph is complete.
K-Clique problem: Finding a clique of size K. This is NP-complete according to Wiki,
Cliques have also been studied in computer science: finding whether there is a clique of a given size in a graph (the clique problem) is NP-complete, but despite this hardness result many algorithms for finding cliques have been studied.
Let us consider a "constrained k-clique problem" - which is a k-clique problem with a constraint of having a predetermined vertex included in the solution. What would be the complexity of this problem? Is it a known problem in the literature?
It is still $$\mathsf{NP}$$-complete. Consider the following reduction from the normal $$\mathrm{Clique}$$ problem: Given a graph $$G$$ and some desired clique size $$k$$, add a new vertex $$v^\ast$$ which is connected to all vertices of $$G$$ to obtain a new graph $$G^\ast$$. Then $$(G^\ast, k + 1, v^\ast)$$ is a yes-instance of your modified $$\mathrm{Clique}$$ problem if and only if $$(G, k)$$ is a yes-instance of $$\mathrm{Clique}$$.
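A small sketch of the reduction (plain Python with a brute-force clique check, workable only for tiny graphs; the vertex name "v*" and the example graph are made up for illustration):

```python
from itertools import combinations

def has_clique_containing(adj, k, v):
    """Brute force: is there a clique of size k that contains vertex v?"""
    neighbours = [u for u in adj if u != v and u in adj[v]]
    for group in combinations(neighbours, k - 1):
        vertices = (v,) + group
        if all(b in adj[a] for a, b in combinations(vertices, 2)):
            return True
    return False

def add_universal_vertex(adj, v_star="v*"):
    """The reduction: build G* by adding v* adjacent to every vertex of G."""
    g_star = {u: set(nbrs) | {v_star} for u, nbrs in adj.items()}
    g_star[v_star] = set(adj.keys())
    return g_star, v_star

# Example: G is a triangle {0, 1, 2} plus an isolated vertex 3.
G = {0: {1, 2}, 1: {0, 2}, 2: {0, 1}, 3: set()}
G_star, v_star = add_universal_vertex(G)
# (G, 3) is a yes-instance of Clique  <=>  (G*, 4, v*) is a yes-instance of the constrained problem.
print(has_clique_containing(G_star, 4, v_star))   # True, e.g. {v*, 0, 1, 2}
```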
|
2021-04-19 16:46:21
|
https://www.physicsforums.com/threads/a-question-about-amplitudes.711701/
|
The amplitude for a state $|\psi\rangle$ to be in the state $|\chi\rangle$, with both states represented as vectors in a complex Hilbert space, is a complex number whose modulus squared gives the probability that a system in the state $|\psi\rangle$ is found to be in the state $|\chi\rangle$ after performing a suitable measurement. My question is, is it possible to assign an amplitude between two subspaces of a Hilbert space, rather than just two vectors in a Hilbert space?
The reason I'm asking is that in practice physicists often do exactly that. For example, in discussions about spin physicists will talk about, say, the amplitude for an electron that's spin up in the z direction to be spin up in the x direction. But $|+z\rangle$ and $|+x\rangle$ aren't vectors in the electron's Hilbert space (which includes position eigenstates tensored with spin eigenstates), they're subspaces of the Hilbert space. Yet there doesn't appear to be any problem with assigning an amplitude between them.
I can give numerous other examples of this. It really is commonplace. Yet I haven't seen any procedure for constructing an inner product between subspaces in any textbook.
fzero
Homework Helper
Gold Member
The amplitude you are concerned with is equal to the inner product ##\langle \chi | \psi \rangle##, so it is better to speak of inner products. We can compute, as here, the inner product between two states, and as you say, the modulus squared ##|\langle \chi | \psi \rangle|^2## is the probability to find the system in the state ##|\chi\rangle## given that it was in the state ##|\psi\rangle## before the measurement.
Given an appropriate orthonormal basis, it is even possible to express the probability that we find the system in one of the states of a subspace of the total space. Recall that, given a complete set of states ##|n\rangle## forming an orthonormal basis for a Hilbert space ##\mathcal{H}##, we can write the identity operator as
$$\hat{1} = \sum_{n=1}^N | n \rangle \langle n|.$$
If we act on a normalized state ##|\psi\rangle## with this and then take the modulus squared, we can write
$$1 = \sum_{n=1}^N | \langle n| \psi \rangle |^2,$$
This is a fancy way of saying that, if we make a measurement on ##|\psi\rangle##, we are bound to find the system in one of the state in the total Hilbert space ##\mathcal{H}##.
Now if we have a subspace ##A \subset \mathcal{H}##, with orthonormal basis ##|a\rangle##, then we can always write
$$|n \rangle = \sum_{a=1}^{N_A} c_{na} | a\rangle + \sum_{\alpha=1}^{N-N_A} c_{n\alpha} | \alpha \rangle,$$
where ## \langle a | \alpha \rangle = 0 ## for all ##a,\alpha##. You should be able to convince yourself that
$$P_A = \sum_a | a \rangle \langle a|$$
is a projection operator from ##\mathcal{H}## to ##A##. Therefore if we want to compute the probability that we find the system in the subspace ##A## given a measurement on some state ##|\psi\rangle##, we can compute
$$\mathrm{Prob}(A|\psi) = \left| P_A | \psi \rangle \right|^2 = \sum_a | \langle a| \psi \rangle |^2.$$
If we're told that ##|\psi \rangle## belongs to some other subspace ##X## that intersects ##A##, then we can write
$$| \psi \rangle = \sum_{x=1}^{N_X} \psi_x | x \rangle,$$
as well as
$$\mathrm{Prob}(A|\psi) = \sum_{a,x} |\psi_x|^2 | \langle a | x \rangle |^2.$$
The quantities ##|\langle a | x \rangle |^2 ## are the probabilities that we find the system in a particular basis state of ##A## given that the system started in a basis state of ##X##. This collection of numbers (which can be written as a matrix) is probably as close as we can get to your "amplitude between two subspaces." These quantities are closely related to the same expressions that appear in any change of basis.
Also note that if we want to recover a c-number, rather than a matrix, we really need to supply the coefficients ##|\psi_x|^2## that specify the original state up to phase factors.
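A concrete numerical version of this — a toy sketch in Python/NumPy, where a two-level "position" factor stands in for the full position Hilbert space — projects the state |site0>|+z> onto the "spin up along x" subspace and recovers the familiar probability 1/2:

```python
import numpy as np

# Toy model: H = C^2 ("position", two sites) tensor C^2 (spin), so dim H = 4.
up_z = np.array([1.0, 0.0])
up_x = np.array([1.0, 1.0]) / np.sqrt(2)
site0 = np.array([1.0, 0.0])
site1 = np.array([0.0, 1.0])

# Subspace A = "spin up along x", spanned by |site0>|+x> and |site1>|+x>.
basis_A = [np.kron(site0, up_x), np.kron(site1, up_x)]
P_A = sum(np.outer(a, a.conj()) for a in basis_A)   # projector onto A

psi = np.kron(site0, up_z)                          # electron at site 0, spin up along z
prob = np.linalg.norm(P_A @ psi) ** 2               # |P_A |psi>|^2
print(round(prob, 10))                              # 0.5 = |<+x|+z>|^2, independent of the position factor
```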
|
2021-05-13 15:56:52
|
https://core.ac.uk/display/2183933
## $J/\psi$ production and elliptic flow parameter $v_2$ at LHC energy
### Abstract
We apply the recombination model to study $J/\psi$ production and its elliptic flow in the region of $10<p_T<20$ GeV/$c$ at LHC energy. We show the distribution of $J/\psi$ as a function of the transverse momentum $p_T$ and the azimuthal angle $\phi$. If the contribution from the recombination of shower partons from two neighboring jets cannot be ignored due to the high jet density at LHC, the elliptic flow parameter $v_2$ of $J/\psi$ is predicted to decrease with $p_T$.
Comment: 8 pages; 6 figures
Topics: High Energy Physics - Phenomenology
Year: 2011
DOI identifier: 10.1088/0031-8949/84/03/035202
OAI identifier: oai:arXiv.org:1103.3346
http://www.hpmuseum.org/forum/thread-10111.html
Generate the largest Prime Number
02-06-2018, 03:05 AM
Post: #1
Gamo Member Posts: 183 Joined: Dec 2016
Generate the largest Prime Number
Is there a way to create program on HP RPN programmable calculator to generate the largest possible digits of prime number? Just like generating the Decimal digits of Pi or e
If possible just wondering what is the largest Prime number that scientific calculator can generate.
Gamo
02-06-2018, 04:13 AM (This post was last modified: 02-06-2018 04:14 AM by Mike (Stgt).)
Post: #2
Mike (Stgt) Member Posts: 298 Joined: Jan 2014
RE: Generate the largest Prime Number
(02-06-2018 03:05 AM)Gamo Wrote: Is there a way to create program on HP RPN programmable calculator to generate the largest possible digits of prime number?
Yes, there is a way to create program on HP RPN programmable calculator to generate the largest possible digits of prime number. You ask for digits of a prime number, not the number itself. With the base 10 all possible digits are 0..9 (for example in prime 123456789059) where 9 and 8 are the largest ones.
If it's about prime numbers try primesieve. While waiting for a result consider what's your goal by using an HP RPN programmable calculator for prime numbers. Or digits thereof.
Ciao.....Mike
02-06-2018, 08:04 PM
Post: #3
Dieter Senior Member Posts: 1,831 Joined: Dec 2013
RE: Generate the largest Prime Number
(02-06-2018 03:05 AM)Gamo Wrote: Is there a way to create program on HP RPN programmable calculator to generate the largest possible digits of prime number? Just like generating the Decimal digits of Pi or e
The largest possible digits ?
In a decimal number the largest possible digit is 9. But I assume you knew that already. ;-)
(02-06-2018 03:05 AM)Gamo Wrote: If possible just wondering what is the largest Prime number that scientific calculator can generate.
It depends. On a standard 10-digit calculator the last digit of all numbers ≥ 10^10 is implicitly zero, so these cannot be prime. The largest prime below this threshold is 9999999967. But on a regular RPN calculator it will take some time to confirm this.
Dieter
02-07-2018, 12:34 AM (This post was last modified: 02-07-2018 12:34 AM by Gamo.)
Post: #4
Gamo Member Posts: 183 Joined: Dec 2016
RE: Generate the largest Prime Number
For generating the digits of a prime by means of adding a set of digits to get the next prime.
For example:
If this is the maximum digits that can show on screen x,xxx,xxx,xxx write this down and program can generate the next prime for the next set of digits.
x,xxx,xxx,xxx
x,xxx,xxx,xxx,yyy,yyy......
I'm not sure how programs on a computer work when they try to generate the largest prime.
Gamo
02-07-2018, 03:39 AM
Post: #5
Mike (Stgt) Member Posts: 298 Joined: Jan 2014
RE: Generate the largest Prime Number
(02-07-2018 12:34 AM)Gamo Wrote: For generating the digits of prime by mean of adding set of digits for next prime.
[...]
x,xxx,xxx,xxx
x,xxx,xxx,xxx,yyy,yyy......
Never heard of such a procedure. What about all the primes between x,xxx,xxx,xxx and x,xxx,xxx,xxx,yyy,yyy? For example, the first prime is 2; juxtapose the digit 3 to get 23 and then another 3 to get 233. Is that what you want? Please explain in more detail.
There may exist some prime 'breeding' procedures I don't know. Widely known are the sieve algorithms: Eratosthenes, Sundaram, and Atkin. Basically they fill a field with candidates, kick out step by step those that are not prime, and 'harvest' the remaining primes. An example in 'pseudocode' (psst ... it is REXX):
Code:
/* SOE EXEC: Sieve of Eratosthenes */
p. = 1                          /* "fill" the index field */
z = 999                         /* limit search */
say 2                           /* display first prime */
do n = 3 to z by 2              /* test only odd numbers */
  if p.n then do                /* if prime, then... */
    say n                       /* display it */
    do i = n * n to z by 2 * n  /* go over the index field */
      p.i = 0                   /* and remove products of it */
    end i
  end
end n
The field are only flags, 1 = is prime, 0 = is not, the index is the number in question.
By testing only odd numbers there is no need to kick out even numbers. This could be extended to products of 2 and 3, or 2 and 3 and 5, see wheel factorization.
If you know something faster, I mean much faster, please let me know.
Ciao.....Mike
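For readers more comfortable with Python, here is roughly the same sieve as a short sketch (the limit 999 mirrors the REXX example above; the function name is made up):

```python
# Roughly the same sieve in Python; the limit 999 mirrors the REXX example above.
def primes_up_to(z):
    is_prime = [True] * (z + 1)
    primes = [2]
    for n in range(3, z + 1, 2):                  # test only odd numbers
        if is_prime[n]:
            primes.append(n)                      # n is prime, keep it
            for i in range(n * n, z + 1, 2 * n):  # strike out odd multiples of n
                is_prime[i] = False
    return primes

print(primes_up_to(999))
```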
02-07-2018, 09:26 AM
Post: #6
Gamo Member Posts: 183 Joined: Dec 2016
RE: Generate the largest Prime Number
The highest prime number that fill 10 digits calculator screen is 9,999,999,997
The next prime is 10,000,000,019 which is 11 digits that can't fill in the screen.
So is it possible to produce a prime by first showing the first 10 digits, and when R/S is pressed, going on to the next digits of the prime?
1000000001 R/S result 9 > 10,000,000,019
1000000003 R/S result 3 > 10,000,000,033
1000000006 R/S result 1 > 10,000,000,061
.
.
.
1000000009 R/S result 7 > 10,000,000,097
1000000001 R/S result 03 > 10,000,000,103
.
.
Gamo
02-07-2018, 11:37 AM
Post: #7
Mike (Stgt) Member Posts: 298 Joined: Jan 2014
RE: Generate the largest Prime Number
(02-07-2018 09:26 AM)Gamo Wrote: [...]
1000000001 R/S result 9 > 10,000,000,019
[...]
1000000001 R/S result 03 > 10,000,000,103
[...]
First you could develop a technique how your calculator could discern what result you want for the same input.
/M.
02-07-2018, 07:33 PM (This post was last modified: 02-07-2018 07:35 PM by Dieter.)
Post: #8
Dieter Senior Member Posts: 1,831 Joined: Dec 2013
RE: Generate the largest Prime Number
(02-07-2018 09:26 AM)Gamo Wrote: The highest prime number that fill 10 digits calculator screen is 9,999,999,997
The next prime is 10,000,000,019 which is 11 digits that can't fill in the screen.
1000000001 R/S result 9 > 10,000,000,019
(...)
1000000001 R/S result 03 > 10,000,000,103
I assume the last line is supposed to read
1000000010 R/S result 3 > 10,000,000,103
Now, what do you want to get if you enter a 10-digit number like 1.000.000.001?
- The next prime with 11 digits?
That's 10.000.000.019, so the output is 9 ?
This means: determine the next prime after 10*x.
- The next prime with 12 digits?
That's 100.000.000.103, so the output is 03 ?
This means: determine the next prime after 100*x.
- The next prime with 13 digits?
That's 1.000.000.001.051, so the output is 051 ?
This means: determine the next prime after 1000*x.
Let's assume you mean the first case. "Determine the next prime" here simply means:
Check if the following numbers are prime:
10*x+1, 10*x+3, 10*x+7 and 10*x+9
So it boils down to an algorithm like this:
Code:
input x
found = false
p = 10*x+1
checkprime(p)
p = 10*x+3
checkprime(p)
p = 10*x+7
checkprime(p)
p = 10*x+9
checkprime(p)
if not found then print "No primes between " 10*x " and " 10*x+9
end

subroutine checkprime(p):
  if isprime(p) then
    print p
    found = true
end
Now, how do you check if an 11-digit number is divisible by, say, 7 while all you got is a 10-digit calculator? I'd say this can be done. Think hard. ;-)
Dieter
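A rough Python sketch of the same idea, using plain trial division (the helper names are made up, and 1000000001 is just the example input from the thread):

```python
# Plain trial division; helper names are made up, 1000000001 is the example from the thread.
def is_prime(p):
    if p < 2:
        return False
    d = 2
    while d * d <= p:
        if p % d == 0:
            return False
        d += 1
    return True

def primes_from_appended_digit(x):
    # check 10*x + 1, 10*x + 3, 10*x + 7, 10*x + 9 -- the only candidates, since
    # a prime above 10 cannot end in an even digit or in 5
    return [10 * x + d for d in (1, 3, 7, 9) if is_prime(10 * x + d)]

print(primes_from_appended_digit(1000000001))    # e.g. prints [10000000019]
```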
https://joshhug.gitbooks.io/hug61b/content/chap17/chap171.html
# Trees and Traversals
## What is a tree?
Recall that a tree consists of:
• A set of nodes (or vertices). We use both terms interchangeably.
• A set of edges that connect those nodes.
• Constraint: There is exactly one path between any two nodes.
The left-most structure is a tree. It has a node. It has no edges. That's OK!
The second and third structures are trees.
The fourth is not a tree. Why? There are two paths from the top node to the bottom node, and so this does not obey our constraint.
Exercise 17.1.1. Determine the reason why the fifth structure is not a tree. Also, modify the invalid trees above so that they are valid.
## What is a rooted tree?
Recall that a rooted tree is a tree with a designated root (typically drawn as the top most node.)
This gives us the notion of two more definitions
• A parent. Every node except the root has exactly one parent.
• What if a node had 2 parents? Would it be a tree? (Hint: No.)
• A child. A node can have 0 or more children.
• What if a node has 0 children? It's called a leaf.
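Here is a minimal sketch of these definitions in code; this is illustrative Python rather than anything from the course:

```python
# Illustrative Python, not the course's starter code.
class TreeNode:
    def __init__(self, value):
        self.value = value
        self.parent = None     # every node except the root has exactly one parent
        self.children = []     # zero or more children; a node with no children is a leaf

    def add_child(self, child):
        child.parent = self
        self.children.append(child)

root = TreeNode("President")   # the designated root of the rooted tree
vp = TreeNode("VP")
root.add_child(vp)             # root is vp's parent; vp is currently a leaf
```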
## What are trees useful for?
So far, we've looked at Search Trees, Tries, Heaps, Disjoint Sets, etc. These were extremely useful in our journey to create efficient algorithms: speeding up searching for items, allowing prefixing, checking connectedness, and so on.
But the fact of the matter is that they are even more ubiquitous than we realize. Consider an organization chart. Here, the President is the 'root'. The 'VP's are children of the root, and so on.
Another tree structure is the 61b/ directory on your Desktop (it is on your Desktop, isn't it?). As we can see, when you traverse to a subfolder it goes to subsequent subfolders and so on. This is exactly tree-like!
Exercise 17.1.2. Think of other common uses of trees that weren't mentioned above. Try and determine possible implementations or designs of these trees.
https://www.physicsforums.com/threads/recoiling-inclined-plane.663865/
# Recoiling Inclined Plane
1. Jan 12, 2013
### WannabeNewton
1. The problem statement, all variables and given/known data
A block of mass m is on an inclined plane of mass M, inclined at angle θ, and slides on the plane without friction. Find the acceleration of the plane.
3. The attempt at a solution
I am using the usual Cartesian coordinate system with no rotations and letting up and forward be the positive directions. Let A be the acceleration of the plane as it accelerates backwards from the recoil due to the moving block. Define a non-inertial reference frame that is co-moving with the plane. The equations of motion for the block in this frame are $m\ddot{y} = N\cos\theta - mg$, $m\ddot{x} = F_{apparent} = N\sin\theta - mA$, and we have, in this co-moving frame, the constraint $\ddot{y} = -\tan\theta\,\ddot{x}$. The equation of motion for the plane in this frame is $0 = F'_{apparent} = -N\sin\theta - MA$. Combining the equations for the block we get that $N = mg\cos\theta + mA\sin\theta$, so $MA = -N\sin\theta = -mg\cos\theta\sin\theta - mA\sin^{2}\theta$, therefore $A = -\frac{mg\cos\theta\sin\theta}{M + m\sin^{2}\theta}$. The book has the same answer except it is positive. I'm not sure why mine is negative. They don't really list if they are taking the backwards direction to be positive or not so I don't know if that is all there is to the issue. Thanks.
2. Jan 12, 2013
### oli4
Hi Isaac
There is nothing in the statement that would give you a preferred direction, I read it as finding the magnitude of the acceleration of the plane. Otherwise, since the acceleration is a vector in the end, you could as well wonder about which component is which.
So I don't think there is any issue at all, you solved the problem :)
3. Jan 12, 2013
### haruspex
Assuming you have exactly reproduced the statement of the problem from the book, I would guess you are supposed to take the positive direction as being whichever way the wedge moves. You say you let "up and forward be the positive directions", but you don't say forward for which object. If you meant forward for the block then you would expect a negative result for the wedge.
(Good job getting the right answer - easy to go wrong with a question like this.)
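For anyone who wants to check the algebra, here is a small symbolic sketch of the same system of equations written in the attempt above (same sign conventions; the symbol names are arbitrary):

```python
import sympy as sp

# Symbolic check of the system written above; signs follow the same conventions.
m, M, g, theta = sp.symbols('m M g theta', positive=True)
A, N, xdd, ydd = sp.symbols('A N xdd ydd')

eqs = [
    sp.Eq(m * ydd, N * sp.cos(theta) - m * g),   # block, vertical
    sp.Eq(m * xdd, N * sp.sin(theta) - m * A),   # block, horizontal (co-moving frame)
    sp.Eq(ydd, -sp.tan(theta) * xdd),            # block stays on the incline
    sp.Eq(0, -N * sp.sin(theta) - M * A),        # plane in its own frame
]
sol = sp.solve(eqs, [xdd, ydd, N, A], dict=True)[0]
# equivalent to -m*g*sin(theta)*cos(theta)/(M + m*sin(theta)**2)
print(sp.simplify(sol[A]))
```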
https://villasenor-derbez.com/publication/006_peake2018/
# Feeding ecology of invasive lionfish (Pterois volitans and Pterois miles) in the temperate and tropical western Atlantic
### Abstract
Numerous location-based diet studies have been published describing different aspects of invasive lionfish (Pterois volitans and Pterois miles) feeding ecology, but there has been no synthesis of their diet composition and feeding patterns across regional gradients. 8125 lionfish stomachs collected from 10 locations were analyzed to provide a generalized description of their feeding ecology at a regional scale and to compare their diet among locations. Our regional data indicate lionfish in the western Atlantic are opportunistic generalist carnivores that consume at least 167 vertebrate and invertebrate prey species across multiple trophic guilds, and carnivorous fish and shrimp prey that are not managed fishery species and not considered at risk of extinction by the International Union for Conservation of Nature disproportionately dominate their diet. Correlations between lionfish size and their diet composition indicate lionfish in the western Atlantic transition from a shrimp-dominated diet to a fish-dominated diet through ontogeny. Lionfish total length (TL) (mm) was found to predict mean prey mass per stomach (g) by the following equation $\text{mean prey mass}= 0.0002 \times TL^{1.6391}$, which can be used to estimate prey biomass consumption from lionfish length-frequency data. Our locational comparisons indicate lionfish diet varies considerably among locations, even at the group (e.g., crab) and trophic guild levels. The Modified Index of Relative Importance developed specifically for this study, calculated as the frequency of prey $a \times$ the number of prey $a$, can be used in other diet studies to assess prey importance when prey mass data are not available. Researchers and managers can use the diet data presented in this study to make inference about lionfish feeding ecology in areas where their diet has yet to be described. These data can be used to guide research and monitoring efforts, and can be used in modeling exercises to simulate the potential effects of lionfish on marine food webs. Given the large variability in lionfish diet composition among locations, this study highlights the importance of continued location-based diet assessments to better inform local management activities.
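As an illustration, the reported length-mass relation can be applied directly (the 300 mm length below is an arbitrary example, not a value from the paper):

```python
# Applying the reported relation; the 300 mm length is just an arbitrary example.
def mean_prey_mass_g(total_length_mm):
    return 0.0002 * total_length_mm ** 1.6391

print(mean_prey_mass_g(300))   # estimated mean prey mass (g) per stomach for a 300 mm lionfish
```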
Type
Publication
Biological Invasions
Date
https://codeforces.com/blog/entry/90212
### awoo's blog
By awoo, history, 13 months ago, translation,
1519A - Red and Blue Beans
Idea: adedalic
Tutorial
Solution (adedalic)
1519B - The Cake Is a Lie
Idea: adedalic
Tutorial
Solution (adedalic)
1519C - Berland Regional
Idea: BledDest
Tutorial
Solution (awoo)
1519D - Maximum Sum of Products
Idea: vovuh
Tutorial
Solution (Neon)
1519E - Off by One
Idea: BledDest
Tutorial
Solution (awoo)
1519F - Chests and Keys
Idea: BledDest
Tutorial
Solution (BledDest)
• +92
» 13 months ago, # | +5 Here is a problem which has the same idea with D. http://acm.hdu.edu.cn/showproblem.php?pid=6103
• » » 13 months ago, # ^ | 0 Are you a chinese person?
» 13 months ago, # | ← Rev. 2 → 0 Anyone knows why there's a major difference between predicted and actual rating changes?
• » » 13 months ago, # ^ | 0 At least in my case the predictor was using my rating from 2 contests prior. I think it must have scraped the ratings at a time when ratings changes were rolled back.
» 13 months ago, # | ← Rev. 2 → +1 Can anyone help me in identifying the mistake in Problem D, the approach is similar to the longest palindromic substring. It is failing in the 11th test case.Link to the submission: https://codeforces.com/contest/1519/submission/114709341Edit: the issue is resolved thanks
• » » 13 months ago, # ^ | 0 Use vector for a and b, because the when you multiply two int's, the compiler doesn't know to convert it to a long long unless you explicitly tell it to do so, or have the types originally as long long's.
• » » » 13 months ago, # ^ | 0 Yeah thanks a lot
» 13 months ago, # | 0 Aren't problems like E and F suitable better for div1? Why not use them in div1 then as creating div1 problems is harder.
• » » 13 months ago, # ^ | +14 Can't comment about F but problems like E are repeat or too obvious for Div1 contenders.
» 13 months ago, # | +27 Thanks for E, it was quite educational.
• » » 13 months ago, # ^ | 0 i couldn't keeep upto editorial of E after dfs tree. Can you explain what they mean by dfs tree and what after that.Thanks in advance.
• » » » 13 months ago, # ^ | 0 This is a very nice dfs tree entry: https://codeforces.com/blog/entry/68138
» 13 months ago, # | ← Rev. 2 → +1 Can anyone explain how the time complexity of C is nlogn
• » » 13 months ago, # ^ | 0 The time complexity is O(n log n) because you must sort the elements of each university before calculating the prefix sums.
• » » » 13 months ago, # ^ | 0 But for each k from 1..n we calculate the result, which requires to go through all schools from 1..n. Doesn't it give n^2?
• » » » » 13 months ago, # ^ | 0 For each university we iterate between 1 and the number of students in that university, because for higher values the university cannot make a team. Summed over all universities, this is O(n).
• » » » » » 13 months ago, # ^ | +1 thx, now I get it
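For reference, a compact sketch of the approach discussed above (grouping by university, sorting, prefix sums); this is illustrative Python rather than the editorial's code, and the function name is made up:

```python
# Illustrative Python, not the editorial's code; u[i] is the university (1-based id)
# of student i and s[i] is their skill.
def berland_regional(u, s):
    n = len(u)
    groups = [[] for _ in range(n + 1)]
    for uid, skill in zip(u, s):
        groups[uid].append(skill)

    ans = [0] * (n + 2)                  # ans[k] for team size k = 1..n
    for g in groups:
        g.sort(reverse=True)             # strongest students first
        pref = [0]
        for x in g:
            pref.append(pref[-1] + x)    # prefix sums of sorted skills
        m = len(g)
        for k in range(1, m + 1):        # only k up to the group's size matters
            ans[k] += pref[(m // k) * k] # count full teams only
    return ans[1:n + 1]
```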
» 13 months ago, # | 0 Very interesting contest, I really enjoyed it! Thanks!
» 13 months ago, # | ← Rev. 3 → +3 can anyone explain how forward edges are handled in last paragraph of editorial of E?upd: got it
» 13 months ago, # | 0 Will the rating roll back QAQ
» 13 months ago, # | 0 can anyone show me simple readable solution for C?
• » » 13 months ago, # ^ | 0
• » » » 13 months ago, # ^ | 0 thanks. yeah!,white house clean
» 13 months ago, # | 0 Can someone please give me a proof for B?
• » » 13 months ago, # ^ | 0 You can also use BFS
• » » » 13 months ago, # ^ | 0 Sadly haven't got to covering this yet, trying to solve most of the A problems and B problems from Div2 90% before I go on to covering DP, DFS and BFS problems. If you could tell me a way to prove this without assuming the formula, that would be amazing.
• » » » » 13 months ago, # ^ | 0 No one directly "assumed" the formula in my opinion how I approached it was trying to convert my intuition to mathematical proof. Try reading this article. Have a good day!
• » » 13 months ago, # ^ | 0 Try to draw a 2d matrix and try to compute the value at random points from taking both ways left and right then u will come up with the same formula
• » » » 12 months ago, # ^ | 0 Alright. Thank you
» 13 months ago, # | 0 the round was amazing, can anyone help me identify the error in 114652844 to problem c, my approach is similar to the edi.
• » » 13 months ago, # ^ | 0
» 13 months ago, # | ← Rev. 2 → 0 Short Video Editorial For Problems A — D I have a different solution for problem D (Maximum Sum of Products): $sum[i][j]$ stores the new sum on reversing the subarray $[j, i]$ $sum[i][j] = sum[i - 1][j + 1] + A[i] * B[j] + A[j] * B[i]$ We calculate the sum of elements we get on reversing every subarray $[j, i]$. To account for the rest of the array, loop over all subarrays and use prefix sums to add the remaining part. Take the best value over all subarrays. Time complexity: $O(N^2)$ See my code for clarity: 114583311
• » » 13 months ago, # ^ | +6 They're actually the same
• » » 13 months ago, # ^ | 0 why did you take max(dp[i][j], dp[i — 1][j + 1] + A[i] * B[j] + A[j] * B[i]), what's the use of taking maximum here??
• » » » 13 months ago, # ^ | 0 Yeah, you're right. There's no use of doing that here.
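A small Python sketch of the O(n^2) idea from the comment above, written in terms of the reversal's delta instead of the full segment sum (variable and function names are made up):

```python
# Sketch of the O(n^2) idea; cost[i][j] is the change in sum(a[i]*b[i]) if a[j..i] is reversed.
def max_sum_after_one_reversal(A, B):
    n = len(A)
    base = sum(x * y for x, y in zip(A, B))    # sum with no reversal
    best = base
    cost = [[0] * n for _ in range(n)]
    for length in range(2, n + 1):
        for j in range(0, n - length + 1):
            i = j + length - 1
            inner = cost[i - 1][j + 1]         # delta of the inner reversed segment
            swap = A[i] * B[j] + A[j] * B[i] - A[i] * B[i] - A[j] * B[j]
            cost[i][j] = inner + swap
            best = max(best, base + cost[i][j])
    return best

print(max_sum_after_one_reversal([2, 3, 2, 1, 3], [1, 3, 2, 4, 2]))   # prints 29
```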
» 13 months ago, # | 0 hi! can some one point out my mistake in D. I have used the same approach as in editorail, except i have used a 2-D dp. test case 27 is showing wrong answer. https://codeforces.com/contest/1519/submission/114798610 thanks
» 13 months ago, # | 0 Hi, for problem C, can somebody explain why 114796728 gives TLE and 114799663 doesn't. The only difference is the usage of an array instead of a vector.
• » » 13 months ago, # ^ | 0 when you are declaring:vll pre(n+1),i dont why but on my system it fills it with garbage value, and run time is really slow on my pc as you have statedbut,when u use ll pre[n+1] ,pre is filled with all 0s, and run time is fast on my pc
• » » 13 months ago, # ^ | +1 Your code is fundamentally O($n^2$) because of f(i,1,n+1) { vll pre(n+1); As far as I can tell, the only reason why you didn't TLE on your 2nd submission is because the compiler was nice and effectively optimized your code to vll pre(n+1); f(i,1,n+1) { Some other things I noticed about your code: Remove cout.tie(NULL) from your code. Unlike cin.tie(NULL); the cout version does nothing and should never be used. Submit under C++17(64 bit) instead of C++11. You will see much better running times. You are basically handicapping yourself by using C++11.
• » » » 13 months ago, # ^ | -6 Thanks, I'll take note of it.
» 13 months ago, # | 0 Can anyone help me figure out why I am getting TLE (https://codeforces.com/contest/1519/submission/114826178)
• » » 13 months ago, # ^ | 0 Hey, I tried your approach as well, calculating answer for each value of k[1,n] , i got TLE as well, link similar to your solution(TLE): https://codeforces.com/contest/1519/submission/114824539Nevertheless, I got ACC , finally. Here's some tips, i can give you:1.) use vector instead of map db or use unordered_map2.) use a vector to store output of each value of k[1,n]3.) after sorting the inner vector, use the same inner vector to store cumulative sum 3.) iterate this inner vector after sorting and increment the value of output-vector according to this inner vectormy Accepted solution(not the most efficient): https://codeforces.com/contest/1519/submission/114825135
• » » » 13 months ago, # ^ | 0 thanks
• » » 13 months ago, # ^ | ← Rev. 6 → 0 Your code failed only because you used iterator as the second loop. The same reason my code failed and changing it will surely give AC. I also changed map to vector first and all the other optimizations and still it failed but this one small change gave AC. Reason why this happens is because when n >= 10^5, iterator being nested will be called in range of 10^5 times(depending on implementation). At this range it becomes really slow compared to normal for loop with [] operator. In my tests above 10^5 it was almost 1.2 — 2 times slower in this question(Can be more/less as I did these tests on my own PC but you can get the idea).Check the running time in both these codes where the only difference is for loop instead of for-each(uses iterator)Accepted Code — https://codeforces.com/contest/1519/submission/114687960
• » » » 13 months ago, # ^ | 0 thanks
• » » » 13 months ago, # ^ | 0 https://codeforces.com/contest/1519/submission/114986969can you figure out why am i still getting TLE?
» 13 months ago, # | 0 Can Any one help me in Problem D, i used BIT to calculate the prefix and suffix value and used Brute force for calculating the interval [i,j] , but still TLE at test case 9:
» 13 months ago, # | ← Rev. 2 → 0 for problem D, if I try to implement the brute force approach which would take O(n^3) time then will it be TLE ? As the constraint is n <= 5000.
• » » 13 months ago, # ^ | 0 Yes, it would TLE, as (5000)^3 = 125*10^9 = 1.25*10^11. As this is much greater than the recommended 10^9, it would not fit within the time limit and would give a TLE error.
• » » » 13 months ago, # ^ | 0 Thanks a lot
» 13 months ago, # | 0 Can anyone Explain D a little bit in detail
• » » 13 months ago, # ^ | 0 I am sure this will help Video
• » » » 13 months ago, # ^ | 0 Love the explanation. Thanks. Cleared all the doubts
» 13 months ago, # | 0 What does if(nw.need == vector(a, a + n)) in F's solution mean?
» 13 months ago, # | 0 I believe I tried to solve with the same approach as explained in the editorial for C . i.e grouping the students according to university_id and then taking the prefix sum for individual university . however I kept getting TLE on test 4 . can anyone tell why ? 114624170
• » » 13 months ago, # ^ | 0 I have got the same issue .. try to sort the vector of vector in terms of size of v[i].size in greater(). Uh can check out my code too .. i can explain that
• » » » 13 months ago, # ^ | 0 why does erasing gives TLE ?
• » » » » 13 months ago, # ^ | 0 Erasing what ?
• » » » » 13 months ago, # ^ | ← Rev. 2 → 0 Erasing an arbitary element from a vector takes $O(n)$, since all elements after the deleted element need to be shifted.
» 13 months ago, # | 0 Please explain the proof for problem B little more briefly.. Why doesn't the cost depend on path taken?
• » » 13 months ago, # ^ | +5 Consider the two paths shown in the picture below: one which takes the blue route and one which takes the red route. The black parts of the two paths are the same in both paths.Either way, the contribution from the red section or the blue section is $+(i+j)$, so both paths have equal cost.You can change from any path to any other path via a sequence of paths which each differ only by one square, like in the picture above.
• » » » 13 months ago, # ^ | 0 Thank you very much. I get it now!
» 13 months ago, # | 0 Can anyone help me out in c question i had also used prefix sum but getting tle on 4 th case but it should not come 115057482
• » » 13 months ago, # ^ | +2 You must be missing tricky point. Check this explanation
» 13 months ago, # | 0 I believe there is a typo in the editorial for F: instead of $\sum_{i=1}^{a_i} - mincut$ it should be $\big(\sum_{i=1}^n a_i\big) -mincut$.
» 13 months ago, # | 0 $2$ more proofs for problem $B$:Mathematical proof:Assume that the sequence of movement is as follows: $x_1$ units down, then $y_1$ units right, then $x_2$ units down, then $y_2$ units right, ..., then $x_c$ units down, then finally $y_c$ units right, where $x_i, y_i \geq 0$, $1+\sum_{i=1}^{c}x_i=n$, and $1+\sum_{i=1}^{c}y_i=m$. The total cost will be:$x_1*1+y_1*(1+x_1)+x_2*(1+y_1)+y_2*(1+x_1+x_2)+...+(1+\sum_{i=1}^{c-1}y_i)*x_c+(1+\sum_{i=1}^{c}x_i)*y_c=$$\sum_{i=1}^{c}y_i+\sum_{i=1}^{c}x_i*(1+\sum_{i=1}^{c}y_i)=m-1+(n-1)*m=n*m-1$Ad hoc proof:A step from $(i,j)$ to $(i+1,j)$ spans all the cells from $(i+1,1)$ to $(i+1,j)$, and a step from $(i,j)$ to $(i,j+1)$ spans all the cells from $(1,j+1)$ to $(i,j+1)$. Which means that for every step down to a cell $x$, all the cells from $x$ to the left will be counted, and for every step right to a cell $y$, all the cells from $y$ to the top will be counted. At the end, all the cells in the grid will be counted except the first cell $(1,1)$, that is $n*m-1$.
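For the skeptical, the claim is easy to brute-force for small grids, using the same cost convention as in the proof above (a step right from $(x,y)$ costs $x$, a step down costs $y$):

```python
# Brute-force check: every monotone path from (1,1) to (n,m) costs n*m - 1.
def all_path_costs(n, m):
    costs = []
    def walk(x, y, c):
        if (x, y) == (n, m):
            costs.append(c)
            return
        if y < m:
            walk(x, y + 1, c + x)   # step right costs the current row x
        if x < n:
            walk(x + 1, y, c + y)   # step down costs the current column y
    walk(1, 1, 0)
    return costs

for n in range(1, 5):
    for m in range(1, 5):
        assert set(all_path_costs(n, m)) == {n * m - 1}
```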
» 13 months ago, # | 0 Can anyone tell me why this code is giving me tle :( but after reversing the conditions of last for loop it got accepted......Thanks in Advance :)#include using namespace std; int main(){ int t; cin >> t; while(t--){ int n; cin >> n; vector uni(n); vector score(n); vector combine(n,vector()); for(int i=0;i> uni[i]; } for(int i=0;i> score[i]; } for(int i=0;i()); } for(int i=0;i0) ans+= combine[j][x*i-1]; } cout << ans << " "; } //But after making this change All Ok :) vector ans(n,0); for(int i=0;i
» 13 months ago, # | 0 115108145 Please help! Why am I getting TLE ?
» 13 months ago, # | ← Rev. 2 → 0 115150110 Any of you guys have any idea why is this code failing for Test case 11 in Question D. Thanks in advance!
» 13 months ago, # | 0 In problem D (Maximum Sum of Products) , Is there an algorithm with Time Complexity less than O(n^2) ? Like O(n) or O(nlogn) ?? Further thanks..
» 13 months ago, # | 0 I can't find the problem in my solution for problem D (wrong answer test 9) 115467785, help please
• » » 2 months ago, # ^ | 0 Hey im also getting the same error on test case 11. Did you figure out the problem? Please give an update, if so.
» 12 months ago, # | 0 Can we do D with two segment trees s1 and s2, where s1 stores the sum of a[i]*b[i] over [l,r) and the s2 stores the similar sum but for the reversed array a over ranges [l,r), Afterward the ans can be brute forced by taking each possible segment that can be reversed and taking the maximum. The runtime will be O(n*n*log(n))?
» 12 months ago, # | 0 can anyone help me with my A.cpp code ? Why is it wrong :(( https://codeforces.com/contest/1519/submission/115942467
» 12 months ago, # | ← Rev. 3 → 0 Hi awooFor problem F, using the idea of saturating all out going edges from source to chests, I used this dynamic programming state : dp[i][j][a][b][c][d][e][f] = the minimal cost to saturate the first i out going flow edges from source, where the i-th out going edge currently has j units of residual remaining, and the 1st incoming edge to sink has a units of residual left , second incoming edge to sink has b units left , 3rd has c units left, 4th, 5th, 6th have d , e , f units left respectively. base case: dp[0][0][a][b][c][d][e][f] = 0 , for all a,b,c,d,e,f <= 6 transition: dp[i][j][a][b][c][d][e][f] = min{ dp[i][j - min(j , a)][a - min(j , a)][b][c][d][e][f] + C[i][1], dp[i][j - min(j , b)][a][b - min(j , b)][c][d][e][f] + C[i][2], dp[i][j - min(j , c)][a][b][c - min(j , c)][d][e][f] + C[i][3], dp[i][j - min(j , d)][a][b][c][d - min(j , d)][e][f] + C[i][4], dp[i][j - min(j , e)][a][b][c][d][e - min(j , e)][f] + C[i][5], dp[i][j - min(j , f)][a][b][c][d][e][f - min(j , f)] + C[i][6], } `Here is my submission https://codeforces.com/contest/1519/submission/116235067 I got wrong answer on test case 93, I have been trying to find the bug for a day and could not resolve it. You will be my life saver if you hint me on why my solution did not get AC!
» 12 months ago, # | 0 Can anyone help me in problem C? Why am I getting TLE again & again? My solution is here: https://codeforces.com/contest/1519/submission/116310052Plz someone help...
» 12 months ago, # | 0 solution of question D by neon is just wow................. great code
» 12 months ago, # | ← Rev. 2 → 0 Please help me. I don't know why I'm getting TLE on test case 3.Problem Link: https://codeforces.com/problemset/problem/1519/C
» 11 months ago, # | ← Rev. 2 → 0 This is my solution.This is an accepted solution.Can someone, please tell me why my solution is not being accepted? The time complexity of both the solutions seems same to me.Help me out ;( here.
» 7 months ago, # | 0 Even though I have used the exact same approach as given in editorial for Problem : 1519C — Berland Regional, my solution is giving TLE for some cases. I have used unordered_map for the universities. Pls can anyone help me out. My solution is 131142379
» 2 months ago, # | 0 PROBLEM D is <3, Loved the implementation with O(n) space!
https://www.physicsforums.com/threads/about-the-action-s-in-quantum-field-theory.6535/
# About the action S in quantum field theory
1. Sep 30, 2003
### eljose79
Let's suppose we have a Hamiltonian H, so we can construct the action by H + dS/dt; then why not use the action to solve the problem of quantization of non-renormalizable theories?
2. Sep 30, 2003
### jeff
I think you probably meant ∂S/∂t + H = 0. In any event, nonrenormalizability is no longer viewed as a problem. A theory is renormalizable if its Lagrangian need only contain a finite number of terms to absorb the different types of divergences that occur in its perturbation theory. But now we know that it's perfectly alright to allow theories which require an infinite number of terms, as long as their contributions to the theory are suppressed at higher energies.
Last edited: Sep 30, 2003
https://koasas.kaist.ac.kr/handle/10203/222155
#### Study on the effects of electron beam irradiation on the stabilization of polyacrylonitrile and their carbon fiber application = 폴리아크릴로니트릴의 안정화에 미치는 전자선 조사 효과 및 탄소섬유 적용 연구
Polyacrylonitrile (PAN) is one of the polymers most widely used in textiles and in producing fibrous precursors of carbon fibers. PAN fibers are particularly suitable for producing high-performance carbon fibers due to their high melting point and great carbon yield (>50% of the original precursor mass). Moreover, the PAN polymer has also been used to fabricate various composites with nanoadditives such as nanosilica, nanomagnetite, nanohydroxyapatite and carbon nanotubes. Carbon fibers are manufactured by controlled oxidative stabilization, carbonization and graphitization of PAN precursor fibers. The oxidative stabilization process takes a long time and is the key step in converting PAN fiber to high-performance carbon fiber. To accelerate the stabilization process, fibers can be crosslinked by photo-stabilization. Ionizing radiation generates free radicals in PAN, which aid cyclization and crosslinking. This dissertation deals with the investigation of PAN fibers stabilized with various doses of electron beam irradiation. The effects of the electron beam on the chemical, thermal and mechanical properties were analyzed. It was found that electron beam irradiation could induce chemical reactions, so that a structural change was observed in which C$\equiv$N bonds were transformed into C=N bonds as PAN converted to its cyclized structure. Stabilization index values increased with increasing electron beam irradiation dose. This dissertation also deals with the investigation of a highly effective method of pretreating PAN fibers. The method consists of exposing the PAN precursor to an electron beam at various doses, after which the fibers are thermally stabilized. Electron beam pretreatment induced chemical reactions that converted the structure and affected the physical properties of the PAN fibers. Carbon fibers were prepared through a carbonization process at various carbonization temperatures. The degree of carbonization and the mechanical properties of the carbon fibers increase with the carbonization temperature. Thermal treatment for stabilization of the electron-beam-pretreated precursor enhanced the degree of carbonization and the mechanical properties of the carbon fibers. The prepared carbon fibers are expected to be applied in automobiles, sporting equipment, turbine blades, pressure vessels, transportation, and so on. First of all, this dissertation establishes the industrial application feasibility of producing carbon fibers using the electron beam irradiation technique.
Park, Jung-Ki (박정기), researcher
Description
Korea Advanced Institute of Science and Technology (KAIST), Department of Chemical and Biomolecular Engineering
Publisher
Korea Advanced Institute of Science and Technology (KAIST)
Issue Date
2016
Identifier
325007
Language
eng
Description
Doctoral dissertation - Korea Advanced Institute of Science and Technology (KAIST), Department of Chemical and Biomolecular Engineering, 2016.2, [ix, 115 p.]
Keywords
polyacrylonitrile; electron beam; stabilization; carbonization; carbon fiber; polyacrylonitrile fiber (폴리아크릴로니트릴 섬유); electron beam irradiation (전자선 조사); stabilization reaction (안정화 반응); carbonization reaction (탄화 반응); carbon fiber (탄소섬유)
URI
http://hdl.handle.net/10203/222155
https://dsp.stackexchange.com/questions/23143/adaptive-filter-gradient-descent
• @user1832413: If you have a quadratic form $x^TAx+b^Tx$, and if the matrix $A$ is positive (semi-)definite (as is the case with an autocorrelation matrix), then the function defined by the quadratic form is convex and has a minimum. A saddle point can only occur if the matrix $A$ is indefinite. – Matt L. May 3 '15 at 20:00
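A tiny numerical illustration of the point (the signal and the matrix size are arbitrary): a sample autocorrelation matrix is positive semi-definite, so the quadratic cost it defines is convex:

```python
import numpy as np

# The autocorrelation matrix of a signal is positive semi-definite, so the
# quadratic cost x^T A x + b^T x built from it is convex (no saddle points).
rng = np.random.default_rng(1)
x = rng.normal(size=1000)

lags = [np.dot(x[:len(x) - k], x[k:]) / len(x) for k in range(4)]
A = np.array([[lags[abs(i - j)] for j in range(4)] for i in range(4)])  # Toeplitz autocorrelation matrix

print(np.linalg.eigvalsh(A))   # all eigenvalues are >= 0 (up to numerical noise)
```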
https://socratic.org/questions/5900bd8f11ef6b57e2faceb0
# What are examples of a "polar, aprotic solvent"?
Some examples are $\text{methylene chloride}$, $\text{diethyl ether}$, $\text{chloroform.......}$
A polar aprotic solvent is a molecule with significant charge separation (hence a $\text{polar solvent}$) that can be used as a solvent but does not undergo any acid-base equilibrium. Thus water and hydrogen fluoride, while they are certainly polar molecules, do not fall under this umbrella because they are protic, and readily exchange ${H}^{+}$ in their solutions.
http://orbilu.uni.lu/browse?type=author&value=Nikolic%2C+Ivica+30000497
References of "Nikolic, Ivica 30000497" in Complete repository Arts & humanities Archaeology Art & art history Classical & oriental studies History Languages & linguistics Literature Performing arts Philosophy & ethics Religion & theology Multidisciplinary, general & others Business & economic sciences Accounting & auditing Production, distribution & supply chain management Finance General management & organizational theory Human resources management Management information systems Marketing Strategy & innovation Quantitative methods in economics & management General economics & history of economic thought International economics Macroeconomics & monetary economics Microeconomics Economic systems & public economics Social economics Special economic topics (health, labor, transportation…) Multidisciplinary, general & others Engineering, computing & technology Aerospace & aeronautics engineering Architecture Chemical engineering Civil engineering Computer science Electrical & electronics engineering Energy Geological, petroleum & mining engineering Materials science & engineering Mechanical engineering Multidisciplinary, general & others Human health sciences Alternative medicine Anesthesia & intensive care Cardiovascular & respiratory systems Dentistry & oral medicine Dermatology Endocrinology, metabolism & nutrition Forensic medicine Gastroenterology & hepatology General & internal medicine Geriatrics Hematology Immunology & infectious disease Laboratory medicine & medical technology Neurology Oncology Ophthalmology Orthopedics, rehabilitation & sports medicine Otolaryngology Pediatrics Pharmacy, pharmacology & toxicology Psychiatry Public health, health care sciences & services Radiology, nuclear medicine & imaging Reproductive medicine (gynecology, andrology, obstetrics) Rheumatology Surgery Urology & nephrology Multidisciplinary, general & others Law, criminology & political science Civil law Criminal law & procedure Criminology Economic & commercial law European & international law Judicial law Metalaw, Roman law, history of law & comparative law Political science, public administration & international relations Public law Social law Tax law Multidisciplinary, general & others Life sciences Agriculture & agronomy Anatomy (cytology, histology, embryology...) & physiology Animal production & animal husbandry Aquatic sciences & oceanology Biochemistry, biophysics & molecular biology Biotechnology Entomology & pest control Environmental sciences & ecology Food science Genetics & genetic processes Microbiology Phytobiology (plant sciences, forestry, mycology...) 
Veterinary medicine & animal health Zoology Multidisciplinary, general & others Physical, chemical, mathematical & earth Sciences Chemistry Earth sciences & physical geography Mathematics Physics Space science, astronomy & astrophysics Multidisciplinary, general & others Social & behavioral sciences, psychology Animal psychology, ethology & psychobiology Anthropology Communication & mass media Education & instruction Human geography & demography Library & information sciences Neurosciences & behavior Regional & inter-regional studies Social work & social policy Sociology & social sciences Social, industrial & organizational psychology Theoretical & cognitive psychology Treatment & clinical psychology Multidisciplinary, general & others Showing results 1 to 9 of 9 1 Search for Related-Key Differential Characteristics in DES-Like Ciphers.Biryukov, Alex ; Nikolic, Ivica in Fast Software Encryption - 18th International Workshop (2011)We present the first automatic search algorithms for the best related-key differential characteristics in DES-like ciphers. We show that instead of brute-forcing the space of all possible differences in ... [more ▼]We present the first automatic search algorithms for the best related-key differential characteristics in DES-like ciphers. We show that instead of brute-forcing the space of all possible differences in the master key and the plaintext, it is computationally more efficient to try only a reduced set of input-output differences of three consecutive S-box layers. Based on this observation, we propose two search algorithms – the first explores Matsui’s approach, while the second is divide-and-conquer technique. Using our algorithms, we find the probabilities (or the upper bounds on the probabilities) of the best related-key characteristics in DES, DESL, and s^2DES. [less ▲]Detailed reference viewed: 73 (1 UL) Second-Order Differential Collisions for Reduced SHA-256.Biryukov, Alex ; Lamberger, Mario; Mendel, Florian et alin 17th International Conference on the Theory and Application of Cryptology and Information Security (2011)In this work, we introduce a new non-random property for hash/compression functions using the theory of higher order differentials. Based on this, we show a second-order differential collision for the ... [more ▼]In this work, we introduce a new non-random property for hash/compression functions using the theory of higher order differentials. Based on this, we show a second-order differential collision for the compression function of SHA-256 reduced to 47 out of 64 steps with practical complexity. We have implemented the attack and provide an example. Our results suggest that the security margin of SHA-256 is much lower than the security margin of most of the SHA-3 finalists in this setting. The techniques employed in this attack are based on a rectangle/boomerang approach and cover advanced search algorithms for good characteristics and message modification techniques. Our analysis also exposes flaws in all of the previously published related-key rectangle attacks on the SHACAL-2 block cipher, which is based on SHA-256. We provide valid rectangles for 48 steps of SHACAL-2. [less ▲]Detailed reference viewed: 67 (1 UL) Boomerang Attacks on BLAKE-32Biryukov, Alex ; Nikolic, Ivica ; Roy, Arnab in Fast Software Encryption - 18th International Workshop (2011)We present high probability differential trails on 2 and 3 rounds of BLAKE-32. 
Using the trails we are able to launch boomerang attacks on up to 8 round-reduced keyed permutation of BLAKE-32. Also, we ... [more ▼]We present high probability differential trails on 2 and 3 rounds of BLAKE-32. Using the trails we are able to launch boomerang attacks on up to 8 round-reduced keyed permutation of BLAKE-32. Also, we show that boomerangs can be used as distinguishers for hash/compression functions and present such distinguishers for the compression function of BLAKE-32 reduced to 7 rounds. Since our distinguishers on up to 6 round-reduced keyed permutation of BLAKE-32 are practical (complexity of only 212 encryptions), we are able to find boomerang quartets on a PC. [less ▲]Detailed reference viewed: 46 (0 UL) Automatic Search for Related-Key Differential Characteristics in Byte-Oriented Block Ciphers: Application to AES, Camellia, Khazad and OthersBiryukov, Alex ; Nikolic, Ivica in EUROCRYPT (2010)While di fferential behavior of modern ciphers in a single secret key scenario is relatively well understood, and simple techniques for computation of security lower bounds are readily available, the ... [more ▼]While di fferential behavior of modern ciphers in a single secret key scenario is relatively well understood, and simple techniques for computation of security lower bounds are readily available, the security of modern block ciphers against related-key attacks is still very ad hoc. In this paper we make a first step towards provable security of block ciphers against related-key attacks by presenting an efficient search tool for finding diff erential characteristics both in the state and in the key (note that due to similarities between block ciphers and hash functions such tool will be useful in analysis of hash functions as well). We use this tool to search for the best possible (in terms of the number of rounds) related-key diff erential characteristics in AES, byte-Camellia, Khazad, FOX, and Anubis. We show the best related-key diff erential characteristics for 5, 11, and 14 rounds of AES-128, AES-192, and AES-256 respectively. We use the optimal diff erential characteristics to design the best related-key and chosen key attacks on AES-128 (7 out of 10 rounds), AES-192 (full 12 rounds), byte-Camellia (full 18 rounds) and Khazad (7 and 8 out of 8 rounds). We also show that ciphers FOX and Anubis have no related-key attacks on more than 4-5 rounds. [less ▲]Detailed reference viewed: 102 (1 UL) Speeding up Collision Search for Byte-Oriented Hash FunctionsKhovratovich, Dmitry ; Biryukov, Alex ; Nikolic, Ivica in CT-RSA (2009)We describe a new tool for the search of collisions for hash functions. The tool is applicable when an attack is based on a differential trail, whose probability determines the complexity of the attack ... [more ▼]We describe a new tool for the search of collisions for hash functions. The tool is applicable when an attack is based on a differential trail, whose probability determines the complexity of the attack. Using the linear algebra methods we show how to organize the search so that many (in some cases — all) trail conditions are always satisfied thus significantly reducing the number of trials and the overall complexity. The method is illustrated with the collision and second preimage attacks on the compression functions based on Rijndael. We show that slow diffusion in the Rijndael (and AES) key schedule allows to run an attack on a version with a 13-round compression function, and the S-boxes do not prevent the attack. 
We finally propose how to modify the key schedule to resist the attack and provide lower bounds on the complexity of the generic differential attacks for our modification. [less ▲]Detailed reference viewed: 64 (0 UL) Rebound Attack on the Full Lane Compression FunctionMatusiewicz, Krystian; Naya-Plasencia, Maria; Nikolic, Ivica et alin Advances in Cryptology - ASIACRYPT 2009 (2009)In this work, we apply the rebound attack to the AES based SHA-3 candidate Lane. The hash function Lane uses a permutation based compression function, consisting of a linear message expansion and 6 ... [more ▼]In this work, we apply the rebound attack to the AES based SHA-3 candidate Lane. The hash function Lane uses a permutation based compression function, consisting of a linear message expansion and 6 parallel lanes. In the rebound attack on Lane, we apply several new techniques to construct a collision for the full compression function of Lane-256 and Lane-512. Using a relatively sparse truncated differential path, we are able to solve for a valid message expansion and colliding lanes independently. Additionally, we are able to apply the inbound phase more than once by exploiting the degrees of freedom in the parallel AES states. This allows us to construct semi-free-start collisions for full Lane-256 with 296 compression function evaluations and 2^{88} memory, and for full Lane-512 with 2^{224} compression function evaluations and 2^{128} memory. [less ▲]Detailed reference viewed: 36 (0 UL) Distinguisher and Related-Key Attack on the Full AES-256Biryukov, Alex ; Khovratovich, Dmitry ; Nikolic, Ivica in Advances in Cryptology - CRYPTO (2009)In this paper we construct a chosen-key distinguisher and a related-key attack on the full 256-bit key AES. We define a notion of differential q -multicollision and show that for AES-256 q-multicollisions ... [more ▼]In this paper we construct a chosen-key distinguisher and a related-key attack on the full 256-bit key AES. We define a notion of differential q -multicollision and show that for AES-256 q-multicollisions can be constructed in time q·267 and with negligible memory, while we prove that the same task for an ideal cipher of the same block size would require at least $O(q\cdot 2^{\frac{q-1}{q+1}128})$ time. Using similar approach and with the same complexity we can also construct q-pseudo collisions for AES-256 in Davies-Meyer mode, a scheme which is provably secure in the ideal-cipher model. We have also computed partial q-multicollisions in time q·237 on a PC to verify our results. These results show that AES-256 can not model an ideal cipher in theoretical constructions. Finally we extend our results to find the first publicly known attack on the full 14-round AES-256: a related-key distinguisher which works for one out of every 2^{35} keys with 2^{120} data and time complexity and negligible memory. This distinguisher is translated into a key-recovery attack with total complexity of 2^{131} time and 2^{65} memory. [less ▲]Detailed reference viewed: 101 (1 UL) Cryptanalysis of the LAKE Hash FamilyBiryukov, Alex ; Gauravaram, Praveen; Guo, Jian et alin Fast Software Encryption (2009)We analyse the security of the cryptographic hash function LAKE-256 proposed at FSE 2008 by Aumasson, Meier and Phan. By exploiting non-injectivity of some of the building primitives of LAKE, we show ... [more ▼]We analyse the security of the cryptographic hash function LAKE-256 proposed at FSE 2008 by Aumasson, Meier and Phan. 
By exploiting non-injectivity of some of the building primitives of LAKE, we show three different collision and near-collision attacks on the compression function. The first attack uses differences in the chaining values and the block counter and finds collisions with complexity 2^{33}. The second attack utilizes differences in the chaining values and salt and yields collisions with complexity 2^{42}. The final attack uses differences only in the chaining values to yield near-collisions with complexity 2^{99}. All our attacks are independent of the number of rounds in the compression function. We illustrate the first two attacks by showing examples of collisions and near-collisions. [less ▲]Detailed reference viewed: 58 (0 UL) Collisions for Step-Reduced SHA-256Nikolic, Ivica ; Biryukov, Alex in Fast Software Encryption - 15th International Workshop, Revised Selected Papers (2008)Detailed reference viewed: 46 (0 UL) 1
https://boisemathcircles.org/bmc-sessions/fibonacci
This special discussion was led by Zach Teitler and concerned patterns in the famous Fibonacci sequence of numbers. This sequence begins with 1,1 and then each successive number is obtained by taking the sum of the previous two. Here are the next few in the list:
1, 1, 2, 3, 5, 8, 13, 21, 34, 55, 89, 144, 233, 377, 610, 987, 1597, 2584, 4181, 6765, 10946, 17711, 28657, 46368, 75025, 121393, 196418, 317811, 514229, 832040, 1346269, 2178309, 3524578, 5702887, 9227465, 14930352, 24157817, 39088169
Which Fibonacci numbers are even? Which ones are odd? Which Fibonacci numbers are divisible by 3? In this discussion we answered these questions and many more.
Here were the core problems we discussed:
• Which Fibonacci numbers are divisible by 2? By 3? By 5?
• What patterns do you notice?
• Next try the sequence $2^n-1$. This one begins 1, 3, 7, 15, 31, 63, 127, 255, 511, .... Which of these numbers are divisible by 3? By 7? By 15?
• How does this pattern compare with the patterns in the Fibonacci numbers? Can you explain why the pattern works? (A short computational sketch of the remainder patterns follows this list.)
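One quick way to explore these divisibility questions is to print the Fibonacci numbers modulo 2, 3, and 5 and watch where the zeros land. Here is a small Python sketch (not part of the original session, added for illustration):
def fib_list(n):
    # first n Fibonacci numbers: 1, 1, 2, 3, 5, ...
    seq = [1, 1]
    while len(seq) < n:
        seq.append(seq[-1] + seq[-2])
    return seq

fibs = fib_list(20)
for m in (2, 3, 5):
    print(m, [f % m for f in fibs])
# A zero shows up at every 3rd position for m = 2, every 4th for m = 3,
# and every 5th for m = 5.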
We also worked on some fun graphical problems:
• We have some bricks that are 1 unit by 2 units. We want to make a wall 2 units high, and n units wide. How many ways can it be done? The bricks can be vertical or horizontal.
• Now you have a row of n chairs and you want to put students in some of the chairs. The students are taking an exam so they are not allowed to sit right next to each other. How many ways are there to do this? (Both of these counts can be checked with the short sketch below.)
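Both of these counts can be checked with a few lines of code. The sketch below (ours, added for illustration) uses the recurrences directly: a wall of width n ends in either one vertical brick or a pair of horizontal bricks, and in a row of n chairs the last chair is either empty or occupied with an empty neighbour.
def walls(n):
    # ways to build a 2-by-n wall out of 1-by-2 bricks
    if n <= 1:
        return 1
    return walls(n - 1) + walls(n - 2)

def seatings(n):
    # ways to choose occupied chairs out of n, no two adjacent (empty allowed)
    if n <= 1:
        return n + 1
    return seatings(n - 1) + seatings(n - 2)

print([walls(n) for n in range(1, 9)])     # 1, 2, 3, 5, 8, 13, 21, 34
print([seatings(n) for n in range(1, 9)])  # 2, 3, 5, 8, 13, 21, 34, 55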
Finally we looked at some more fun patterns:
• Look at the “running totals” of the Fibonacci numbers. Here are the first few:
1+1 = 2
1+1+2 = 4
1+1+2+3 = 7
1+1+2+3+5 = 12
...
Do you see any patterns in this new sequence? Can you explain it?
• What if you only add up every other Fibonacci number
1+2 =
1+2+5 =
1+2+5+13 =
1+2+5+13+34 =
...
What do you get? Can you explain this pattern?
• Look at the "diagonals" in Pascal's triangle:
Can you explain this pattern?
http://digitalfreepen.com/2017/06/21/consistent-distance-fields.html
Jun 21 2017
# Consistent distance fields for ray marching
## Introduction
The classical way of writing a ray tracer represents objects in your scene with a triangle mesh. You then shoot some light rays out of your camera and do some algebra to see if it intersects with any triangles.
In recent years, an alternative to triangle-based ray tracing has emerged. While slightly less general, it’s fast enough to ray-trace scenes in real-time. It’s been popular in the hobbyist community for being able to produce cool results with few lines of code. The key is to use something called a “distance field”.
A distance function $$D(x,y,z)$$ tells you, for any point $$(x,y,z)$$ in space, how far the closest object is. It doesn’t tell you what direction, just the distance to the closest object. Using such a function, however, makes ray marching really easy. You start the ray at the position of the camera. You calculate $$d = D(x,y,z)$$. This tells you that you can move the ray forward by distance $$d$$ without hitting any object. Repeat this process until you get within some epsilon distance of the object.
The code for this process is equally simple:
float castRay(vec3 pos, vec3 rayDir) {
float dist = 0.0;
for (int i = 0; i < ITERATIONS; i++)
{
float res = distanceFn(pos + rayDir * dist);
if (res < EPSILON || dist > MAX_DIST) break;
dist += res;
}
return dist;
}
How do you find these distance functions? Constructing them for some primitives is very simple. Here is a sphere and the distance function of a sphere:
float distanceSphere(vec3 pos, float radius)
{
    return length(pos) - radius;
}
You can find many more examples of primitive shapes, as well as how to combine them and modify them, on Inigo Quilez’s website.
It can be difficult, however, to create distance functions for complex shapes using only primitive operations. What people will often do in practice is to create functions that give you an estimate of the distance to the closest object. But regardless of the nature of the estimate, points where the distance function $$d(x, y, z) = 0$$ represent the surface of the object (think for a second why this is true).
For example, I’ve obtained the funky-looking shape on the left by using the distance function on the right.
float distanceFunkySphere(vec3 pos, float radius, float frequency)
{
float noise = sin(pos.x*frequency)*sin(pos.y*frequency)*sin(pos.z*frequency);
return length(pos) - radius + noise;
}
Where the original distanceSphere function gave me the exact distance to the closest object, the new distanceFunkySphere function doesn’t do that. It gives me an estimate to the distance of some surface (that’s no longer just a sphere). But my ray marching code never needed the exact distance to begin with, since it gets closer to the surface step by step.
Finally, you can render this super fast because each of these ray marching procedures on distance functions can be evaluated independently for each pixel on the screen (you just change the ray direction). This is the kind of work that the GPU is best at!
If this is new to you so far and you find this interesting, I recommend taking a look at cool demos on Shadertoy and exploring Inigo Quilez’s website who gets a lot of credit for popularizing this technique. If you’re already familiar with distance field ray marching, read on, as I’m going to get into some technical details that are important for producing good distance functions.
## Overstepping
In the example above, I multiplied some sin functions and added it to a distance function of a sphere to get a sphere with bubbles. If you’re acquainted with the hobbyist community that uses distance functions, something about it should feel familiar. You do some random math operations, get something physically incorrect, stick in some arbitrary constants for adjustment, wave your hands and say “hey, the result looks nice!”.
A friend of mine, Veit Heller, coined a good term for this: voodoo math.
Getting a physically incorrect result isn’t always a problem. You could, for example, divide a distance function by 2. It’ll never answer the question “how far is the closest point?” correctly, but at worst it’ll take twice as many steps for ray marching to hit the surface of the object.
Problems occur when your distance function gives you a result that’s too large. Then you might take a ray marching step past the surface of your object.
This kind of overstepping will either fail to render thin objects (jump past it) or render the inside of an object, leading to weird results.
To see how this is important, consider the animation below. On the left side, I’m ray marching 25% faster than the safe “speed limit” (more on that in a second). On the right side, I have the reference, correct image. If you squint your eyes and look carefully at the surface as it’s moving, you should see some “ripples” that appear at random times. That’s caused by overstepping.
I used this particular shape as an example since it’s used in the primitives demo. I noticed that the rippling happens as the object gets close to the camera.
## Preventing overstepping
How do you guarantee that overstepping won’t happen? A useful heuristic is to look at how fast the function changes. If the magnitude of the gradient of the function is less than or equal to one everywhere that it’s defined, then you will never overstep.
To get an intuition of how this is true, consider the one-dimensional case, where $$D(x)$$ is the distance to a point at $$x = 0$$. Then the line $$D(x) = x$$ is the maximum allowed value of $$D(x)$$ and has gradient (derivative) equal to 1 everywhere except at $$x = 0$$ where it’s not defined. If the gradient can only be smaller, $$D(x)$$ can also only be smaller.
This is a rather strong condition. You could have functions that are consistent distance fields (that do not overstep) and that have a large gradient in some small area that doesn’t matter, like the one below.
In that case, dividing by the gradient magnitude would be unnecessarily conservative. However, if we start adding it to other distance functions, then that small area with large gradient suddenly starts to matter.
In the case of our sin noise function, that is what we do.
Anyway, what is the gradient of our noise function? Recall that it’s defined as $$n(x, y, z) = \sin{wx}\sin{wy}\sin{wz}$$, where $$w$$ is the frequency from the code above.
Then $$\nabla n = (w\cos{wx}\sin{wy}\sin{wz}, w\sin{wx}\cos{wy}\sin{wz}, w\sin{wx}\sin{wy}\cos{wz})$$. Next we find an upper bound for the gradient magnitude: each component is $$w$$ times a product of sines and cosines, and a short computation shows that the sum of the squares of those three products is at most 1.
So the gradient of $$n(x, y, z)$$ is at most $$w$$, the noise frequency. That means that $$\frac{n(x,y,z)}{w}$$ is a distance function guaranteed not to overstep.
But wait, the distance function that we used was $$D(x,y,z) = D_{sphere}(x,y,z) + D_{noise}(x,y,z)$$!
The gradient of $$D_{sphere}$$ is exactly 1 everywhere (exercise for the reader - this is also true for all exact distance functions). We can use the triangle inequality to get that $$\norm{\nabla D(x,y,z)} \le \norm{\nabla D_{sphere}(x,y,z)} + \norm{\nabla D_{noise}(x,y,z)} = 1 + w$$. So it suffices to divide $$D(x,y,z)$$ by $$(1 + w)$$. Now, using that distance function, we are guaranteed not to overstep.
In practice, we adjust the strength of the noise component by using linear interpolation $$D(x,y,z) = (1 - ratio) * D_{sphere}(x,y,z) + ratio * D_{noise}(x,y,z) / w$$. In GLSL, this can be done with the mix function.
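To make this concrete, here is a minimal sketch of the simpler (non-interpolated) version from the previous paragraphs in Python rather than GLSL; the constants and function names are illustrative, not taken from the original demo. The noisy sphere estimate is divided by 1 + w so that its gradient magnitude is at most 1, and the marching loop from earlier then never oversteps.
import math

RADIUS, FREQ = 1.0, 10.0                        # FREQ plays the role of w
MAX_STEPS, EPSILON, MAX_DIST = 256, 1e-4, 100.0

def funky_sphere(p):
    x, y, z = p
    noise = math.sin(x * FREQ) * math.sin(y * FREQ) * math.sin(z * FREQ)
    d = math.sqrt(x * x + y * y + z * z) - RADIUS + noise
    return d / (1.0 + FREQ)                     # gradient magnitude is now at most 1

def cast_ray(origin, ray_dir):
    # ray_dir is assumed to be normalized
    dist = 0.0
    for _ in range(MAX_STEPS):
        p = tuple(o + r * dist for o, r in zip(origin, ray_dir))
        step = funky_sphere(p)
        if step < EPSILON or dist > MAX_DIST:
            break
        dist += step
    return dist

print(cast_ray((0.0, 0.0, -3.0), (0.0, 0.0, 1.0)))   # roughly 2.0 for this ray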
Here are some rules that are useful when bounding the gradients of combined distance functions:
Linearity: $$\nabla(\alpha F(\vec{v}) + \beta G(\vec{v})) = \alpha \nabla F(\vec{v}) + \beta \nabla G(\vec{v})$$
Product rule: $$\nabla(F(\vec{v}) G(\vec{v})) = F(\vec{v}) \nabla G(\vec{v}) + G(\vec{v}) \nabla F(\vec{v})$$
Chain rule: $$\nabla(g(F(\vec{v}))) = g’(F(\vec{v})) \nabla F(\vec{v})$$
Minimum: $$\norm{\nabla (\min(F(\vec{v}), G(\vec{v})))} \le \max(\norm{\nabla F(\vec{v})}, \norm{\nabla G(\vec{v})})$$ (wherever the gradient is defined)
Maximum: $$\norm{\nabla (\max(F(\vec{v}), G(\vec{v})))} \le \max(\norm{\nabla F(\vec{v})}, \norm{\nabla G(\vec{v})})$$
Absolute value: $$\norm{\nabla (\mathrm{abs}(F(\vec{v})))} = \norm{\nabla F(\vec{v})}$$.
Triangle inequality: If $$\vec{a} = \vec{b} + \vec{c}$$, then $$\norm{\vec{a}} \le \norm{\vec{b}} + \norm{\vec{c}}$$.
## Fast ray marching
The method we used to find a scaling factor for our distance function uses the maximum of the magnitude of the gradient as well as the triangle inequality. It gives us a conservative estimate, based on the potential worst cases. This means that we are probably overestimating the scaling factor and marching slower than we need to. Can we do better? See my next blog post.
https://gyre.readthedocs.io/en/v6.0/user-guide/interpreting-output.html
# Interpreting Output Files
This chapter reviews the summary and detail output files written during a GYRE run, and demonstrates how to read and plot them in Python. Further information about these files is provided in the Output Files chapter.
## File Categories
Summary files collect together global properties, such as eigenfrequencies and radial orders, of all modes found. By contrast, a detail file stores spatial quantities, such as eigenfunctions and differential inertias, for an individual mode. The choice of which specific data actually appear in output files isn’t hardwired, but rather determined by the summary_item_list and mode_item_list parameters of the &ad_output and &nad_output namelist groups. Changing these parameters allows you to tailor the files to contain exactly the data you need.
## File Formats
Summary and detail files are written by GYRE in either TXT or HDF format. Files in the TXT format are human-readable, and can be reviewed on-screen or in a text editor, whereas files in the HDF format are intended to be accessed through a suitable HDF5 interface. Unless there’s a good reason to use TXT format, HDF format is preferable; it’s portable between different platforms, and takes up significantly less disk space.
## PyGYRE
PyGYRE is a Python package, maintained separately from GYRE, providing a set of routines that greatly simplify the analysis of summary and detail files. Detailed information about PyGYRE can be found in the full documentation; here, we demonstrate how to use it to read and plot the output files from the Example Walkthrough section.
As a preliminary step, you’ll need to install PyGYRE from the Python Package Index (PyPI). This can be done using the pip command, via
pip install pygyre
(or, if PyGYRE is already installed, upgrade it with)
pip install --upgrade pygyre
## Analyzing a Summary File
With PyGYRE installed, change into your work directory and fire up your preferred interactive Python environment (e.g., Jupyter). Import PyGYRE and the other modules needed for plotting:
# Import modules
import pygyre as pg
import matplotlib.pyplot as plt
import numpy as np
(you may want to directly cut and paste this code). Next, read the summary file in the work directory into the variable s:
# Read data from a GYRE summary file (the file name below is illustrative)
s = pg.read_output('summary.h5')
The pg.read_output function is able to read both TXT- and HDF-format files, returning the data in a Table object (from the Astropy project). To inspect the data on-screen, simply evaluate the table:
# Inspect the data
s
From this, you’ll see that there are three columns in the table, containing the harmonic degree l, radial order n_pg and frequency freq of each mode found during the GYRE run.
Next, plot the frequencies against radial orders via
# Plot the data
plt.figure()
plt.plot(s['n_pg'], s['freq'].real)
plt.xlabel('n_pg')
plt.ylabel('Frequency (cyc/day)')
(the values in the freq column are complex, and we plot the real part). The plot should look something along the lines of Fig. 1.
Fig. 1
The straight line connecting the two curves occurs because we are plotting both the dipole and quadrupole modes together. To separate them, the table rows can be grouped by harmonic degree:
# Plot the data, grouped by harmonic degree
plt.figure()
sg = s.group_by('l')
plt.plot(sg.groups[0]['n_pg'], sg.groups[0]['freq'].real, label=r'l=1')
plt.plot(sg.groups[1]['n_pg'], sg.groups[1]['freq'].real, label=r'l=2')
plt.xlabel('n_pg')
plt.ylabel('Frequency (cyc/day)')
plt.legend()
The resulting plot, shown in Fig. 2, looks much better.
Fig. 2
## Analyzing a Detail File
Now let’s take a look at one of the detail files, for the mode with $$\ell=1$$ and $$n_{\rm pg}=-7$$. As with the summary file, pg.read_output can be used to read the file data into a Table object:
# Read data from a GYRE detail file (again, the file name is illustrative)
d = pg.read_output('detail.l1.n-7.h5')
Inspecting the data using
# Inspect the data
d
shows there are 7 columns: the fractional radius x, the radial displacement eigenfunction xi_r, the horizontal displacement eigenfunction xi_h, and 4 further columns storing structure coefficients (see the Detail Files section for descriptions of these data). Plot the two eigenfunctions using the code
# Plot displacement eigenfunctions
plt.figure()
plt.plot(d['x'], d['xi_r'].real, label='xi_r')
plt.plot(d['x'], d['xi_h'].real, label='xi_h')
plt.xlabel('x')
plt.legend()
Fig. 3 The radial ($$\xi_{\rm r}$$) and horizontal ($$\xi_{\rm h}$$) displacement eigenfunctions of the $$\ell=1$$, $$n_{\rm pg}=-7$$ mode, plotted against the fractional radius $$x$$. (Source)
The plot should look something along the lines of Fig. 3. From this figure, we see that the radial wavelengths of the eigenfunctions become very short around a fractional radius $$x \approx 0.125$$. To figure out why this is, we can take a look at the star’s propagation diagram:
# Evaluate characteristic frequencies
l = d.meta['l']
omega = d.meta['omega']
x = d['x']
V = d['V_2']*d['x']**2
As = d['As']
c_1 = d['c_1']
Gamma_1 = d['Gamma_1']
d['N2'] = d['As']/d['c_1']
d['Sl2'] = l*(l+1)*Gamma_1/(V*c_1)
# Plot the propagation diagram
plt.figure()
plt.plot(d['x'], d['N2'], label='N^2')
plt.plot(d['x'], d['Sl2'], label='S_l^2')
plt.axhline(omega.real**2, dashes=(4,2))
plt.xlabel('x')
plt.ylabel('omega^2')
plt.ylim(5e-2, 5e2)
plt.yscale('log')
Note how we access the mode harmonic degree l and dimensionless eigenfrequency omega through the table metadata dict d.meta. The resulting plot (cf. Fig. 4) reveals that the Brunt-Väisälä frequency squared is large around $$x \approx 0.125$$; this feature is a consequence of the molecular weight gradient zone outside the star’s convective core, and results in the short radial wavelengths seen there in Fig. 3.
Fig. 4 Propagation diagram for the $$5\,M_{\odot}$$ model, plotting the squares of the Brunt-Väisälä ($$N^{2}$$) and Lamb ($$S_{\ell}^{2}$$) frequencies versus fractional radius $$x$$. The horizontal dashed line shows the frequency squared $$\omega^{2}$$ of the $$\ell=1$$, $$n_{\rm pg}=-7$$ mode shown in Fig. 3. Regions where $$\omega^{2}$$ is smaller (greater) than both $$N^{2}$$ and $$S_{\ell}^{2}$$ are gravity (acoustic) propagation regions; other regions are evanescent. (Source)
https://math.stackexchange.com/questions/3523116/sheldon-axlers-proof-that-every-operator-on-a-complex-vector-space-has-an-eigen/3523139
# Sheldon Axler's proof that every operator on a complex vector space has an eigenvalue
Since the proof is valid for any $$v \in V$$, the proof seems to show that any vector $$v$$ is an eigenvector, which is of course not true. What is the error in this line of thought? Thanks a lot!
• You can get rid of the vector $v$ if you don't like it. Since $\mathcal L(V)$ is $n^2$-dimensional, $I,T,T^2,\ldots,T^{n^2}$ cannot be linearly independent and $p(T)=0$ for some nonzero polynomial $p$ of degree $m\le n^2$. This does not make $m\le n$ as in Axler's proof, but the rest of the argument remains essentially the same. – user1551 Jan 26 at 16:35
• The proof doesn't show that $v$ is an eigenvector... – David C. Ullrich Jan 26 at 19:43
We can have, for example, $$(T-\lambda_mI)v=u\neq0$$ and $$(T-\lambda_{m-1}I)u=0$$.
At the end of the proof it is only asserted that $$T-\lambda_i I$$ is not injective for some $$i$$. It does not give you $$(T-\lambda_i I)v=0$$, and so we cannot say that $$v$$ is an eigenvector of $$T$$ corresponding to $$\lambda_i$$.
The last statement does not mean that you can find an $$i$$ such that $$(\hat{T}-\lambda_i \hat{I})\vec{v} = 0$$ for any $$\vec{v}$$. It rather means that you can decompose your vector $$\vec{v} = a_1 \vec{u}_1 + \dots + a_m \vec{u}_m$$ so that every $$u_i$$ can find "its own" $$(\hat{T}-\lambda_i \hat{I})$$ and become zero.
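A small numerical illustration of the point made in these answers (the matrix below is a made-up example, not one from Axler's book): the product of the factors annihilates every vector, yet the chosen vector is not an eigenvector; it is only after one factor has been applied that the other factor sends the result to zero.
import numpy as np

T = np.array([[1.0, 1.0],
              [0.0, 2.0]])   # eigenvalues 1 and 2
I = np.eye(2)

v = np.array([0.0, 1.0])     # not an eigenvector of T
print(T @ v)                 # [1. 2.], not a multiple of v

u = (T - 2 * I) @ v          # apply one factor: the result is nonzero
print(u)                     # [1. 0.]
print((T - 1 * I) @ u)       # [0. 0.], so T - 1*I is not injective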
https://schulte-mecklenbeck.com/post/2017-11-13-professor-priming-or-not/
# Professor priming - or not
This was my first contribution to a Registered Replication Report (RRR). Being one of 40 participating labs was an interesting exercise – it might seem straightforward to run the same study in different labs, but we learned that such small things as ü, ä and ö can generate a huge amount of problems and work (read this if you are into these kinds of things).
Here is one of the central results:

(figure omitted)

So overall not a lot of action … our lab was actually the one with the largest effect size (in the predicted direction).

Here is the abstract of the whole paper, and here the [Commentary by Ap Dijksterhuis](https://www.psychologicalscience.org/redesign/wp-content/uploads/2017/11/Dijksterhuis_RRRcommentary_ACPT.pdf) (naturally, he sees things a bit differently): Dijksterhuis and van Knippenberg (1998) reported that participants primed with an intelligent category ("professor") subsequently performed 13.1% better on a trivia test than participants primed with an unintelligent category ("soccer hooligans"). Two unpublished replications of this study by the original authors, designed to verify the appropriate testing procedures, observed a smaller difference between conditions (2-3%) as well as a gender difference: men showed the effect (9.3% and 7.6%) but women did not (0.3% and -0.3%). The procedure used in those replications served as the basis for this multi-lab Registered Replication Report (RRR). A total of 40 laboratories collected data for this project, with 23 laboratories meeting all inclusion criteria. Here we report the meta-analytic result of those 23 direct replications (total N = 4,493) of the updated version of the original study, examining the difference between priming with professor and hooligan on a 30-item general knowledge trivia task (a supplementary analysis reports results with all 40 labs, N = 6,454). We observed no overall difference in trivia performance between participants primed with professor and those primed with hooligan (0.14%) and no moderation by gender.
https://davidyat.es/2016/07/27/writing-a-latex-macro-that-takes-a-variable-number-of-arguments/
Writing a LaTeX macro that takes a variable number of arguments
LaTeX is the document preparation system of choice for middle-aged computer scientists. Despite its dense, esoteric and downright old-fashioned syntax and general workings, it’s probably still the best way to prepare and typeset complex documents, provided you’re prepared to learn and struggle a lot up front to get your tools and templates set up correctly (and then have everything work forever).1
One of the nifty things LaTeX provides is the ability to define custom macros. This allows you to do neat, effort-saving things like this:
\newcommand{\longestmovietitle}{Night Of The Day Of The Dawn Of The Son Of The Bride Of The Return Of The Revenge Of The Terror Of The Attack Of The Evil, Mutant, Hellbound, Flesh-Eating, Crawling, Alien, Zombified, Subhumanoid Living Dead — Part 5}
%[...]
And so, in summary, \longestmovietitle{} is not actually the longest movie. \longestmovietitle{} actually only has a running time of 96 minutes, making it a very slightly more credible contender for the title of shortest movie (which it also isn't).
Basically, the idea is to allow you to use a short command as a placeholder for something longer and more complicated.
You can also define macros with arguments, like this:
\newcommand{\ofthe}[3]{#1 of the #2 of the #3}
%[...]
My movie is called \ofthe{Revenge}{Return}{Retribution}. It will be a prequel to the critically acclaimed \ofthe{Venge}{Turn}{Tribution}.
…which will produce the output:
My movie is called Revenge of the Return of the Retribution. It will be a prequel to the critically acclaimed Venge of the Turn of the Tribution.
As you may have deduced, the [3] in that \newcommand invocation specifies the number of arguments expected by the macro. Change it to a [2] or a [4] without making corresponding changes to the macro itself, and everything breaks. Good job.
This is all well and good, but what if you want to make a macro that takes any number of arguments? Let’s say you have a favourite list format that goes like this:
Shopping list: eggs and also bread and also milk and that’s all!
You’d want to be able to display that list by writing some LaTeX like this:
\shoppinglist{eggs}{bread}{milk}
But tomorrow your needs would be different, so you’d like to be able to also do this:
\shoppinglist{milk}{bread}
Or this:
\shoppinglist{milk}{bread}{cheese}{the Sacred Tome of all Knowledge}
Or anything, really!
This is possible in LaTeX, but the complexity of the code required takes a jump from what you’ve seen before. To get it working, we’ll need to dive into the depths of TeX, the typesetting system to which LaTeX is a mere front-end.
At a high level, we’re going to need to write a macro that does two things: (1) displays the first list item and (2) manually moves the parser head to the next character in the text so that it can consume the next argument (stuff like this reminds you that TeX was initially released in the 70s). Simple enough!
The operative command here is going to be the TeX built-in \@ifnextchar, which checks what the next character in the text is and does something accordingly. An example:
\@ifnextchar[{It was an open square bracket!}{It wasn't an open square bracket!}
Because \@ifnextchar contains an @ symbol, which normally isn’t allowed in LaTeX macro names, the first thing we need to do is enclose our macro definition code between two special commands which temporarily lift this restriction.
\makeatletter
% This code can use @ in macro names.
\makeatother
Now we need to make our macro. This first version will only display one list item.
\makeatletter
\newcommand{\shoppinglist}[1]{%
Shopping list: #1 and that's all!}
\makeatother
That’s pretty lame, so let’s add the rest of the items. As a first step towards doing this, we’ll need to define \checknextarg, a macro which will make sure there actually is more than one argument.
\newcommand{\checknextarg}{\@ifnextchar\bgroup{\gobblenextarg}{ and that's all!}}
We can’t use { as a literal character for \@ifnextchar to check, so we say \bgroup, which means literal {. This code checks if there is a { character after the } character closing off the macro’s first (and last official) argument. If there is, it calls \gobblenextarg to consume it, else it ends the list.
Our definition of \gobblenextarg looks like this:
\newcommand{\gobblenextarg}[1]{ and also #1\@ifnextchar\bgroup{\gobblenextarg}{ and that's all!}}
It takes one argument (a single list item), and then does \checknextarg’s job again. If another { is found, it recursively calls itself. Otherwise it ends the list.
Finally, we replace the end of the list in \shoppinglist with a call to \checknextarg. Here’s our complete code, with some examples underneath:
\documentclass{article}
\makeatletter
\newcommand{\shoppinglist}[1]{%
Shopping list: #1\checknextarg}
\newcommand{\checknextarg}{\@ifnextchar\bgroup{\gobblenextarg}{ and that's all!}}
\newcommand{\gobblenextarg}[1]{ and also #1\@ifnextchar\bgroup{\gobblenextarg}{ and that's all!}}
\makeatother
\begin{document}
\shoppinglist{eggs}\par
\shoppinglist{milk}{bread}\par
\shoppinglist{milk}{bread}{cheese}{the Sacred Tome of all Knowledge}\par
\end{document}
http://math.stackexchange.com/tags/divisibility/info
Tag Info
This tag is for questions about divisibility, that is, determining when one thing is a multiple of another thing.
If $a$ and $b$ are integers, $a$ divides $b$ if $b=ca$ for some integer $c$. This is denoted $a\mid b$. It is usually studied in introductory courses in number theory, so add (elementary-number-theory) tag, if appropriate.
A common notation used for the phrase "$a$ divides $b$" is $a|b$. It is also common to negate the notation by adding a slash like this: "$c$ does not divide $d$" written as $c\nmid d$. Note that the order is important: for example, $2|4$ but "$4\nmid 2$".
This notion can be generalized to any ring. The definition is the same: For two elements $a$ and $b$ of a commutative ring $R$, $a$ divides $b$ if $ac=b$ for some $c$ in $R$.
Divisibility in commutative rings corresponds exactly to containment in the poset of principal ideals. That is, $a$ divides $b$ if and only if $bR\subseteq aR$. For commutative rings like principal ideal rings, this means that divisibility mirrors exactly the poset of all ideals of the ring.
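For example, in the ring $\mathbb{Z}$ we have $2\mid 6$, and correspondingly $6\mathbb{Z}\subseteq 2\mathbb{Z}$: every multiple of $6$ is also a multiple of $2$.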
The topics appropriate for this tag include, for example:
• Questions about the relation $\mid$.
• Questions about g.c.d and l.c.m.
https://math.stackexchange.com/questions/790847/a-consistent-first-order-theory-whose-impredicative-second-order-variant-is-inco
# A consistent first-order theory whose impredicative second-order variant is inconsistent
Let's assume that we have a consistent first-order theory, which was derived from a second order theory by replacing universal quantification over second order variables by axiom schemes for first-order definable predicates. Now let's compare this theory to two different second-order theories using Henkin semantics with suitable comprehension axiom schemes.
1. If the comprehension axiom scheme allows quantification only over the first-order variables, can the resulting second-order theory be inconsistent? I guess the answer is no, and the resulting theory will be equivalent to the first-order theory with respect to provability of first-order formulas.
2. If the comprehension axiom scheme allows quantification over both first and second-order variables, we can no longer be sure that the resulting second order theory is consistent. Is there a simple example, where the first-order theory is "provably consistent", and the second-order theory is "provably inconsistent"?
As already clarified in the comments, "provably (in)consistent" just means that assuming ZFC (or any other foundation) is fine, there is no need to restrict answers to syntactic derivations.
• If "the second-order theory is" inconsistent then "the second-order theory is 'provably inconsistent'". $\hspace{.49 in}$ – user57159 May 11 '14 at 21:15
• @RickyDemer By "provably inconsistent", I just mean that I don't insist on a possibly extremely long syntactic derivation of $\phi\land\lnot\phi$, but that a "short" proof (possibly assuming ZFC) that such a derivation exist is also sufficient. – Thomas Klimpel May 11 '14 at 21:21
• While I don't know the answer to your question, here's a nice example. Consider that we work in the theory $\sf ZFC$ augmented by the assumptions of $\rm Con\sf (ZFC)$ and $\lnot\rm Con\sf (ZFC+\rm Con\sf (ZFC))$. Then there is a model of $\sf ZFC$, therefore there is a model of $\sf NBG$; but there is no model of $\sf MK$ since that would imply much more in terms of consistency strength (and there's also no model of $\sf ZFC_2$, since that would require an inaccessible cardinal to exist). – Asaf Karagila May 16 '14 at 5:21
• @AsafKaragila Thanks, this really helped. I see now that the main obstacle to the second part of my question is the word "simple". If I drop it, I can just fix a Gödel numbering, and add $\lnot\mathsf{Con(PA)}$ as a single axiom to $\mathsf{PA}$. My best guess for an answer is then probably that either a "simple" independent statement for PA like Goodstein's theorem can be expressed as a single formula (so that I can add the negation of that formula as an axiom), or to see whether Robinson arithmetic $\mathsf{Q}$ allows for a simple formula (provably) independent of $\mathsf{Q}$. – Thomas Klimpel May 17 '14 at 12:27
• $(\forall y)(\exists x)(x+x = y \: \lor \: S(x+x) = y) \;\;$ is a simple formula that is provably independent of Q. $\hspace{1.01 in}$ – user57159 Aug 11 '14 at 6:27
https://www.projecteuclid.org/euclid.aos/1051027879
## The Annals of Statistics
### Current status and right-censored data structures when observing a marker at the censoring time
#### Abstract
We study nonparametric estimation with two types of data structures. In the first data structure n i.i.d. copies of $(C,N(C))$ are observed, where N is a finite state counting process jumping at time-variables of interest and C a random monitoring time. In the second data structure n i.i.d. copies of $(C\wedge T,I(T\leq C),N (C\wedge T))$ are observed, where N is a counting process with a final jump at time T (e.g., death). This data structure includes observing right-censored data on T and a marker variable at the censoring time.
In these data structures, easy to compute estimators, namely (weighted)-pool-adjacent-violator estimators for the marginal distributions of the unobservable time variables, and the Kaplan-Meier estimator for the time T till the final observable event, are available. These estimators ignore seemingly important information in the data. In this paper we prove that, at many continuous data generating distributions the ad hoc estimators yield asymptotically efficient estimators of $\sqrt{n}$-estimable parameters.
#### Article information
Source
Ann. Statist., Volume 31, Number 2 (2003), 512-535.
Dates
First available in Project Euclid: 22 April 2003
https://projecteuclid.org/euclid.aos/1051027879
Digital Object Identifier
doi:10.1214/aos/1051027879
Mathematical Reviews number (MathSciNet)
MR1983540
Zentralblatt MATH identifier
1039.62095
Subjects
Primary: 62G07: Density estimation
Secondary: 62F12: Asymptotic properties of estimators
#### Citation
Van der Laan, Mark J.; Jewell, Nicholas P. Current status and right-censored data structures when observing a marker at the censoring time. Ann. Statist. 31 (2003), no. 2, 512--535. doi:10.1214/aos/1051027879. https://projecteuclid.org/euclid.aos/1051027879
https://math.stackexchange.com/questions/661177/clarification-for-this-combinations-permutations-problem
# Clarification for this combinations/permutations problem?
I've been going through a list of poker hands and their descriptions, and then attempting to calculate their probabilities by first calculating the number of possible hands for the given hand.
I tried to do the Two Pair hand, which is a hand where you have 2 cards of the same value, and another 2 cards of the same value but different from the previous pair, and one card of a value different from the pairs (e.g. $3\heartsuit 3\spadesuit\; 4\clubsuit 4\spadesuit\; 10\heartsuit$); I got the wrong answer:
But why is my approach wrong? I thought of it as choosing a value out of 13 possible values (A, K, J, 10, ...), then choosing 2 of its cards; then choosing another value from the remaining 12 values, and another 2 cards. But this isn't the same as choosing a pair of values out of 13 and then choosing the 4 cards, as it appears in the correct answer. I can't see how that makes a difference intuitively... what's the difference here?
Your first way double counts. It counts two Jacks, and two $5$'s, and some useless card, as different from two $5$'s, and two Jacks, and some useless card.
For Jack is one of the $\binom{13}{1}$ kinds that you chose "first," and $5$ is among the $\binom{12}{1}$ kinds that you chose "second." But $5$ is among the $\binom{13}{1}$ kinds that you chose "first," and Jack is among the $\binom{12}{1}$ kinds that you chose "second."
Note that this issue does not arise with a full house, for three Jacks and two $5$'s is a different hand than three $5$'s and two Jacks.
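For concreteness (the numbers here are a standard computation, not part of the original post): the first approach counts $\binom{13}{1}\binom{4}{2}\binom{12}{1}\binom{4}{2}\cdot 44 = 247104$ hands, while choosing the two values together gives $\binom{13}{2}\binom{4}{2}^2\cdot 44 = 123552$, exactly half as many, because each two-pair hand is counted once for every ordering of its two paired values.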
http://blog.tmorris.net/posts/finding-the-levenshtein-distance-in-scala/index.html
λ Tony's blog λ
The weblog of Tony Morris
# Finding the Levenshtein Distance in Scala
Posted on April 24, 2008, in Programming
In the spirit of esoteric code snippets like this one, I thought I’d put my two bob in :)
I have written the Levenshtein Distance algorithm using Scala below. The Levenshtein Distance algorithm is a Dynamic Programming Algorithm (DPA). This implementation is a little different to the Python one (which creates the arrays explicitly and fills them using loops):
• The code more closely represents the mathematical definition of the algorithm
• The code is easier to reason about because the destructive updates occur behind the scenes (in the memoisation library)
• The code below requires a third party library (Scalaz); note that Scalaz comes with demo code including the Levenshtein distance and other DPAs
• The code has a better complexity than the typical loopy version by using lazy evaluation (notice that ‘c’ is not always evaluated)
• While the code memoises with an array, you could use say, a map and save some space as well
• The code builds the call stack as it traverses the matrix (the loopy one does not)
import scalaz.memo.Memo
import scalaz.memo.SizedMemo.arraySizedMemo
object Levenshtein {
  def levenshtein[A](x: Array[A], y: Array[A]): Int = {
    val im = arraySizedMemo
    val m = im[Memo[Int, Int]](x.length + 1)
    // the call matrix
    def mx(i: Int, j: Int): Int = if(i == 0) j else if(j == 0) i else {
      def f = (n: Int) => im[Int](y.length + 1)
      val a = m(f)(i - 1)(mx(i - 1, _))(j) + 1
      val b = m(f)(i - 1)(mx(i - 1, _))(j - 1) + (if(x(i - 1) == y(j - 1)) 0 else 1)
      lazy val c = m(f)(i)(mx(i, _))(j - 1) + 1
      if(a < b) a else if(b <= c) b else c
    }
    mx(x.length, y.length)
  }

  def main(args: Array[String]) =
    println(levenshtein(args(0).toCharArray, args(1).toCharArray))
}
To run this code:
$ wget http://projects.workingmouse.com/public/scalaz/artifacts/2.4/scalaz.jar # download Scalaz 2.4
$ scalac -classpath scalaz.jar Levenshtein.scala # compile
$ scala -classpath .:scalaz.jar Levenshtein algorithm altruistic # find the distance!
6
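For comparison, here is a minimal sketch of the "loopy" Python version alluded to at the start of the post (the standard textbook formulation, not code from the post): it fills the dynamic-programming table row by row with explicit loops.
def levenshtein(x, y):
    # prev holds the previous row of the DP table
    prev = list(range(len(y) + 1))
    for i in range(1, len(x) + 1):
        cur = [i] + [0] * len(y)
        for j in range(1, len(y) + 1):
            cost = 0 if x[i - 1] == y[j - 1] else 1
            cur[j] = min(prev[j] + 1,          # deletion
                         cur[j - 1] + 1,       # insertion
                         prev[j - 1] + cost)   # substitution
        prev = cur
    return prev[len(y)]

print(levenshtein("algorithm", "altruistic"))  # prints 6, matching the run above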
http://mailman.ntg.nl/pipermail/ntg-context/2004/006551.html
# [NTG-context] The last line
Hans Hagen pragma at wxs.nl
Wed Aug 11 09:09:14 CEST 2004
Steffen Wolfrum wrote:
>that means \placefigure{top,left}, WITH text around it, won't be doable in ConTeXt?
>not even in your idea of ConTeXt4?
>
>
never say never, so far i managed to do more than i expected to be possible with tex so ...
Hans
-----------------------------------------------------------------
Ridderstraat 27 | 8061 GH Hasselt | The Netherlands
tel: 038 477 53 69 | fax: 038 477 53 74 | www.pragma-ade.com
| www.pragma-pod.nl
-----------------------------------------------------------------
http://gmatclub.com/forum/during-a-single-hour-of-a-pledge-drive-for-a-public-radio-92544.html
# During a single hour of a pledge drive for a public radio
During a single hour of a pledge drive for a public radio station, anyone making a pledge of a stated amount was given a free gift. Pledges were encouraged by the announcement that the retail cost of the gift was equal to the amount of the pledge. Yet, at the end of the hour, the total money raised from pledges accounted for a larger dollar amount than the amount organizers had paid for all the free gifts.
Which of the following, if true, is the best explanation for the fact that funds raised from pledges during the hour accounted for more money than the cost of the free gifts?
A)The cost of postage was included in the total cost assigned to the gifts, making them seem more expensive to potential donors.
B)Organizers underestimated the amount of money that would be raised during the hour and were surprised by the actual total of pledges.
C)Organizers overestimated the number of donors who would respond to the offer and were forced to offer gifts at half price when there were fewer pledges than expected.
D)Free gifts were donated by a sponsor, eliminating the need to subtract the cost of them from the total money raised through pledges.
E)More money was raised during this hour than during the previous three hours, driving down the average out-of-pocket cost of the free gifts.
Re: Pledge [#permalink] 11 Apr 2010, 19:02
It's D. If the organizer does not need to pay the cost of the gifts, whatever fund is raised the radio station gets to keep. So whatever they raise, the station will always be at a profit.
Re: Pledge [#permalink] 11 Apr 2010, 20:58
OA is D
During a single hour of a pledge drive for a public radio station, anyone making a pledge of a stated amount was given a free gift. Pledges were encouraged by the announcement that the retail cost of the gift was equal to the amount of the pledge. Yet, at the end of the hour, the total money raised from pledges accounted for a larger dollar amount than the amount organizers had paid for all the free gifts.
if we observe the last sentence it clearly says that the gifts are bought...
even though option D is very convincing, I eliminated D thinking they are bought and cannot be gifted.....
Can someone explain this!!!!
Re: Pledge [#permalink] 11 Apr 2010, 23:49
RaviChandra wrote:
OA is D
During a single hour of a pledge drive for a public radio station, anyone making a pledge of a stated amount was given a free gift. Pledges were encouraged by the announcement that the retail cost of the gift was equal to the amount of the pledge. Yet, at the end of the hour, the total money raised from pledges accounted for a larger dollar amount than the amount organizers had paid for all the free gifts.
if we observe the last sentence it clearly says that the gifts are bought...
even though option D is very convincing, I eliminated D thinking they are bought and cannot be gifted.....
Can someone explain this!!!!
Let me try..!!
Organizers have to give these gifts to the pledge makers, and any pledge maker will get the "free gift" equivalent to the pledge made.
So, say there are 10 pledge makers and each made a pledge of $10; the total worth of gifts to be given by the organizers should be $100.
But say the organizers bought only 8 gifts themselves and 2 were offered by sponsors. So, effectively, the organizers paid less: funds raised from pledges during the hour accounted for more money than the cost of the "free gifts" PAID by the organizers.
Hope it helps..
Re: Pledge [#permalink] 12 Apr 2010, 06:07
We should ask NPR if the scenario is possible
I selected D, but yes, it contradicts the premise. Nice attempt to explain, Nverma, but I don't think that's enough. "Free gifts were donated by a sponsor" could be read either way.
Re: Pledge [#permalink] 12 Apr 2010, 07:26
Yes verma, nice attempt to explain but... I'm somehow still not convinced with the answer
Re: Pledge [#permalink] 12 Apr 2010, 11:45
very unusual option as OA.
I thought "Free gifts were donated by a sponsor" is a trap....
Manhattan GMAT Instructor
Re: Pledge [#permalink] 12 Apr 2010, 13:19
Hey All,
I got a request to weigh in on this one, but to be honest, I'm unsure where the confusion is. People keep saying that the answer goes against a written premise. It doesn't in the slightest:
During a single hour of a pledge drive for a public radio station, anyone making a pledge of a stated amount was given a free gift. Pledges were encouraged by the announcement that the retail cost of the gift was equal to the amount of the pledge. Yet, at the end of the hour, the total money raised from pledges accounted for a larger dollar amount than the amount organizers had paid for all the free gifts.
This is an "Explain the Discrepancy" question, so all we need to do is write it in our own words.
Discrepancy: Station gives away gift that retails for the same amount as associated pledge, yet station makes profit. How?
Explanation: Pretty obvious that the gifts were donated. This is how the majority of charity auctions are run.
A)The cost of postage was included in the total cost assigned to the gifts, making them seem more expensive to potential donors.
PROBLEM: This doesn't address the issue. Even if cost of postage is included, the station's outlay = pledges if they had to buy the gifts.
B)Organizers underestimated the amount of money that would be raised during the hour and were surprised by the actual total of pledges.
C)Organizers overestimated the number of donors who would respond to the offer and were forced to offer gifts at half price when there were fewer pledges than expected.
PROBLEM: This wouldn't help at all. In fact, it looks like it would hurt the discrepancy.
D)Free gifts were donated by a sponsor, eliminating the need to subtract the cost of them from the total money raised through pledges.
ANSWER: People keep saying this goes against the premise. But the premise never says the gifts were bought. It says "total money raised from pledges accounted for a larger dollar amount than the amount organizers had paid for all the free gifts." That does not imply that they bought them at all. It just says they made more money than they paid. Even if the amount they paid was $0, this would remain a true premise. I see no contradiction. : )
E) More money was raised during this hour than during the previous three hours, driving down the average out-of-pocket cost of the free gifts.
PROBLEM: Same problem as B.
Hope that helps! -t
Tommy Wallach | Manhattan GMAT Instructor | San Francisco
Re: Pledge [#permalink] 12 Apr 2010, 18:35
Thanks TommyWallach, now it's very clear.
Re: Pledge [#permalink] 07 Jul 2010, 01:31
RaviChandra wrote: Thanks TommyWallach, now it's very clear.
"Total money raised from pledges accounted for a larger dollar amount than the amount organizers had paid for all the free gifts." Think about it: they may have bought all the gifts at a discount from a wholesaler. Anything that gives you a clue about how they received those "gifts" more cheaply (for free, or paid) has to attract your attention.
Re: Pledge [#permalink] 07 Jul 2010, 03:33
Option D completely resolves the discrepancy found here.
Re: Pledge [#permalink] 07 Jul 2010, 14:04
I agree, D seems to be the most probable.
Re: Pledge [#permalink] 10 Jul 2010, 21:56
yup.... chose A
Re: Pledge [#permalink] 11 Jul 2010, 20:44
D
Re: Pledge [#permalink] 03 Aug 2010, 12:29
I was so confused among A, B and D... finally went with B... I wouldn't have initially agreed, but Tommy's explanation is good. Thanks
Re: Pledge [#permalink] 04 Aug 2010, 22:06
"Had paid" does not equal "bought" ---- actually I read this and was reminded of the stupid joke some people say, when they got something for "free ninety nine" - that's how much they paid (free), not how much they bought it for...
Re: Pledge [#permalink] 06 Aug 2010, 07:10
Good one. Went for D!
Re: Pledge [#permalink] 06 Aug 2010, 07:31
Answer should be D, as it clearly gives details about cost of gifts = 0 and thereby proving that
cost to get gifts < amount of money from pledges.
Re: Pledge [#permalink] 11 Sep 2010, 07:41
D coz organizers paid nothing
Re: Pledge [#permalink] 11 Sep 2010, 16:49
I know this may be a dumb question... The passage clearly says "the total money raised from pledges accounted for a larger dollar amount than the amount organizers had paid for all the free gifts."
So it says that they had paid something for the gifts.... So how can we assume an answer which is related to donations? Please explain?
|
2015-07-01 13:01:19
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.309253454208374, "perplexity": 6421.979574021704}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-27/segments/1435375094931.19/warc/CC-MAIN-20150627031814-00136-ip-10-179-60-89.ec2.internal.warc.gz"}
|
https://in.mathworks.com/help/physmod/sps/ref/inductionmachinescalarcontrol.html
|
# Induction Machine Scalar Control
Induction machine V/f control
• Library:
• Simscape / Electrical / Control / Induction Machine Control
## Description
The Induction Machine Scalar Control block implements an induction machine scalar (that is, V/f or V/Hz) control structure. The diagram shows the open-loop V/f control structure that the block implements.
### Equations
The Induction Machine Scalar Control block computes the magnitude of the stator voltage based on the reference frequency, $f_s^*$, as:
$$V_s^* = \left(\frac{V_n - V_{min}}{f_n - f_{min}}\right) f_s^*,$$
where:
• Vn is the rated voltage.
• Vmin is the minimum voltage.
• fn is the rated electrical frequency.
• fmin is the minimum frequency.
The voltage components in the stationary reference frame are:
$$V_\alpha = V_s^* \cos\left(2\pi f_s^* t\right)$$
and
$$V_\beta = V_s^* \sin\left(2\pi f_s^* t\right).$$
The block obtains Vabc from Vα and Vβ by using an inverse Clarke transformation.
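The equations above are easy to prototype outside Simulink. Here is a rough Python sketch, not the block's implementation; the rated values used as defaults below are placeholder assumptions:

    import numpy as np

    def scalar_control(f_s_ref, t, V_n=400.0, f_n=50.0, V_min=0.0, f_min=0.0):
        """Open-loop V/f control: reference frequency -> three-phase voltage references."""
        # V/f law: stator voltage magnitude proportional to the reference frequency
        V_s = (V_n - V_min) / (f_n - f_min) * f_s_ref
        # Stationary (alpha-beta) frame components
        V_alpha = V_s * np.cos(2 * np.pi * f_s_ref * t)
        V_beta  = V_s * np.sin(2 * np.pi * f_s_ref * t)
        # Inverse Clarke transformation to phase voltages
        V_a = V_alpha
        V_b = -0.5 * V_alpha + (np.sqrt(3) / 2) * V_beta
        V_c = -0.5 * V_alpha - (np.sqrt(3) / 2) * V_beta
        return V_a, V_b, V_c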
## Ports
### Input
Reference electrical frequency.
Data Types: `single` | `double`
### Output
Reference phase voltages.
Data Types: `single` | `double`
## Parameters
Nominal frequency.
Nominal voltage.
Lower bound for the voltage.
Time, in s, between consecutive block executions. During execution, the block produces outputs and, if appropriate, updates its internal state. For more information, see What Is Sample Time? (Simulink) and Specify Sample Time (Simulink).
If this block is inside a triggered subsystem, inherit the sample time by setting this parameter to `-1`. If this block is in a continuous variable-step model, specify the sample time explicitly using a positive scalar.
|
2020-04-09 11:16:00
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 4, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.792009174823761, "perplexity": 14756.61332238586}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-16/segments/1585371833063.93/warc/CC-MAIN-20200409091317-20200409121817-00005.warc.gz"}
|
http://theory.cs.uni-bonn.de/Zope/csreports/report_1991/paper_8561/abstract.html
|
Institut für Informatik, Abteilung V, Universität Bonn — CS-Reports 1991, Report 8561
Some Computational Problems in Linear Algebra as Hard as Matrix Multiplication
Peter Bürgisser, Marek Karpinski, Thomas Lickteig
We define the complexity of a computational problem given by a relation using the model of a computation tree with the Ostrowski complexity measure. To a sequence of problems we assign an exponent, similar to the one for matrix multiplication. For the complexity of the following computational problems in linear algebra
• $KER_n$: Compute a basis of the kernel for a given $n \times n$ matrix.
• $OGB_n$: Find an invertible matrix that transforms a given symmetric $n \times n$ matrix to diagonal form.
• $SPR_n$: Find a sparse representation of a given $n \times n$ matrix.
we prove relative lower bounds of the form $aM_n - b$ and absolute lower bounds $dn^2$, where $M_n$ denotes the complexity of matrix multiplication and $a, b, d$ are suitably chosen constants. We show that the exponent of the problem sequences $KER$, $OGB$, $SPR$ is the same as the exponent $\omega$ of matrix multiplication.
|
2017-10-19 14:21:15
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.98879075050354, "perplexity": 973.3190529090813}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-43/segments/1508187823309.55/warc/CC-MAIN-20171019141046-20171019161046-00360.warc.gz"}
|
https://tex.stackexchange.com/questions/341949/how-to-plot-a-network-flow-with-tikz
|
# How to plot a network flow with tikz?
I would like to produce exactly the same picture shown next.
I started by doing this
\documentclass[twocolumn, 12pt]{article}
\usepackage[latin9]{inputenc}
\usepackage{graphicx}
\usepackage{amsmath}
\usepackage{amsthm}
\usepackage{tikz}
\usepackage[margin=1in]{geometry}
\usetikzlibrary{tikzmark,calc,matrix,positioning}
\usepackage{pgfplots}
\usetikzlibrary{calc,fit,shapes.geometric}
\pgfdeclarelayer{signal}
\pgfsetlayers{signal,main}
\usetikzlibrary{chains,shapes.multipart}
\usetikzlibrary{automata,positioning}
\usetikzlibrary{arrows}% To get more arrow heads
\tikzstyle{printersafe}=[snake=snake,segment amplitude=0 pt]
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\sloppy
\pgfplotsset{compat=1.12}
\begin{document}
\begin{figure}[ht!]
\centering
\resizebox{.5\textwidth}{!}{%
\begin{tikzpicture}%[>=triangle 45]
\node[circle,draw=black, fill=gray, fill opacity = 0.3, inner
sep=0pt,minimum size=2pt] (c1) at (0,0) {};
\node[circle,draw=black, fill=gray, fill opacity = 0.3, inner
sep=0pt,minimum size=2pt] (c2) at (1,0) {};
\node[circle,draw=black, fill=gray, fill opacity = 0.3, inner
sep=0pt,minimum size=2pt] (c3) at (0,1) {};
\node[circle,draw=black, fill=gray, fill opacity = 0.3, inner
sep=0pt,minimum size=2pt] (c4) at (1,1) {};
\node[circle,draw=black, fill=gray, fill opacity = 0.3, inner
sep=0pt,minimum size=2pt] (c5) at (-0.5,0.5) {$s$};
\node[circle,draw=black, fill=gray, fill opacity = 0.3, inner
sep=0pt,minimum size=2pt] (c6) at (1.5,0.5) {$t$};
\draw [->] (c1) -- (c2);
\draw [->] (c5) -- (c1);
\draw [->] (c5) -- (c3);
\end{tikzpicture}
}
\caption{Model}
\label{fig:sys}
\end{figure}
\end{document}
but this gives me something weird.
This figure clearly does not produce the desired result. Can you help me?
• Why is it weird? Most of the nodes have no text in them (the $v_n$ nodes). And the inner sep defines the "padding" between the node content and its border. – Torbjørn T. Dec 1 '16 at 22:05
• Because I would like to produce the same thing but I did not succeed. (It is weird compared to what I would like to have not to what I wrote.) – Ribz Dec 1 '16 at 22:12
• The text $s$ and $t$ are not as they are given in the original figure. They are big. How to make them small? – Ribz Dec 1 '16 at 22:13
• Also, the arrows are not the same. How can I get the same shapes? – Ribz Dec 1 '16 at 22:14
• Yes, I see that they are different, I just thought "weird" was a weird choice of word. – Torbjørn T. Dec 1 '16 at 22:15
This is fairly close. Some notes:
• I removed everything unnecessary from the preamble, and also the figure environment and the resizebox. (Personally I think resizebox should be the last thing you do for scaling a tikz diagram, even if it is very simple.)
• The reason the node text is "large" compared to the node itself in your code is that you have inner sep=0pt,minimum size=2pt. The inner sep defines the distance from the node content to the border, and the node content is always larger than 2pt. I kept the inner sep value, but increased the minimum size to 20pt.
• Nodes are positioned relative to each other. For example, the v_2 node is placed above right=of c1, where c1 is the name of the s node. Benefits of this is that
1. you don't have to worry about absolute coordinates;
2. it becomes very easy to modify the distance between nodes: just change the values for the node distance. In node distance=0.6cm and 1.2cm the first length is the vertical distance, the second is the horizontal.
• the arrows.meta library is a replacement for the arrows library, and gives you a lot of potential for customizing arrow heads if needed (see the manual)
• The fill opacity applied to the text as well. You could remove the fill opacity, and set fill=gray!30 instead of just gray. I instead added text opacity=1, which overrides the opacity value from the fill.
• I defined a mycircle style to avoid repetition.
• Lastly I used a \foreach loop to draw the arrows. You could also have multiple \draw [myarrow] (<node 1>) node[above] {<number>} (<node 2>);, but with the loop you get less repetition.
• To draw the two parallel arrows I used the angle anchor syntax. E.g. nodename.60 is the point at the border of the node that is at an angle of 60 degrees from the horizontal.
\documentclass[border=4mm]{standalone}
\usepackage{tikz}
\usetikzlibrary{arrows.meta,positioning}
\begin{document}
\begin{tikzpicture}[
mycircle/.style={
circle,
draw=black,
fill=gray,
fill opacity = 0.3,
text opacity=1,
inner sep=0pt,
minimum size=20pt,
font=\small},
myarrow/.style={-Stealth},
node distance=0.6cm and 1.2cm
]
\node[mycircle] (c1) {$s$};
\node[mycircle,below right=of c1] (c2) {$v_2$};
\node[mycircle,right=of c2] (c3) {$v_4$};
\node[mycircle,above right=of c1] (c4) {$v_1$};
\node[mycircle,right=of c4] (c5) {$v_3$};
\node[mycircle,below right=of c5] (c6) {$t$};
\foreach \i/\j/\txt/\p in {% start node/end node/text/position
c1/c2/8/below,
c1/c4/11/above,
c2/c3/11/below,
c3/c6/4/below,
c4/c5/12/above,
c5/c6/15/above,
c5/c2/4/below,
c3/c5/7/below,
c2.70/c4.290/1/below}
\draw [myarrow] (\i) -- node[sloped,font=\small,\p] {\txt} (\j);
% draw this outside loop to get proper orientation of 10
\draw [myarrow] (c4.250) -- node[sloped,font=\small,above,rotate=180] {10} (c2.110);
\end{tikzpicture}
\end{document}
|
2019-09-15 07:31:33
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8286615014076233, "perplexity": 2296.4664290342616}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-39/segments/1568514570830.42/warc/CC-MAIN-20190915072355-20190915094355-00005.warc.gz"}
|
https://proofwiki.org/wiki/Electric_Charge/Quantum/Examples
|
Electric Charge/Quantum/Examples
$60 \ \mathrm W$ Bulb at $200 \ \mathrm V$
Consider a $60 \ \mathrm W$ light bulb running at $200 \ \mathrm V$.
Approximately $2 \times 10^{18}$ units of elementary charge flow along the filament of the light bulb every second.
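A quick check of that figure, using $I = P/V$ and the elementary charge $e \approx 1.602 \times 10^{-19} \ \mathrm C$:
$$\frac{I}{e} = \frac{(60/200) \ \mathrm A}{1.602 \times 10^{-19} \ \mathrm C} \approx 1.9 \times 10^{18} \ \text{elementary charges per second}.$$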
|
2022-01-20 20:55:26
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8943676948547363, "perplexity": 1439.10177809967}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-05/segments/1642320302622.39/warc/CC-MAIN-20220120190514-20220120220514-00640.warc.gz"}
|
https://www.physicsforums.com/threads/killing-vectors-and-geodesic-equations-for-the-schwarschild-metric.662324/
|
# Killing vectors and Geodesic equations for the Schwarzschild metric.
#### silverwhale
Hello Everybody,
Instead of solving the geodesic equations for the Schwarzschild metric, in many books (nearly in all books that I consulted), conserved quantities are looked at instead.
So take for eg. Carroll, he looks at the killing equation and extracts the equation
$$K_\mu \frac{dx^\mu}{d \lambda}= constant,$$
and he then writes:"In addition we have another constant of the motion for geodesics", and he writes the normalization condition:
$$\epsilon = -g_{\mu \nu} \frac{dx^\mu}{d \lambda} \frac{dx^\nu}{d \lambda}.$$
Now I don't understand why this set of equations is equivalent to the geodesic equations. And I do not understand why we are allowed to use these equations to extract information about the geodesics.
Maybe the questions are the same, but I hope you get my point.
Any help would be greatly appreciated!!
#### Bill_K
For most any mechanics problem, the equations of motion are second-order, and you're always welcome to solve them directly. But also in most mechanics problems there are conserved quantities such as energy and angular momentum, which mathematically are first integrals. You can derive them as a consequence of the EOM each time, or you can take a shortcut and write them down immedately, simplifying the problem.
The geodesic equations for the Schwarzschild metric have three first integrals: energy conservation corresponding to the dt Killing vector, angular momentum conservation corresponding to the dφ Killing vector, plus the norm of the velocity vector.
#### silverwhale
Thank you very much!
I definetely got the point. I was already thinking that these conservation laws are some form of integrated EOMs.
But I did never encounter this formulation, or let me say, this way of looking at the EOMs, by integrating them and then solving.
Did I miss something in my years at university as a physics student??
#### pervect
Staff Emeritus
Hello Everybody,
...
Now I don't understand why this set of equations is equivalent to the geodesic equations. And I do not understand why we are allowed to use these equations to extract information about the geodesics.
Maybe the questions are the same, but I hope you get my point.
Any help would be greatly appreciated!!
Consider the conserved energy, (1-2m/r) (dt/dtau)
Write this as a function of tau
E(tau) = (1-2m/r(tau) ) ( dt(tau) / dtau)
Then take the derivative with respect to tau, using the chain rule. You should find that dE/dtau = 0 is identical to one of the geodesic equations done via the Christoffel symbols.
$$E = \left( 1-2\,{\frac {m}{r \left( \tau \right) }} \right) {\frac {d}{d \tau}}t \left( \tau \right)$$
$$\frac{dE}{d\tau} = 2\,{\frac {m \left( {\frac {d}{d\tau}}r \left( \tau \right) \right) { \frac {d}{d\tau}}t \left( \tau \right) }{ \left( r \left( \tau \right) \right) ^{2}}}+ \left( 1-2\,{\frac {m}{r \left( \tau \right) }} \right) {\frac {d^{2}}{d{\tau}^{2}}}t \left( \tau \right)$$
So, you can verify easily enough that E is a conserved quantity given the geodesic equations.
The normalization condition is a bit sneaky - it turns out you need to assume it to derive the geodesic equations in their standard form. If you don't have the normalization condtion, it's very messy.
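That chain-rule computation is easy to check symbolically; a minimal SymPy sketch (symbol names chosen here for illustration, not part of the original thread):

    import sympy as sp

    tau, m = sp.symbols('tau m', positive=True)
    r = sp.Function('r')(tau)
    t = sp.Function('t')(tau)

    # Conserved energy associated with the timelike Killing vector of Schwarzschild
    E = (1 - 2*m/r) * t.diff(tau)

    # dE/dtau; setting this to zero reproduces the t-component geodesic equation above
    print(sp.simplify(E.diff(tau)))
    # prints an expression equivalent to 2*m*r'*t'/r**2 + (1 - 2*m/r)*t''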
#### silverwhale
Yep, checked it and you're right.
In Hartle there is a very nice quote in page 176 also:" Conservation laws, such as those for energy and angular momentum, lead to tractable problems in Newtonian mechanics. Conservation laws give first integrals of the equations of motion that can reduce the order and number of the equations that have to be solved". Exactly as mentioned by Bill K.
And basically doing the calculation as guessed by myself and proposed by pervect indeed yields to the geodesic equations in their "standard form".
#### stevendaryl
Staff Emeritus
Hello Everybody,
Instead of solving the geodesic equations for the Schwarzschild metric, in many books (nearly in all books that I consulted), conserved quantities are looked at instead.
So take for eg. Carroll, he looks at the killing equation and extracts the equation
$$K_\mu \frac{dx^\mu}{d \lambda}= constant,$$
and he then writes:"In addition we have another constant of the motion for geodesics", and he writes the normalization condition:
$$\epsilon = -g_{\mu \nu} \frac{dx^\mu}{d \lambda} \frac{dx^\nu}{d \lambda}.$$
Now I don't understand why this set of equations is equivalent to the geodesic equations. And I do not understand why we are allowed to use these equations to extract information about the geodesics.
Maybe the questions are the same, but I hope you get my point.
Any help would be greatly appreciated!!
I don't know if what I'm going to say is redundant with what other people have said, but I want to go through it for my own benefit, if for nobody else's.
For massive particles, the proper time $\tau$ is defined by:
$\sqrt{- g_{\sigma \mu} dx^\sigma dx^\mu} = d\tau$
(Depending on the convention, the minus sign on the left-hand side may be a plus sign instead.) This implies the following identity:
1. $g_{\sigma \mu} \dfrac{dx^\sigma}{d\tau} \dfrac{dx^\mu}{d\tau} = -1$
In terms of $\tau$ the geodesic equation can be written (in a coordinate basis):
$\dfrac{d}{d\tau}(g_{\sigma \mu} \dfrac{dx^\mu}{d\tau}) - (\dfrac{1}{2} \partial_\sigma g_{\eta \mu}) \dfrac{dx^\eta}{d\tau} \dfrac{dx^\mu}{d\tau} = 0$
In the special case in which the metric is independent of the time coordinate, then the time component of the geodesic equation becomes:
$\dfrac{d}{d\tau}(g_{0 \mu} \dfrac{dx^\mu}{d\tau}) = 0$
which implies:
2. $g_{0 \mu} \dfrac{dx^\mu}{d\tau} = E$
for some constant E. So the geodesic equations agree with the Killing vector equations if we identify $K_\mu = g_{0 \mu}$ and the parameter $\lambda$ is chosen to be proportional to $\tau$:
$\lambda = \dfrac{1}{\sqrt{\epsilon}}\tau$
In terms of $\lambda$ and $K_\mu$, equations 1. and 2. become:
1'. $g_{\sigma \mu} \dfrac{dx^\sigma}{d\lambda} \dfrac{dx^\mu}{d\lambda} = -\epsilon$
2'. $K_{\mu} \dfrac{dx^\mu}{d\lambda} = \sqrt{\epsilon}E$
So the Killing vector approach is completely consistent with the geodesic equation, but is more general, since it doesn't require you to find a coordinate system in which the metric components are independent of time. Also, I think that the Killing vector equations are valid even for massless particles, while the derivation in terms of proper time only works for massive particles.
#### pervect
Staff Emeritus
I'd like to share my approach - though just the outline of it.
Let's consider only a 2d problem, with one space parameter x, and one time parameter t. And thus we can omit the usual tensor notation. And we'll do timelike geodesics, as they're the easiest.
We assume we have some path parameterized by lambda, i.e. we have $t(\lambda)$ and $x(\lambda)$
We'll let $\dot{x} = dx / d \lambda$ and $\dot{t} = dt / d \lambda$
The path that extremizes the proper time will extremize the Lagrangian (note that the metric coefficients are functions of x and t):
$$L(\lambda, t, x, \dot{t}, \dot{x}) = \sqrt{g_{00}(t,x)\dot{t}^2 + 2 g_{01} (t,x) \dot{t} \dot{x} + g_{11}(t,x) \dot{x}^2}$$
This is because ds^2 = g_00 dt^2 + 2*g_01 dt dx + g_11 dx^2, and dt = (dt/d lambda) d lambda, dx = (dx/d lambda) d lambda
Then the geodesic equations are just
$$\frac{\partial L}{\partial x} - \frac{d}{d \lambda} \left( \frac{\partial L}{\partial \dot{x}} \right) = 0$$
If you look at the terms for $\partial L / \partial \dot{t}$, you'll see that it's
$$\frac{g_{00} \dot{t} + g_{01} \dot{x}} {\sqrt{L}}$$
Now we need to take $d / d\lambda$ of the above. If we don't impose any restrictions on our curve parameterization, this becomes very difficult. But if we choose to parameterize our curve by proper time, the very troublesome $\sqrt{L}$ goes away.
It's quite a chore to work it all out, but at the end you get the geodesic equations, and an understanding of why you needed to add the constraint on your curve parameterization to make it managable.
You can find a more detailed, fleshed-out example of this approach in MTW's "Gravitation".
Last edited:
#### stevendaryl
Staff Emeritus
Then the geodesic equations are just
$$\frac{\partial L}{\partial x} - \frac{d}{d \lambda} \left( \frac{\partial L}{\partial \dot{x}} \right)$$
If you look at the terms for $\partial L / \partial \dot{t}$, you'll see that it's
$$\frac{g_{00} \dot{t} + g_{01} \dot{x}} {\sqrt{L}}$$
Now we need to take $d / d\lambda$ of the above. If we don't impose any restrictions on our curve parameteriztion, this becomes very difficult. But if we choose to parameterize our courve by proper time, the very troublesome \sqrt(L) goes away.
As someone pointed out, the results are the same as if you started with the quadratic Lagrangian
$L = \dfrac{1}{2}g_{\mu \nu} \dfrac{dx^\mu}{d\lambda} \dfrac{dx^\nu}{d\lambda}$
(with no square-root)
The thing that's weird is that the geodesic equation works for both massive and massless particles, although in the latter case, it can't be understood as the extremization of an effective Lagrangian (because L in that case is zero for a lightlike path, so any variation of the path that is still lightlike will have the same "action", namely zero).
#### pervect
Staff Emeritus
As someone pointed out, the results are the same as if you started with the quadratic Lagrangian
$L = \dfrac{1}{2}g_{\mu \nu} \dfrac{dx^\mu}{d\lambda} \dfrac{dx^\nu}{d\lambda}$
(with no square-root)
The thing that's weird is that the geodesic equation works for both massive and massless particles, although in the latter case, it can't be understood as the extremization of an effective Lagrangian (because L in that case is zero for a lightlike path, so any variation of the path that is still lightlike will have the same "action", namely zero).
If you use the squared action, but don't use an affine parameterization, you'll get wrong results. The affine parameterization is still required.
There is a mathematical trick called the "Einbein" which justifies in more detail when and why you can minimize the squared Lagrangian. It turns out you're really introducing another variable.
There's an old (short) thread in PF, where I first learned about them:
http://arxiv.org/abs/hep-ph/9708319
Einbein fields were introduced to get rid of square roots which enter the Lagrangians of relativistic systems, though at the price of introducing extra dynamical variables.
Thus
$$L = -m \sqrt{1 - \dot{x}^2}$$
can be minimized by minimizing
$$L = -\mu (1 - \dot{x}^2) / 2 - m^2 / 2 \mu$$
$\mu$ being the "einbein field".
As far as the distinction between lightlike, timelike, and spacelike geodesics, I'd have to agree that it's more obvious that the "parallel transport" technique works than the Lagrangian one. But light still follows an effective Lagrangian. See for instance http://en.wikipedia.org/w/index.php?title=Hamiltonian_optics&oldid=517305783
Keywords are "Fermat's principle", "Lagrangian optics", or "optical path length'.
Rather than extremizing proper time, light extremizes the optical path length.
#### silverwhale
I'll go through the comments this evening or tomorrow and write my results.
|
2019-10-21 04:34:25
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8888325691223145, "perplexity": 559.4074273072663}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-43/segments/1570987756350.80/warc/CC-MAIN-20191021043233-20191021070733-00198.warc.gz"}
|
http://tex.stackexchange.com/questions/84166/itemize-bullets-without-itemization
|
# Itemize bullets without itemization?
How can I produce bullets like
as they are used in beamer with itemize environments? I want to use them in the text not just for itemizations. Note: I would like the bullets not just to look more or less like the beamer itemize bullets but they should look identical in colour, shape, and size!
which theme are you using to get that style of bullet? \labelitemi should give you the commands used for a first level itemize. – David Carlisle Nov 25 '12 at 14:33
Just the standard theme together with \usepackage{beamerthemeshadow} ... \labelitemi gives error undefined control sequence !?! – lpdbw Nov 25 '12 at 14:40
sorry beamer doesn't follow the standard latex usage here, see the code in my answer which will use whichever bullet the beamer theme is using. – David Carlisle Nov 25 '12 at 14:44
\documentclass{beamer}
\makeatletter
\newcommand\mysphere{%
\parbox[t]{10pt}{\raisebox{0.2pt}{\beamer@usesphere{item projected}{bigsphere}}}}
\makeatother
\begin{document}
\begin{frame}
\mysphere test
\end{frame}
\end{document}
It seems beamer doesn't define \labelitemi but
\csname @itemlabel\endcsname
works.
Yep! Thanks. Could you briely explain the code? – lpdbw Nov 25 '12 at 14:44
Beamer uses \@itemlabel to make the label and then each theme defines that do whatever kind of marker it wants. the csname usage just means you don't need @ to be a letter, better to go \makeatletter\def\mybullet{\@itemlabel}\makeatother in the preamble so you can use \mybullet without worrying about @. – David Carlisle Nov 25 '12 at 14:48
Thx, but still a problem. Try this: \documentclass{beamer} \usepackage{beamerthemeshadow} \usepackage{xspace} \newcommand{\beamerbullet}{\makeatletter\def\mybullet{\@itemlabel}\makeatother\xspace} \begin{document} \beamerbullet abc\\ \beamerbullet cde \end{document} It won't produce any bullets!? – lpdbw Nov 25 '12 at 14:51
This (beamer-specific) solution works as well, and without @ :
\documentclass{beamer}
\begin{document}
\begin{frame}
|
2014-10-31 11:33:20
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9648069143295288, "perplexity": 7688.036378737695}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-42/segments/1414637899632.42/warc/CC-MAIN-20141030025819-00011-ip-10-16-133-185.ec2.internal.warc.gz"}
|
https://www.quantopian.com/posts/kinetic-component-analysis
|
Kinetic Component Analysis
Interesting paper from Marcos Lopez de Prado: http://papers.ssrn.com/sol3/papers.cfm?abstract_id=2422183 (Python code included). I am a novice Python coder; there are a few things I can't figure out. It calls libraries that are complicated. Someone may find the paper and the code interesting.
17 responses
Hi tovim,
have you found a practical example of KCA python code? http://www.quantresearch.info/KCA_1.py.txt
I don't know what exactly is meant in the paper by the
3 args:
Numpy array t conveys the index of observations.
Numpy array z passes the observations.
Scalar q provides a seed value for initializing the EM estimation of the states covariance
t is my time series? but what about z and q?
thanks for your help.
Regards
Great paper, thanks, not sure how I missed this.
Simon, if you found out how to use it, tell me please :)
I haven't read the paper yet, it looks like it's just a kalman filter for a model of position, velocity and acceleration of the underlying process?
Simon,
Have you tried the python code?
Yes, I have explored it.
Simon,
How do you use it? Can you paste a sample of code?
Regards
Ludo.
also played around with it but can't find what all the parameters mean
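For what it's worth, a minimal sketch of how the three arguments are usually filled in, assuming the fitKCA function posted further down in this thread (the price series here is synthetic, and q is just a guess for the seed covariance that EM then refines):

    import numpy as np

    # synthetic stand-in for a real closing-price series
    prices = 100.0 + np.cumsum(np.random.randn(500))
    t = np.arange(prices.shape[0], dtype=float)   # observation index
    z = prices                                    # observations
    q = 1e-3                                      # seed for the states covariance

    x_mean, x_std, x_covar = fitKCA(t, z, q)
    position, velocity, acceleration = x_mean[:, 0], x_mean[:, 1], x_mean[:, 2]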
# Backtest ID: 56bc6af051c47412aff18808
There was a runtime error.
Well for starters, we probably want to use filter not smooth, since we are trying to prevent forward snooping. Second, if feeding it intraday data, we need to be adjusting the H/A/Q matrices for the differing time steps overnight/weekend. In fact, in my investigation, I was trying to modify dt to be higher in the mornings and afternoons, rather than lunch time, to reflect the higher trading activity. I made another post about that I think, but nobody had any ideas. :)
Lastly, I think that the Q matrix needs to be properly specified with the correlated noise based on the position/velocity/acceleration model, rather than just a diagonal matrix with some random time-invariant q that you pick.
I will probably go back to this at some point, but I got distracted by other things.
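For reference, a sketch of what a properly specified process-noise covariance could look like for a (position, velocity, acceleration) state driven by white noise in the jerk; this is the standard continuous-time discretization result, not something taken from the paper:

    import numpy as np

    def process_noise(h, q):
        """Process-noise covariance for a (pos, vel, acc) state with white-noise jerk,
        integrated over a time step h; q is the continuous noise intensity."""
        return q * np.array([
            [h**5/20, h**4/8, h**3/6],
            [h**4/8,  h**3/3, h**2/2],
            [h**3/6,  h**2/2, h     ],
        ])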
# Here is a graphic demo for fitKCA.
# fitKCA by MLdP on 02/22/2014 <[email protected]>
# Kinetic Component Analysis
# demo by Hayek
import numpy as np
from pykalman import KalmanFilter
#-----------------------------------------------
def fitKCA(t,z,q,fwd=0):
    '''
    Inputs:
        t: Iterable with time indices
        z: Iterable with measurements
        q: Scalar that multiplies the seed states covariance
        fwd: number of steps to forecast (optional, default=0)
    Output:
        x[0]: smoothed state means of position velocity and acceleration
        x[1]: smoothed state covar of position velocity and acceleration
    Dependencies: numpy, pykalman
    '''
    #1) Set up matrices A,H and a seed for Q
    h=(t[-1]-t[0])/t.shape[0]
    A=np.array([[1,h,.5*h**2],
                [0,1,h],
                [0,0,1]])
    Q=q*np.eye(A.shape[0])
    #2) Apply the filter
    kf=KalmanFilter(transition_matrices=A,transition_covariance=Q)
    #3) EM estimates
    kf=kf.em(z)
    #4) Smooth
    x_mean,x_covar=kf.smooth(z)
    #5) Forecast
    for fwd_ in range(fwd):
        x_mean_,x_covar_=kf.filter_update(filtered_state_mean=x_mean[-1],
                                          filtered_state_covariance=x_covar[-1])
        x_mean=np.append(x_mean,x_mean_.reshape(1,-1),axis=0)
        x_covar_=np.expand_dims(x_covar_,axis=0)
        x_covar=np.append(x_covar,x_covar_,axis=0)
    #6) Std series
    x_std=(x_covar[:,0,0]**.5).reshape(-1,1)
    for i in range(1,x_covar.shape[1]):
        x_std_=x_covar[:,i,i]**.5
        x_std=np.append(x_std,x_std_.reshape(-1,1),axis=1)
    return x_mean,x_std,x_covar

def demo_KCA( ):
    """ By Hayek """
    from numpy import zeros
    from numpy.random import rand
    import matplotlib.pyplot as plt
    N_K = 100
    t = zeros( N_K )
    z = zeros( [ N_K, 3 ] )
    for k in range( N_K ):
        t[ k ] = k
        # Second order polynomial, its vel and acc; sth like x_mean.
        z[ k, : ] = [ 0.005 * k ** 2 + 0.05 * k + 0.5 + 10 * rand(1)[0],
                      0.01 * k + 0.05, 0.01 ]
    x_mean, x_std, x_covar = fitKCA( t, z[ :, 0 ], 1 )
    fig = plt.figure( )
    ax1 = fig.add_axes( [0.1, 0.1, 0.8, 0.8] )
    l_0, l_1 = ax1.plot(
        np.arange( N_K ), z[ :, 0 ], 'r' ,
        np.arange( N_K ), x_mean[ :, 0 ], 'g' )
    fig.legend( ( l_0, l_1 ),
                ( u'Noised second order polynomial',
                  u'After Kalman filter' ),
                'upper left' )

if __name__ == "__main__":
    demo_KCA( )
Applying it to any instrument (like the SPY closing price) shows velocity and acceleration equal to zero. Any idea why? The position part is working.
I read this paper last month, but have not yet coded it as I've not finished the other parts of a PCA algo.
Also, Lopez de Prado's code makes calls to a purchased library (Canopy), so I'll have to write some equivalent routines.
Note that the paper was updated in June 2016, so may be somewhat different than the version referenced in this original post.
Hayek von's code is adapted from what Lopez de Prado published here
You may find some help at that link.
If you want more specific help, then you'll have to post a back test.
Anyone figure out how to use this thing? They claim more accuracy than FFT and LOWESS, so could be very interesting.
Maybe Thomas Wiecki is looking for something to do :)
It was cool as an example of a more complex Kalman filter, but didn't work very well. A better-specified linear model would probably work better; there's no reason to assume that stocks have acceleration and velocity/momentum.
As I mentioned earlier, the matrix for the noise propagation is wrong. I am not certain whether people just use independent noise in a dependent linear system because it works just as well? But I didn't bother deriving the precise one.
This is an amazing paper. I tried it on commodity futures now that Quantopian supports futures data. Between 1.5 to 2 Sharpe based on how much vol I am willing to take. Much better than moving averages for momentum. Here is my modified code. I am using the acceleration of acceleration as well because acceleration is a trading signal in my algo.
import numpy as np
from pykalman import KalmanFilter

def fitKCA(t,z,q,fwd=0):
    '''
    Inputs:
        t: Iterable with time indices
        z: Iterable with measurements
        q: Scalar that multiplies the seed states covariance
        fwd: number of steps to forecast (optional, default=0) '''
    #1) Set up matrices A,H and a seed for Q
    h=1. / t.shape[0]
    A=np.array([[1,h,.5*h**2, 0.33*h**3],
                [0,1,h,h**2],
                [0,0,1,h],
                [0,0,0,1]])
    Q=q*np.eye(A.shape[0])
    #2) Apply the filter
    kf=KalmanFilter(transition_matrices=A,transition_covariance=Q)
    #3) EM estimates
    kf=kf.em(z)
    #4) Smooth
    x_mean,x_covar=kf.smooth(z)
    #5) Forecast
    for fwd_ in range(fwd):
        x_mean_,x_covar_=kf.filter_update(filtered_state_mean=x_mean[-1],
                                          filtered_state_covariance=x_covar[-1])
        x_mean=np.append(x_mean,x_mean_.reshape(1,-1),axis=0)
        x_covar_=np.expand_dims(x_covar_,axis=0)
        x_covar=np.append(x_covar,x_covar_,axis=0)
    #6) Std series
    x_std=(x_covar[:,0,0]**.5).reshape(-1,1)
    for i in range(1,x_covar.shape[1]):
        x_std_=x_covar[:,i,i]**.5
        x_std=np.append(x_std,x_std_.reshape(-1,1),axis=1)
    return x_mean,x_std,x_covar
Hi All,
Can you confirm that the KCA isn't causal?
Thx.
Regards.
|
2019-01-18 08:00:58
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.24422593414783478, "perplexity": 3827.5843184372284}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-04/segments/1547583659944.3/warc/CC-MAIN-20190118070121-20190118092121-00511.warc.gz"}
|
http://fredrikj.net/blog/2013/01/arb-0-4/
|
# Arb 0.4
January 26, 2013
I’ve tagged version 0.4 of Arb. Here is an overview of the changes:
• much faster fmpr_mul, fmprb_mul and set_round, resulting in general speed improvements
• code for computing the Hurwitz zeta function with derivatives
• fixed and documented error bounds for hypergeometric series
• better algorithm for series evaluation of the gamma function at a rational point
• much faster generation of Bernoulli numbers
• complex log, exp, pow, trigonometric functions (currently based on MPFR)
• complex nth roots via Newton iteration
• added code for arithmetic on fmpcb_polys
• code for computing Khinchin’s constant
• code for rising factorials of polynomials or power series
• faster sin_cos
• better div_2expm1
• many other new helper functions
• more test code for core operations
A few highlights:
### Faster multiplication
I have optimized a few low-level functions, including fmpr_mul (floating-point multiplication) and fmprb_mul (ball multiplication). This should result in much less overhead for Arb arithmetic at precisions up to several thousand bits. Here is a table of timings in nanoseconds:
| bits | mpfr_mul | fmpr_mul (new) | fmprb_mul (old) | fmprb_mul (new) |
|------|----------|----------------|-----------------|-----------------|
| 15 | 34 | 15 | 163 | 68 |
| 30 | 34 | 15 | 166 | 69 |
| 60 | 34 | 16 | 170 | 70 |
| 120 | 39 | 53 | 294 | 112 |
| 240 | 69 | 70 | 318 | 128 |
| 480 | 146 | 128 | 386 | 188 |
| 960 | 350 | 337 | 610 | 395 |
| 1920 | 888 | 1060 | 1400 | 1110 |
| 3840 | 2640 | 3210 | 3770 | 3280 |
| 7680 | 7649 | 9060 | 9750 | 9110 |
MPFR is 10-30% faster between approximately 1000 and 100000 bits thanks to using Mulder’s mulhigh (not yet implemented in Arb). However, fmprb_mul now gets within 2x of a single mpfr_mul even at very low precision (except precisely at two limbs, where some more work is needed). This basically means that ball arithmetic with the fmprb type generically becomes faster than interval arithmetic with MPFR. The overhead for fmprb_mul is now nearly as small as it can be with the current data representation (I can perhaps get rid of a few more nanoseconds by eliminating some function call overhead). Changing the representation should allow getting within 1.1x or so of mpfr_mul at any precision.
That said, a few important functions are still quite slow (including addition, division and elementary functions), and for now this will drag down performance for many computations.
As a slightly more high-level benchmark, here is how long it now takes to compute $M^{-1}$ where $M$ is a random $n \times n$ matrix with real entries:
| Digits | n | Sage 5.3 (RR) | Sage 5.3 (RIF) | Mathematica 8.0 | Arb 0.4 |
|--------|---|---------------|----------------|-----------------|---------|
| 10 | 10 | 0.719 ms | 1.16 ms | 0.88 ms | 0.34 ms |
| 100 | 10 | 0.841 ms | 1.33 ms | 0.99 ms | 0.74 ms |
| 100 | 100 | 0.618 s | 1.10 s | 0.33 s | 0.53 s |
| 1000 | 10 | 3.16 ms | 6.35 ms | 6.32 ms | 3.90 ms |
| 1000 | 100 | 2.86 s | 5.81 s | 3.26 s | 3.26 s |
The timings for Arb are already closer to those for Sage’s RR (floating-point real field) than those for RIF (real interval field), while providing the correctness guarantees of the latter. A significant portion of the Sage timings might just be Python overhead, though. I was really puzzled to find that Mathematica is much faster than Sage here; any ideas what it’s doing internally?
### Hurwitz zeta function
A new function has been added for computing $\zeta(s,a) = \sum_{k=0}^{\infty} (k+a)^{-s}$ for arbitrary complex $s, a$. In fact, the function allows computing a list of Taylor series coefficients with respect to $s$, i.e. $\zeta(s,a), \zeta’(s,a), \ldots \zeta^{(d)}(s,a) / d!$ (with provably correct error bounds). For example, setting $s = 1, a = 1$ (and removing the pole), we obtain correct values for the Stieltjes constants (or generalized Euler constants) to any desired precision. Computing the 1000 first Stieltjes constants accurately to 1000 digits takes 14 seconds, and computing the 5000 first Stieltjes constants accurately to 5000 digits takes 40 minutes; this is a thousand times or so faster than numerically evaluating the corresponding StieltjesGamma[] constants in Mathematica or repeatedly calling the stieltjes() function in mpmath. The implementation is not yet optimized for evaluating just the value $\zeta(s,a)$ (especially for large imaginary parts of $s$), and will be a bit slower than Pari or mpmath for this.
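For a quick cross-check without Arb, the same kinds of quantities can be computed (far more slowly, as noted above) with mpmath; this sketch assumes mpmath's stieltjes() and its zeta(s, a, n) derivative interface:

    from mpmath import mp, stieltjes, zeta

    mp.dps = 50
    print(stieltjes(1))    # first Stieltjes (generalized Euler) constant
    print(zeta(3, 2, 1))   # derivative of zeta(s, a) with respect to s, at s = 3, a = 2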
### Bernoulli numbers
Arb now contains a brand new module for generating Bernoulli numbers, implementing an algorithm recently described by Remco Bloemen (with some minor enhancements). The idea is to evaluate the defining sum for the zeta function and recycling powers in a table. This algorithm has two advantages: it’s faster in practice than anything else I’m aware of, and it can generate one or a few Bernoulli numbers at a time without having to store all Bernoulli numbers.
Indeed, to compute all the Bernoulli numbers up to $B_{50000}$, FLINT currently takes 830 seconds (internally using fast multimodular power series arithmetic and fast Chinese remaindering), while the new code in Arb clocks in at only 110 seconds. The interested (and patient) reader may check how long this computation takes in their computer algebra system of choice!
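To make the zeta connection concrete, here is a purely illustrative mpmath sketch of the identity the algorithm exploits for even $n$, namely $B_n = (-1)^{n/2+1}\, 2\, n!\, \zeta(n)/(2\pi)^n$ (the real implementation evaluates the defining zeta sum with recycled powers, as described above):

    from mpmath import mp, mpf, pi, factorial, nsum, inf

    mp.dps = 30

    def bernoulli_from_zeta(n):
        """Even-index Bernoulli number via the defining zeta sum (illustration only)."""
        assert n >= 2 and n % 2 == 0
        zeta_n = nsum(lambda k: mpf(1) / k**n, [1, inf])
        return (-1)**(n//2 + 1) * 2 * factorial(n) / (2*pi)**n * zeta_n

    print(bernoulli_from_zeta(10))   # should be close to 5/66 = 0.0757575...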
One important practical benefit is that the precomputation time for functions based on Euler-Maclaurin summation decreases significantly. For example, to evaluate the gamma function of a small real argument to 10000-digit accuracy, Arb now takes 2.1 seconds the first time and 0.56 seconds subsequent times (after the Bernoulli numbers have been cached); previously, the first evaluation took about 20 seconds. For comparison, Mathematica 8 takes 12.8 seconds and MPFR takes 69 seconds (neither system caches the Bernoulli numbers); mpmath takes 86 seconds the first time and 1.9 seconds subsequent times.
Unless I mixed some data up, all timings reported in this post were done on an Intel Xeon X5675 3.07 GHz processor.
http://math.stackexchange.com/questions/204936/divisibility-and-pigeonhole-principle?answertab=oldest
# Divisibility and Pigeonhole principle
Given a sequence of $p$ integers $a_1, a_2, \ldots, a_p$, show that there exist consecutive terms in the sequence whose sum is divisible by $p$. That is, show that there are $i$ and $j$, with $1 \leq i \leq j \leq p$, such that $a_i + a_{i+1} + \cdots + a_j$ is divisible by $p$.
I'm having trouble labeling which entities are the pigeons and which are the pigeonholes. I think somewhere down the line there have to be more distinct sums than $p$, but that is just a guess.
You may find of interest a similar Pigeonhole argument: if $\rm\:n\:$ is coprime to $10\,$ then every integer with at least $\rm\:n\:$ digits $\ne 0$ has a contiguous digit subsequence that forms an integer $\ne 0$ divisible by $\rm\:n$. – Bill Dubuque Sep 30 '12 at 18:29
Hint: The holes are remainders on division by $p$. Consider $\sum_{i=1}^k a_i$ for $k=1,2,3,\ldots,p$. If any are divisible by $p$, you are done. If not, you have $p$ sums with only $p-1$ allowed values of remainder.
@user1526710: Exactly. Then if the two that match are $k_1$ and $k_2$, the sum from $k_1+1$ through $k_2$ is the one you want. – Ross Millikan Sep 30 '12 at 17:47
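The hint translates directly into a constructive procedure: walk through the prefix sums modulo $p$ and stop at the first repeated remainder. A short Python sketch of this argument (my own illustration, not from the thread):

```python
def block_divisible_by_p(a, p):
    """Return (i, j), 1-indexed, such that a[i-1] + ... + a[j-1] is divisible by p.

    Pigeonhole: the p+1 prefix sums S_0, ..., S_p take at most p values mod p,
    so two of them agree, and the terms between them sum to 0 mod p.
    """
    assert len(a) >= p
    first_seen = {0: 0}              # remainder -> index of the prefix sum where it first occurred
    s = 0
    for k in range(1, p + 1):
        s = (s + a[k - 1]) % p
        if s in first_seen:          # S_k == S_i (mod p)  =>  a_{i+1} + ... + a_k == 0 (mod p)
            return first_seen[s] + 1, k
        first_seen[s] = k

seq = [3, 7, 5, 2, 9]
i, j = block_divisible_by_p(seq, 5)
print(i, j, sum(seq[i - 1:j]) % 5)   # the last value printed is 0
```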
http://www.ucalgary.ca/rzach/blog
University of Calgary
# All LogBlog Posts
## Workshop on The Notion of Proof
Submitted by Richard Zach on Mon, 03/31/2014 - 8:58am
Conflicts with Vienna Summer of Logic, but very interesting:
## Extended Deadline! CFP: Symposium on the Foundations of Mathematics
Submitted by Richard Zach on Sat, 03/29/2014 - 12:24pm
CfP from http://sotfom.wordpress.com:
Set theory is taken to serve as a foundation for mathematics. But it is well-known that there are set-theoretic statements that cannot be settled by the standard axioms of set theory. The Zermelo-Fraenkel axioms, with the Axiom of Choice (ZFC), are incomplete. The primary goal of this symposium is to explore the different approaches that one can take to the phenomenon of incompleteness.
One option is to maintain the traditional “universe” view and hold that there is a single, objective, determinate domain of sets. Accordingly, there is a single correct conception of set, and mathematical statements have a determinate meaning and truth-value according to this conception. We should therefore seek new axioms of set theory to extend the ZFC axioms and minimize incompleteness. It is then crucial to determine what justifies some new axioms over others.
Alternatively, one can argue that there are multiple conceptions of set, depending on how one settles particular undecided statements. These different conceptions give rise to parallel set-theoretic universes, collectively known as the “multiverse”. What mathematical statements are true can then shift from one universe to the next. From within the multiverse view, however, one could argue that some universes are more preferable than others.
These different approaches to incompleteness have wider consequences for the concepts of meaning and truth in mathematics and beyond. The conference will address these foundational issues at the intersection of philosophy and mathematics. The primary goal of the conference is to showcase contemporary philosophical research on different approaches to the incompleteness phenomenon.
To accomplish this, the conference has the following general aims and objectives:
1. To bring to a wider philosophical audience the different approaches that one can take to the set-theoretic foundations of mathematics.
2. To elucidate the pressing issues of meaning and truth that turn on these different approaches.
3. To address philosophical questions concerning the need for a foundation of mathematics, and whether or not set theory can provide the necessary foundation
Date and Venue: 7-8 July 2014 – Kurt Gödel Research Center, Vienna
Confirmed Speakers:
Sy-David Friedman (Kurt Gödel Research Center for Mathematical Logic),
Hannes Leitgeb (Munich Center for Mathematical Philosophy)
Call for Papers: We welcome submissions from scholars (in particular, young scholars, i.e. early career researchers or post-graduate students) on any area of the foundations of mathematics (broadly construed). Particularly desired are submissions that address the role of set theory in the foundations of mathematics, or the foundations of set theory (universe/multiverse dichotomy, new axioms, etc.) and related ontological and epistemological issues. Applicants should prepare an extended abstract (maximum 1’500 words) for blind review, and send it to sotfom [at] gmail [dot] com. The successful applicants will be invited to give a talk at the conference and will be refunded the cost of accommodation in Vienna for two days (7-8 July).
Notification of Acceptance: 30 April 2014
Scientific Committee: Philip Welch (University of Bristol), Sy-David Friedman (Kurt Gödel Research Center), Ian Rumfitt (University of Birmigham), John Wigglesworth (London School of Economics), Claudio Ternullo (Kurt Gödel Research Center), Neil Barton (Birkbeck College), Chris Scambler (Birkbeck College), Jonathan Payne (Institute of Philosophy), Andrea Sereni (Università Vita-Salute S. Raffaele), Giorgio Venturi (Université de Paris VII, “Denis Diderot” – Scuola Normale Superiore)
Organisers: Sy-David Friedman (Kurt Gödel Research Center), John Wigglesworth (London School of Economics), Claudio Ternullo (Kurt Gödel Research Center), Neil Barton (Birkbeck College), Carolin Antos (Kurt Gödel Research Center)
Conference Website: sotfom [dot] wordpress [dot] com
Claudio Ternullo (ternulc7 [at] univie [dot] ac [dot] at)
Neil Barton (bartonna [at] gmail [dot] com)
John Wigglesworth (jmwigglesworth [at] gmail [dot] com)
## Vienna Summer of Logic: Call for Volunteers
Submitted by Richard Zach on Wed, 03/26/2014 - 9:08am
## What is the Vienna Summer of Logic?
With over 2000 expected participants, the Vienna Summer of Logic 2014 (VSL) will be the largest event in the history of logic. It will consist of twelve large conferences and numerous workshops, attracting researchers from all over the world. The VSL will take place 9th-24 July 2014, at the Vienna University of Technology in Vienna, Austria.
The VSL conferences and workshops will deal with the main theme, logic, from three important aspects: logic in computer science, mathematical logic and logic in artificial intelligence. The program of the conference consists of contributed and invited research talks and includes a number of social events such as a student reception. For more information, visit http://vsl2014.at.
## Vienna Summer of Logic Student Volunteers?
The VSL is organized by the Kurt Goedel Society, and preparations for this event have started some time ago. The most critical phase in the organization of any large scientific meeting is, of course, the time of the meeting itself! To ensure that all the scientific and social meetings taking place in the course of the VSL can be conducted successfully, the organizers of the VSL need your help as a VSL volunteer.
## What are a volunteer's duties?
There are many tasks at the VSL that will be performed by volunteers, such as helping with the registration of the participants at the conference, assisting with the use of the technical infrastructure at the conference site, etc. Each volunteer will be supervised by one of the senior organizers who will be the volunteer's contact person at the conference.
## What are a volunteer's perks?
The most important benefit of volunteering is that volunteers may *attend all the VSL conferences for free*: this means that you can attend all the research talks given at the conferences, and mingle with the researchers during the coffee breaks. More precisely, your time at the VSL will be divided in the following way: 50% free time to attend lectures of your choosing, 30% fixed volunteer's duties, and 20% "standby duty". Furthermore, all volunteers may participate in the conference's *student reception* (which is a party for all the students participating at VSL), and will receive a VSL volunteer's t-shirt to be able to proudly display their participation in this event in the years to come.
## How do I become a volunteer?
Interested in becoming a Vienna Summer of Logic volunteer? Please visit
http://vsl2014.at/volunteers
for the application form. The deadline for applications is May 25, 2014. Applicants that have been chosen as volunteers will be contacted before June 1, 2014.
## Visiting Research Chair in Logic or Philosophy of Science at the University of Calgary
Submitted by rzach on Wed, 03/19/2014 - 6:27pm
US$25,000 for 4 months (September 2015 or January 2016)
Contact: Brad Hector, Fulbright Canada Program Officer (Scholars)
The University of Calgary is pleased to offer the opportunity for a Fulbright Visiting Research Chair in Logic or the Philosophy of Science. The visiting researcher will be a part of the Department of Philosophy and collaborate with a dynamic research faculty and graduate students. The Department of Philosophy is internationally recognized in logic and the philosophy of science and home to 22 professors, including a Tier 1 Canada Research Chair in the philosophy of biology. The scholar will offer a combined seminar for senior undergraduate students and graduate students in his or her area of expertise, and will participate in departmental and interdisciplinary research groups while pursuing his or her own research projects.
Specialization: History and philosophy of science, mathematical and philosophical logic.
## Leslie Lamport wins Turing Award
Submitted by rzach on Tue, 03/18/2014 - 9:16am
The Association for Computing Machinery has awarded the 2013 Turing Award (the Computer Science equivalent of the Nobel Prize Fields Medal Schock Prize) to Leslie Lamport at Microsoft Research for his work on formal specification and verification techniques, specifically the Temporal Logic of Actions and his work on fault tolerance in distributed systems. Not as close to logic as some other Turing Laureates (is that what they're called?) but still a nice nod to the continued importance of formal methods derived in part from logic in CS. (Oh yeah, he also invented LaTeX.)
## Constructive Ordinals and the Consistency of PA
Submitted by Richard Zach on Mon, 03/17/2014 - 4:45pm
Today's the last of three lectures on Gentzen's second proof of the consistency of PA in my proof theory course.
a) Still looking for good resources on ordinal notations, esp. $<\epsilon_0$, especially around the question how one can "see" that they are well-ordered without mentioning that they are order-isomorphic to $\epsilon_0$. Takeuti has a discussion in his textbook, anything else?
b) Some fun links: Andrej Bauer's Hydra game applet: http://math.andrej.com/2008/02/02/the-hydra-game/ ; David Madore's ordinal visualizer: http://www.madore.org/~david/math/drawordinals.html
c) Looking for a good intro to Goodstein's theorem and incompleteness in PA: Will Sladek's paper linked from Andrés Caicedo's blog: https://andrescaicedo.wordpress.com/2007/07/27/goodstein-sequences/
## E. W. Beth Dissertation Prize: 2014 Call for Nominations
Submitted by Richard Zach on Mon, 03/10/2014 - 8:45am
Since 2002, FoLLI (the Association for Logic, Language, and Information, http://www.folli.info) has awarded the E.W. Beth Dissertation Prize to outstanding dissertations in the fields of Logic, Language, and Information. We invite submissions for the best dissertation which resulted in a Ph.D. degree awarded in 2013. The dissertations will be judged on technical depth and strength, originality, and impact made in at least two of three fields of Logic, Language, and Computation. Interdisciplinarity is an important feature of the theses competing for the E.W. Beth Dissertation Prize.
### Who qualifies?
Nominations of candidates are admitted who were awarded a Ph.D. degree in the areas of Logic, Language, or Information between January 1st, 2013 and December 31st, 2013.
Theses must be written in English; however, the Committee accepts submissions of English translations of theses originally written in other languages, and for which a PhD was awarded in the preceding two years (i.e. between January 1st, 2011 and December 31st, 2012). There is no restriction on the nationality of the candidate or on the university where the Ph.D. was granted.
### Prize
The prize consists of:
• a certificate
• a donation of 2500 euros provided by the E.W. Beth Foundation
• an invitation to submit the thesis (or a revised version of it) to the FoLLI Publications on Logic, Language and Information (Springer). For further information on this series see the FoLLI site.
### How to submit
Only electronic submissions are accepted. The following documents are required:
1. The thesis in pdf format (ps/doc/rtf not accepted).
2. A ten-page abstract of the dissertation in pdf format.
3. A letter of nomination from the thesis supervisor. Self-nominations are not admitted: each nomination must be sponsored by the thesis supervisor. The letter of nomination should concisely describe the scope and significance of the dissertation and state when the degree was officially awarded.
4. Two additional letters of support, including at least one letter from a referee not affiliated with the academic institution that awarded the Ph.D. degree.
All documents must be submitted electronically (preferably as a zip file) to Ian Pratt-Hartmann ([email protected]). Hard copy submissions are not allowed. In case of any problems with the email submission or a lack of notification within three working days, nominators should write to Ian Pratt-Hartmann.
### Important Dates
Deadline for Submissions: May 5th, 2014.
Notification of Decision: July 14th, 2014.
Committee: Julian Bradfield (Edinburgh), Wojciech Buszkowski (Poznan), Michael Kaminski (Haifa), Marco Kuhlmann (Linköping), Larry Moss (Bloomington), Ian Pratt-Hartmann (chair) (Manchester), Ruy de Queiroz (Recife), Giovanni Sambin (Padua), Rob van der Sandt (Nijmegen), Rineke Verbrugge (Groningen)
## Brian Leiter Should Apologize
Submitted by Richard Zach on Mon, 03/10/2014 - 12:18am
In a (since removed) long post on his widely read blog, Brian Leiter attacked my colleague Rachel McKinnon, calling her "singularly unhinged" and "crazy". I don't know what to say, except that I hope an apology for this singularly unprofessional outburst is forthcoming.
## Journal for the History of Analytical Philosophy (JHAP) Essay Prize
Submitted by Richard Zach on Tue, 03/04/2014 - 11:24am
JHAP is an international open access, peer reviewed publication that aims to promote research in and provide a forum for discussion of the history of analytic philosophy. 'History' and 'analytic' are understood broadly. JHAP takes the history of analytic philosophy to be part of analytic philosophy. Accordingly, it publishes historical research that interacts with the ongoing concerns of analytic philosophy and with the history of other twentieth century philosophical traditions. JHAP invites submission for its first Essay Prize Competition. The competition is open to PhD candidates and recent PhDs (no more than 3 years at the time of submission). Articles on any topic in the History of Analytical Philosophy are welcome. There are no constraints on length.
Authors are requested to submit their papers electronically according to the following guidelines: 1) Papers should be prepared for anonymous refereeing, 2) put into PDF file format, and 3) sent as an email attachment to the address given below -- where 4) the subject line of the submission email should include the key-phrase "JHAP ESSAY PRIZE submission", and 5) the body text of the email message should constitute a cover page for the submission by including i) return email address, ii) author's name, iii) affiliation, iv) paper title, and v) short abstract.
Contact: [email protected]
Submission Deadline: 1 September 2014
Adjudication: The winner of the competition will be decided by a committee composed of members of the editorial board.
Prize: The winning article will be published in a special issue of JHAP and the author will receive a cash prize.
## Philosophy of Mathematics Postdoc at Nancy or Paris
Submitted by Richard Zach on Sat, 03/01/2014 - 9:10pm
One year Post-doc Fellowship in the context of the ANR-DFG research program MATHEMATICS: OBJECTIVITY BY REPRESENTATION (MathObRe) at the Laboratoire d'Histoire des Sciences et de Philosophie—Archives Henri-Poincaré, Nancy (UMR 7117) or at the Institut d'Histoire et de Philosophie des Sciences, Paris (UMR 8590).
We invite applications for a postdoctoral fellowship for 12 months in the academic year 2014/15 (October 1st 2014 to September 30th 2015) in the context of the project mentioned above. The project aims to study the relation between mathematical objectivity and the role of representation in mathematics from a philosophical point of view, with particular attention to the historical development of mathematics and to mathematical practice. The directive lines of the project are available here: https://sites.google.com/site/mathobre/
The successful candidate is expected to contribute to the realization of this project and to reside in Nancy or Paris during the whole year of her/his grant. The decision where she/he should reside in Nancy or Paris throughout the year will be taken by ourselves, according to the research topic. The grant amount (1600-1800€/month after taxes) is set by French regulations. We encourage young scholars to apply who have received their doctoral degree in the last 5 years in the domain of philosophy of mathematics and the like, with a proven potential to conduct and publish research at a level of international excellence.
Applications should include:
• A (brief) letter of application including personal information, academic background, and research interests
• A proposal for a research project (3-4 pages) aiming to contribute to MathObRe
• CV including a list of publications, talks, conferences attended and teaching experience.
• One or two recommendation letters from a recognised scholar in the field.
This material is to be sent by e-mail to Gerhard Heinzmann and Marco Panza before April 30th at midnight (French time). Decisions will be made by May 31st, 2014.
## Carnegie Mellon Summer School in Logic and Formal Epistemology
Submitted by Richard Zach on Fri, 02/21/2014 - 11:57am
In 2014, the Department of Philosophy at Carnegie Mellon University will hold a three-week summer school in logic and formal epistemology for promising undergraduates in philosophy, mathematics, computer science, linguistics, economics, and other sciences. The goals are to introduce promising students to cross-disciplinary research early in their careers, and forge lasting links between the various disciplines.
The summer school will be held from Monday, June 2 to Friday, June 20, 2014 on the Carnegie Mellon campus. Tuition and accommodations are free. Further information and instructions for applying can be found at:
Topics by week:
The Topology of Inquiry, Monday, June 2 to Friday, June 6. Instructor: Kevin T. Kelly
Causal and Statistical Inference, Monday, June 9 to Friday, June 13. Instructor: David Danks
Philosophy as Discovery, Monday, June 16 to Friday, June 20. Instructor: Clark Glymour
Materials must be submitted to the Philosophy Department by March 14, 2014. Inquiries may be directed to Professor Teddy Seidenfeld ([email protected]).
## Rudolf Haller, 1929-2014
Submitted by Richard Zach on Tue, 02/18/2014 - 2:44pm
Sad news from Fritz Stadler, director of the Institute Vienna Circle:
Mit großer Betroffenheit und tiefer Trauer haben wir heute vom Ableben von Univ.Prof. Rudolf Haller erfahren. Er war in der österreichischen Philosophie und Wissenschaft ein Pionier und eine außergewöhnliche Erscheinung. Seine gewinnende Persönlichkeit mit Expertise, Menschlichkeit, Offenheit und Humor war einzigartig. Das Institut Wiener Kreis verliert einen langjährigen Förderer und Mitstreiter – seit seiner Gründung den langjährigen Vorsitzenden des wissenschaftlichen Beirats. Ich persönlich beklage den Verlust eines unersetzlichen Mentors, Kollegen und Freundes. Seine letzten Lebensjahre waren von einer schweren Krankheit überschattet. Unser Mitgefühl gilt seiner Witwe und seinem Sohn. Rudolf Haller wird uns sehr fehlen und immer in Erinnerung bleiben. Ein Nachruf folgt.
With great sadness we have learnt of the passing of Prof. Rudolf Haller. He was a pioneer of Austrian philosophy and science and a singular presence. His winning personality, a combination of expertise, humanity, openness and sense of humor, was without equal. The Institute Vienna Circle has lost a long-time supporter and colleague -- he chaired the scientific advisory board since its inception. I personally mourn the loss of an irreplaceable mentor, colleague, and friend. His last years were darkened by severe illness. Our condolences go out to his widow and his son. Rudolf Haller will be missed and always remembered. An obituary will follow.
Wien, 18. Februar 2014
Fritz Stadler
## Join the Association for Symbolic Logic -- Now 50% Off!
Submitted by Richard Zach on Tue, 02/04/2014 - 11:30am
If you're reading this blog, you should probably be a member of the Association for Symbolic Logic -- the venerable academic society for logic and its applications, the people who bring you the best journals in the field (The Journal, Bulletin, and Review of Symbolic Logic), the Perspectives and Lecture Notes in Logic book series, three logic conferences in North America and one in Europe every year, and travel stipends to these and other logic conferences for graduate students. If you're gainfully employed as a logician, please support the ASL by becoming a full member. New members may now join at a special introductory rate for two years, at 50% off the regular fee. Students, emeriti and unemployed get those 50% off always.
http://www.aslonline.org/membership-individual.html
## 2014 Society for Exact Philosophy
Submitted by Richard Zach on Tue, 01/21/2014 - 3:30pm
The 2014 meeting of the Society for Exact Philosophy will be held 22-24 June 2014 at the California Institute of Technology in Pasadena, CA.
## Postdoc in Proof Theory at TU Vienna
Submitted by Richard Zach on Tue, 01/21/2014 - 3:26pm
A position as post-doctoral researcher is available in the Group for Computational Logic at the Faculty of Mathematics of the Vienna University of Technology. This position is part of a research project on the proof theory of induction. The aim of this project is to further deepen our understanding of the structure of proofs by induction and to develop new algorithms for the automation of inductive theorem proving. Techniques of relevance include cut-elimination, witness extraction, and Herbrand's theorem.
## 2014 Kurt Gödel Research Prize Fellowships Program
Submitted by Richard Zach on Wed, 01/08/2014 - 10:24am
(Organized by the Kurt Gödel Society with support from the John Templeton Foundation)
The Kurt Gödel Society is proud to announce the commencement of the Kurt Gödel Research Prize Fellowships Program "The Logical Mind: Connecting Foundations and Technology."
## Alan Turing gets royal pardon
Submitted by Richard Zach on Mon, 12/23/2013 - 6:05pm
## 20 Year Anniversary: Proof Theory of Finite Valued Logics
Submitted by Richard Zach on Tue, 12/17/2013 - 4:38am
Twenty years ago this month I submitted my Diplomarbeit (MA thesis) on the proof theory of finite valued logics. Still kinda proud of it.
## NASSLI 2014 Student Session CfP
Submitted by Richard Zach on Wed, 12/11/2013 - 6:11am
The North American Summer School in Logic, Language and Information will be held June 23-27, 2014 in College Park, MD. A call for papers for the student session was just issued; the deadline is February 24.
## Summer School at MCMP for Women Formal Philosophy Students
Submitted by Richard Zach on Wed, 12/04/2013 - 9:13am
Wow, awesome. Lecturers include Rachael Briggs, Sonja Smets, and Florian Steinberger.
The Munich Center for Mathematical Philosophy (MCMP) is organizing the first Summer School on Mathematical Philosophy for Female Students, which will be held from July 27 to August 2, 2014 in Munich, Germany. The summer school is open to excellent female students who want to specialize in mathematical philosophy.
## Maria Reichenbach (1909-2013)
Submitted by Richard Zach on Mon, 12/02/2013 - 9:07pm
Alan Richardson writes on HOPOS-L:
## Help sought for a biography of Richard Montague
Submitted by Richard Zach on Fri, 11/29/2013 - 5:06pm
Ivano Caponigro at UCSD writes: I'm working on a biography of Richard Montague (1930-1971) that aims to reconstruct his intellectual and personal life, his contributions, and his legacy. Please contact me if you knew him personally (or just met him a few times) or have any material from him or about him (letters, manuscripts, pictures, audio recordings, etc.) or if you know anybody who knew him or may have material about it. Thanks! [email protected]
## Mancosu on Pasternak (!)
Submitted by Richard Zach on Fri, 11/15/2013 - 2:12pm
My Doktorvater Paolo Mancosu has a new book: Inside the Zhivago Storm, on the publication history of Pasternak's Doctor Zhivago. That's the kind of scholar Paolo is: write a 400-page literary thriller because his duties as department chair at Berkeley keep him from doing his "real" work as a logician and philosopher of mathematics. From the publisher:
Doctor Zhivago, the masterpiece that won Boris Pasternak the Nobel Prize in 1958, had its first worldwide edition in 1957 in Italian. The events surrounding its publication, whose protagonists were Boris Pasternak and the publisher Giangiacomo Feltrinelli, undoubtedly count as one of the most fascinating stories of the twentieth century. It is a story that saw the involvement of governments, political parties, secret services, and publishers. In Inside the Zhivago Storm. The Editorial Adventures of Pasternak's Masterpiece, Paolo Mancosu, Professor of Philosophy at the University of California at Berkeley, provides a riveting account of the story of the first publication of Doctor Zhivago and of the subsequent Russian editions in the West. Exploiting with scholarly and philological rigor the untapped resources of the Feltrinelli archives in Milan as well as several other private and public archives in Europe, Russia, and the USA, Mancosu reconstructs the relationship between Pasternak and Feltrinelli, the story of the Italian publication, and the pressure exercised on Feltrinelli by the Soviets and the Italian Communist Party to stop publication of the novel in Italy and in other countries. Situating the story in the historical context of the Cold War, Mancosu describes the hidden roles of the KGB and the CIA in the vicissitudes of the publication of the novel both in Italian and in the original Russian language. The full correspondence between Boris Pasternak and Giangiacomo Feltrinelli (spanning from 1956 to 1960) is also published here for the first time in the original and in English translation. Doctor Zhivago is a classic of world literature and the story of its publication, as it is recounted in this book, is the story of the courage and of the intellectual freedom of a great writer and of a great publisher.
## Post Doc in History of Geometry/Epistemology of Math at MPI Berlin
Submitted by Richard Zach on Tue, 11/12/2013 - 11:06am
A postdoc in history of geometry is being advertised at Vincenzo de Risi's group at the MPI for History of Science, Berlin! https://www.h-net.org/jobs/job_display.php?id=47973
## Philosophy in the SSHRC Insight Grant Competition
Submitted by Richard Zach on Mon, 11/11/2013 - 5:22pm
The Insight Grant Adjudication Committee (Committee 1C) for the 2013 Insight Grant competition of SSHRC, on which I served, prepared the following statement when the results of the competition were announced in April. We sent it to the CPA to distribute, but somehow it fell through the cracks. They did post an excerpt of an earlier letter which was sent to all Canadian philosophy departments in April on their forum a month ago though. I'm posting it here and now for the record. (La version française suit)
The Social Sciences and Humanities Research Council of Canada (SSHRC) funds research projects by Canadian philosophers through its Insight Grant program. This program replaced the Standard Research Grant program in 2011. In the 2012 competition, the results of which were just announced, 13 applications were funded.
This represents a success rate of 21%, down from last year's 18 applications funded (success rate: 26.5%). This is bad news. But Canadian philosophers should understand some hidden factors driving the trend.
Under SSHRC's old system, each disciplinary committee was allocated a separate pot of money, where the size of the pot was determined by a formula involving the number of applicants to that committee and the total budget requested in each application. After ranking applications on their merits, committees were at liberty to trim the budgets of successful applicants in order to fund more applications (but to a lesser degree), and Philosophy was quite aggressive about doing so (resulting in a higher success rate for our committee than for some others).
Under the new Insight Grant system, the application success rate is standardized to be the same across all committees. For the 2012 competition, committees in all disciplines (Economics, Linguistics, etc.) had the same 21% application success rate. Instead of tinkering with individual budgets, the appropriateness of an applicant's budget was factored into the "feasibility" score, so applicants whose budgets were sharply out of line with the norms (median request: $23k per year) tended to suffer numerically. It remained possible for the committee to make a rough budget cut on an excellent proposal (say, funding just 50% or 75% of what someone asked), but it was hard to do this while still allocating that proposal the near-perfect score on feasibility needed to make the top-of-the-list position that was necessary for funding. Some large projects were certainly funded, but they needed excellent justifications for their large budget requests.
It is important to understand that the drop in the national success rate was driven not primarily by increased stringency on SSHRC’s part, or a decrease in overall funding, or any bias against Philosophy, but by an increase in the number of applications across all committees: the Insight Grant went from 1,821 applications in 2011 to 2,220 applications in 2012, an increase of 399 applications (21.9%). Meanwhile, Philosophy marked an exception to this general trend: we went from 68 applications in 2011 to 62 applications in 2012, a decrease of 8.8%. If Philosophy had increased on a par with other disciplines, we would have had 83 applications, and with this year’s standardized success rate of 21% we would have been able to fund 17 of them, very close to last year's 18 applications funded.
One factor for the decrease in Philosophy applications may have been that last year’s success rate was lower for Philosophy than it has been in the past, and applicants may have been discouraged from even trying. This is exactly the wrong thing to do, especially in a climate in which disciplines other than Philosophy are responding to the fixed success rate by sharply increasing their number of applications.
In many departments, SSHRC grants are vital both to the support and to the training of graduate students. It is in the interest of Canadian Philosophy and our graduate programs that as many eligible faculty members as possible apply. It may be a pain, and discouraging not to be funded. But it can be useful to prepare a research plan, and it’s not much work to do some small revisions to research plans that were unsuccessful in the past. The very same proposal can pass from an insufficient ranking in one year to being funded in the next, for a number of reasons (budget appropriateness, different letters of assessment, different adjudication committee, different fields of applications). There were many excellent proposals this year that very narrowly missed being funded.
Le Conseil de recherches en sciences humaines du Canada (CRSH) subventionne les projets de recherche des philosophes canadiens par son programme des subventions Savoir. Ce programme a remplacé en 2011 le programme des subventions ordinaires de recherche. Au concours de 2012, dont les résultats viennent tout juste d’être annoncés, 13 subventions ont été approuvées. Cela représente un taux de succès de 21%, alors que l’année dernière, 18 projets avaient été financés pour un taux de succès de 26,5%.
Ce résultat est une mauvaise nouvelle. Les philosophes canadiens doivent comprendre que certains facteurs peu apparents ont joué.
Sous l'ancien régime, chaque comité disciplinaire disposait d'une somme donnée, déterminée par une formule qui tenait compte du nombre de demandes reçues par le comité, ainsi que du total des budgets demandés. Après avoir classé les demandes au mérite, les comités pouvaient à leur gré réduire la somme accordée aux candidats retenus pour subvention afin de financer plus de projets (de façon un peu moins généreuse). Le comité de philosophie usait assez largement de cette possibilité, ce qui assurait à ce comité un taux de succès plus élevé qu’à d’autres comités.
Dans le nouveau système des subventions Savoir, le taux de succès est normalisé de façon à être identique pour tous les comités. Donc, en 2012, toutes les disciplines ont eu le même taux de succès de 21%. Il n’est plus question d’ajuster les budgets de façon détaillée, mais seulement de vérifier s’ils sont globalement adéquats, en notant la « faisabilité » du projet : ainsi toute demande dont le budget s’écarte significativement de ce qui apparaît raisonnable (compte tenu d’un budget moyen demandé de quelque 23K\$) tend à être numériquement défavorisé dans l’évaluation. Quoiqu'il demeure possible de réduire (par exemple de 25% ou même 50%) un budget jugé excessif pour un excellent projet de recherche, le fait d'avoir à le faire fait perdre des points au chapitre de la faisabilité, alors qu’une note quasi maximale est requise pour figurer en tête de liste et être financé. Certains projets au budget élevé ont été certes approuvés, mais seulement moyennant d’excellents justifications.
Il est important de comprendre que la baisse constatée du taux de succès général n'a été due ni à un resserrement des critères de la part du CRSH, ni à une baisse du financement disponible, ni à un quelconque préjugé contre la philosophie. Elle résulte uniquement de l'augmentation du nombre total des demandes pour l’ensemble des comités, nombre qui est passé de 1821 en 2011 à 2200 en 2012, soit 399 demandes ou 21,9% de plus. La philosophie a fait exception à cette tendance générale, avec 62 demandes en 2012, 8,8% de moins qu’en 2011 où il y en avait eu 68. Si les philosophes avaient fait comme les chercheurs des autres disciplines, nous aurions eu 83 demandes, ce qui, compte tenu du taux de succès normalisé de 21% aurait vraisemblablement donné 17 subventions octroyées, soit presque le même nombre que les 18 de l'an passé.
Le nombre décroissant des demandes en philosophie pourrait s’expliquer du moins en partie par le fait que le taux de succès, l’année dernière, avait été plus faible qu’antérieurement, ce qui pourrait avoir dissuadé certains philosophes de tenter leur chance. Or il faudrait réagir de manière exactement contraire, surtout dans un contexte où les autres disciplines s’ajustent au taux de succès fixe en augmentant nettement le nombre de leurs demandes.
Les subventions du CRSH jouent un rôle essentiel dans le financement et la formation de nos étudiants gradués. Il est de l’intérêt de la philosophie au Canada et de nos programmes d’études supérieures que le plus grand nombre de professeurs et chercheurs éligibles se portent candidats. Il peut être pénible, voire décourageant de ne pas être financé. Mais il peut être profitable d’élaborer un programme de recherche et le modeste effort requis pour réviser un projet, précédemment refusé, peut valoir amplement la peine : il semble bien établi, en effet, qu’un programme similaire de recherche peut recevoir une note insuffisante une année et se voir financer l'année suivante, pour diverses raisons (ajustement du budget, lettres d’évaluation différentes, changement dans la composition du comité, autres champs d’application). Cette année, plusieurs excellentes demandes ont manqué de très peu d'être financées.
## SEP Entry on Gödel's Incompleteness Theorem
Submitted by Richard Zach on Mon, 11/11/2013 - 4:59pm
The Stanford Encyclopedia now has a separate entry on Gödel's incompleteness theorem (by Panu Raatikainen).
http://plato.stanford.edu/entries/goedel-incompleteness/
(Juliette Kennedy's entry on Gödel also covers incompleteness.)
## LaTeX for Philosophers
Submitted by Richard Zach on Sat, 11/09/2013 - 11:49am
This last Thursday I held a little workshop to tell our graduate students about LaTeX. Since LaTeX is fairly commonly used by philosophers, I thought they should at least know what it's all about. I made a presentation (the handout version contains additional info). I didn't have time to provide a list of documents/sites to check out or detailed instructions (and wouldn't really know how to do that, as e.g., I haven't installed TeX on a Windows machine in at least a decade, and never on a Mac). We did play around with a few packages and experiment with BibTeX as a group. Nice that WriteLaTeX lets you do that without even signing up for a free account!
PDFs of the presentation are attached to this post if you want to have a look, and the source code is on my GitHub. I unlicense'd it, so feel free to use it for your own workshops on LaTeX for Philosophers (or other non-techy acedemics). Suggestions for additions, requests for changes, typos, etc.: comment and/or file an issue on GitHub.
## Gillian Russell Interviewed on 3:AM
Submitted by Richard Zach on Fri, 09/27/2013 - 9:12am
## Awodey Explains Significance of Homotopy Type Theory to Philosophy of Mathematics
Submitted by Richard Zach on Thu, 07/25/2013 - 12:12pm
Steve Awodey (CMU) explains the relevance of the foundational program of homotopy type theory and the univalence axiom to the philosophy of mathematics in a new preprint, "Structuralism, Invariance, and Univalence."
## Gödel's Incompleteness Theorems Formally Verified
Submitted by Richard Zach on Fri, 07/12/2013 - 6:24am
Going through old emails, I found the following announcement by Larry Paulson, posted to the FOM list by Jeremy Avigad. Good stuff, including the link to Stanisław Świerczkowski's monograph in Dissertationes Mathematicae where he carries out the proof of the incompleteness theorems in HF, the theory of hereditarily finite sets.
https://la.mathworks.com/help/symbolic/ellipke.html
# ellipke
Complete elliptic integrals of the first and second kinds
## Syntax
```
[K,E] = ellipke(m)
```
## Description
`[K,E] = ellipke(m)` returns the complete elliptic integrals of the first and second kinds.
## Examples
### Compute Complete Elliptic Integrals of First and Second Kind
Compute the complete elliptic integrals of the first and second kinds for these numbers. Because these numbers are not symbolic objects, you get floating-point results.
```
[K0, E0] = ellipke(0)
[K05, E05] = ellipke(1/2)
```
```
K0 = 1.5708
E0 = 1.5708
K05 = 1.8541
E05 = 1.3506
```
Compute the complete elliptic integrals for the same numbers converted to symbolic objects. For most symbolic (exact) numbers, `ellipke` returns results using the `ellipticK` and `ellipticE` functions.
```
[K0, E0] = ellipke(sym(0))
[K05, E05] = ellipke(sym(1/2))
```
```
K0 = pi/2
E0 = pi/2
K05 = ellipticK(1/2)
E05 = ellipticE(1/2)
```
Use `vpa` to approximate `K05` and `E05` with floating-point numbers:
`vpa([K05, E05], 10)`
```ans = [ 1.854074677, 1.350643881]```
### Compute Integrals When Input is Not Between `0` and `1`
If the argument does not belong to the range from 0 to 1, then convert that argument to a symbolic object before using `ellipke`:
`[K, E] = ellipke(sym(pi/2))`
```
K = ellipticK(pi/2)
E = ellipticE(pi/2)
```
Alternatively, use `ellipticK` and `ellipticE` to compute the integrals of the first and the second kinds separately:
```
K = ellipticK(sym(pi/2))
E = ellipticE(sym(pi/2))
```
```
K = ellipticK(pi/2)
E = ellipticE(pi/2)
```
### Compute Integrals for Matrix Input
Call `ellipke` for this symbolic matrix. When the input argument is a matrix, `ellipke` computes the complete elliptic integrals of the first and second kinds for each element.
`[K, E] = ellipke(sym([-1 0; 1/2 1]))`
```
K =
[ ellipticK(-1),  pi/2]
[ ellipticK(1/2), Inf]
E =
[ ellipticE(-1),  pi/2]
[ ellipticE(1/2), 1]
```
## Input Arguments
`m` — Input, specified as a number, vector, matrix, or array, or a symbolic number, variable, array, function, or expression.
## Output Arguments
`K` — Complete elliptic integral of the first kind, returned as a symbolic expression.
`E` — Complete elliptic integral of the second kind, returned as a symbolic expression.
### Complete Elliptic Integral of the First Kind
The complete elliptic integral of the first kind is defined as follows:
$$K(m) = F\left(\frac{\pi}{2} \,\middle|\, m\right) = \int_0^{\pi/2} \frac{1}{\sqrt{1 - m \sin^2\theta}}\, d\theta$$
Note that some definitions use the elliptical modulus $k$ or the modular angle $\alpha$ instead of the parameter $m$. They are related as $m = k^2 = \sin^2\alpha$.
### Complete Elliptic Integral of the Second Kind
The complete elliptic integral of the second kind is defined as follows:
$$E(m) = E\left(\frac{\pi}{2} \,\middle|\, m\right) = \int_0^{\pi/2} \sqrt{1 - m \sin^2\theta}\, d\theta$$
Note that some definitions use the elliptical modulus $k$ or the modular angle $\alpha$ instead of the parameter $m$. They are related as $m = k^2 = \sin^2\alpha$.
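As a quick numerical cross-check of these definitions outside MATLAB (an illustration of mine, not part of the MathWorks documentation), mpmath's `ellipk` and `ellipe` use the same parameter-$m$ convention and can be compared against direct quadrature of the defining integrals:

```python
from mpmath import mp, ellipk, ellipe, quad, sin, sqrt, pi, mpf

mp.dps = 20
m = mpf(1) / 2

# Library values of K(m) and E(m)
print(ellipk(m), ellipe(m))                 # 1.854074677..., 1.350643881...

# Direct numerical integration of the defining integrals
K = quad(lambda t: 1 / sqrt(1 - m * sin(t)**2), [0, pi / 2])
E = quad(lambda t: sqrt(1 - m * sin(t)**2), [0, pi / 2])
print(K, E)                                 # matches the library values above
```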
## Tips
• Calling `ellipke` for numbers that are not symbolic objects invokes the MATLAB® `ellipke` function. This function accepts only `0 <= m <= 1`. To compute the complete elliptic integrals of the first and second kinds for values outside this range, use `sym` to convert the numbers to symbolic objects, and then call `ellipke` for those symbolic objects. Alternatively, use the `ellipticK` and `ellipticE` functions to compute the integrals separately.
• For most symbolic (exact) numbers, `ellipke` returns results using the `ellipticK` and `ellipticE` functions. You can approximate such results with floating-point numbers using `vpa`.
• If `m` is a vector or a matrix, then `[K,E] = ellipke(m)` returns the complete elliptic integrals of the first and second kinds, evaluated for each element of `m`.
## Alternatives
You can use `ellipticK` and `ellipticE` to compute elliptic integrals of the first and second kinds separately.
## References
[1] Milne-Thomson, L. M. “Elliptic Integrals.” Handbook of Mathematical Functions with Formulas, Graphs, and Mathematical Tables. (M. Abramowitz and I. A. Stegun, eds.). New York: Dover, 1972.
http://mathoverflow.net/revisions/41457/list
MacMahon in the paper Divisors of Numbers and their Continuations in the Theory of Partitions defines several generalized notions of the sum-of-divisors function; for example, if we write $a_{n,k}$ for the sum $$\sum s_1 \cdots s_k$$ where this sum is taken over all ways of writing $n = s_1 m_1 + \cdots + s_k m_k$ with $m_1 < \cdots < m_k$ (note the asymmetry in $s_k, m_k$), he then studies the generating functions $$A_k(q) = \sum_{n=1}^\infty a_{n,k}q^n$$ for fixed $k$, as well as a number of other variants. Note that for $k=1$, this is nothing but the generating function for the ordinary sum-of-divisors function.
These functions (Specifically, from his paper, the functions $A_k$ and $C_k$) have arisen in my research and I would like to know what literature there is on them. In particular, I would like to know if there are any well-known identities that hold between them. MacMahon himself lists the identity $$A_2(q) = \tfrac{1}{8}\sum_{n=1}^\infty\big(\sigma_3(n) - (2n-1)\sigma_1(n)\big)q^n$$ as well as similar ones for $A_3$ and $A_4$, but the identities that I am looking for are more in line with an attempt to write these as quasi-modular forms, if possible. For example, it turns out that the following is true: $$A_2(q) = \tfrac{1}{10}\Big(3A_1(q)^2 + A_1(q) - q\frac{d}{dq}A_1(q)\Big)$$ and I conjecture that you can always write $A_k(q)$ (and similarly $C_k(q)$) recursively in terms of previous such functions.
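MacMahon's identity for $A_2$ as quoted above is easy to sanity-check numerically before chasing references; here is a brute-force Python sketch (my own, using sympy's `divisor_sigma`, and only as a finite check of the first coefficients):

```python
from sympy import divisor_sigma

def a2(n):
    """a_{n,2}: sum of s1*s2 over representations n = s1*m1 + s2*m2 with m1 < m2 and s1, s2 >= 1."""
    total = 0
    for m1 in range(1, n + 1):
        for m2 in range(m1 + 1, n + 1):
            for s2 in range(1, n // m2 + 1):
                rem = n - s2 * m2                  # must equal s1 * m1 with s1 >= 1
                if rem >= m1 and rem % m1 == 0:
                    total += (rem // m1) * s2
    return total

# Check 8 * a_{n,2} = sigma_3(n) - (2n - 1) * sigma_1(n) for small n
for n in range(1, 31):
    assert 8 * a2(n) == divisor_sigma(n, 3) - (2 * n - 1) * divisor_sigma(n, 1)
print("identity verified for n <= 30")
```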