The period of a spring-mass system is directly proportional to the square root of the mass. Originally, the period of the system was 2.0 seconds. The original mass was $1.0\, kg$.
Which value shown is closest to the new period if the new mass is $24.0\, kg$?
A. 48 seconds
B. 24 seconds
C. 10 seconds
D. 22 seconds
E. 8 seconds
# Metric embeddings with outliers
Anastasios Sidiropoulos Dept. of Computer Science and Engineering and Dept. of Mathematics, The Ohio State University. Columbus, OH, USA. Supported by NSF grants CCF 1423230 and CAREER 1453472. Yusu Wang Dept. of Computer Science and Engineering, The Ohio State University. Columbus, OH, USA. The work is partially supported by NSF under grant CCF-1319406.
###### Abstract
We initiate the study of metric embeddings with outliers. Given some metric space $(X,\rho)$ we wish to find a small set of outlier points $K \subseteq X$ and either an isometric or a low-distortion embedding of $X \setminus K$ into some target metric space. This is a natural problem that captures scenarios where a small fraction of points in the input corresponds to noise.
For the case of isometric embeddings we derive polynomial-time approximation algorithms for minimizing the number of outliers when the target space is an ultrametric, a tree metric, or some constant-dimensional Euclidean space. The approximation factors are 3, 4, and 2, respectively. For the case of embedding into an ultrametric or tree metric, we further improve the running time to $O(n^2)$ for an $n$-point input metric space, which is optimal. We complement these upper bounds by showing that outlier embedding into ultrametrics, trees, and $d$-dimensional Euclidean space for any $d \geq 1$ are all NP-hard, as well as NP-hard to approximate within a factor better than 2 assuming the Unique Games Conjecture.
For the case of non-isometries we consider embeddings with small distortion. We present polynomial-time bi-criteria approximation algorithms. Specifically, given some $\varepsilon > 0$, let $k_\varepsilon$ denote the minimum number of outliers required to obtain an embedding with distortion $\varepsilon$. For the case of embedding into ultrametrics we obtain a polynomial-time algorithm which computes a set of outliers of size bounded in terms of $k_\varepsilon$ and an embedding of the remaining points into an ultrametric with distortion bounded in terms of $\varepsilon$. Finally, for embedding a metric of unit diameter into constant-dimensional Euclidean space we present a polynomial-time algorithm which computes a set of outliers of size bounded in terms of $k_\varepsilon$ and an embedding of the remaining points with distortion bounded in terms of $\varepsilon$.
## 1 Introduction
Metric embeddings provide a framework for addressing in a unified manner a variety of data-analytic tasks. Let $(X,\rho)$, $(Y,\rho')$ be metric spaces. At a high level, a metric embedding is a mapping $f: X \to Y$ that either is isometric or preserves the pairwise distances up to some small error called the distortion (various definitions of distortion have been extensively considered, including multiplicative, additive, average, and $\ell_\infty$ distortion, as well as expected distortion when the map is random [6]). The corresponding computational problem is to decide whether an isometry exists or, more generally, to find a mapping with minimum distortion. The space $Y$ might either be given or it might be constrained to be a member of a collection of spaces, such as trees, ultrametrics, and so on. The problems that can be geometrically abstracted using this language include phylogenetic reconstruction (e.g. via embeddings into trees [1, 2, 11] or ultrametrics [20, 4]), visualization (e.g. via embeddings into constant-dimensional Euclidean space [8, 21, 10, 32, 19, 9, 17, 22]), and many more (for a more detailed exposition we refer the reader to [26, 25]).
Despite extensive research on the above metric embedding paradigm, essentially nothing is known when the input space can contain outliers. This scenario is of interest for example in applications where outliers can arise from measurement errors. Another example is when real-world data does not perfectly fit a model due to mathematical simplifications of physical processes.
We propose a generalization of the above high-level metric embedding problem which seeks to address such scenarios: Given $(X,\rho)$ and $(Y,\rho')$ we wish to find some small $K \subseteq X$ and either an isometric or low-distortion mapping $f: X \setminus K \to Y$. We refer to the points in $K$ as outliers.
We remark that it is easy to construct examples of spaces $X$, $Y$ where any embedding $f: X \to Y$ has arbitrarily large distortion (for any “reasonable” notion of distortion), yet there exists some small $K \subseteq X$ and an isometry $f: X \setminus K \to Y$. Thus new ideas are needed to tackle the more general metric embedding problem in the presence of outliers.
### 1.1 Our contribution
#### Approximation algorithms.
We focus on embeddings into ultrametrics, trees, and constant-dimensional Euclidean space. We first consider the problem of computing a minimum size set of outliers such that the remaining point-set admits an isometry into some target space. We refer to this task as the minimum outlier embedding problem.
Outlier embeddings into ultrametrics. It is well-known that a metric space is an ultrametric if and only if any 3-point subset is an ultrametric. We may therefore obtain a 3-approximation as follows: For all $x, y, z \in X$, if the triple $(x, y, z)$ is not an ultrametric then remove $x$, $y$, and $z$ from $X$. It is rather easy to see that this gives a 3-approximation for the minimum outlier embedding problem into ultrametrics (as for every triple of points that we remove, at least one of them must be an outlier in any optimal solution), with running time $O(n^3)$. By exploiting further structural properties of ultrametrics, we obtain a 3-approximation with running time $O(n^2)$. We remark that this running time is optimal since the input has size $\Theta(n^2)$ and it is straightforward to show that any approximation algorithm has to read all the input (e.g. even to determine whether $X$ is an ultrametric, which corresponds to the case where the minimum number of outliers is zero).
Outlier embeddings into trees. Similarly to the case of ultrametrics, it is known that a space is a tree metric if and only if any 4-point subset is a tree metric. This similarly leads to a 4-approximation algorithm in $O(n^4)$ time. We further improve the running time to $O(n^2)$, which is also optimal. However, obtaining this improvement is significantly more complicated than the case of ultrametrics.
Outlier embeddings into $\mathbb{R}^d$. It is known that for any $d \geq 1$ any metric space admits an isometric embedding into $d$-dimensional Euclidean space if and only if any subset of size at most $d+3$ does [33]. This immediately implies a $(d+3)$-approximation algorithm for outlier embedding into $d$-dimensional Euclidean space with running time $n^{O(d)}$, for any fixed $d$. Using additional rigidity properties of Euclidean space we obtain a 2-approximation with the same running time.
#### Hardness of approximation.
We show that, assuming the Unique Games Conjecture [28], the problems of computing a minimum outlier embedding into ultrametrics, trees, and $d$-dimensional Euclidean space for any $d \geq 1$, are all NP-hard to approximate within a factor of $2 - \delta$, for any $\delta > 0$. These inapproximability results are obtained by combining reductions from Vertex Cover to minimum outlier embedding and the known hardness result for the former problem [29]. Note that for the case of embedding into $d$-dimensional Euclidean space for any $d \geq 1$ this inapproximability result matches our upper bound.
#### Bi-criteria approximation algorithms.
We also consider non-isometric embeddings. All our results concern $\ell_\infty$ distortion. For some outlier set $K \subseteq X$, the distortion of some map $f: X \setminus K \to Y$ is defined to be
sup_{x,y ∈ X∖K} |ρ(x,y) − ρ′(f(x), f(y))|.
In this context there are two different objectives that we wish to minimize: the number of outliers and the distortion. For a compact metric space $X$, denote by $\Delta(X)$ the diameter of $X$.
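This ℓ∞-style quantity is easy to evaluate on a finite input. The sketch below (function and argument names are ours, not the paper's) computes the distortion of a candidate map after discarding an outlier set:

```python
import itertools

def linf_distortion(points, rho, rho_prime, f, outliers):
    """l_infinity (additive) distortion of f restricted to points minus
    outliers: the maximum over remaining pairs of
    |rho(x, y) - rho'(f(x), f(y))|."""
    kept = [p for p in points if p not in outliers]
    return max(
        (abs(rho(x, y) - rho_prime(f(x), f(y)))
         for x, y in itertools.combinations(kept, 2)),
        default=0,
    )
```

For example, mapping the line metric on {0, 1, 3} to {0, 1, 2} distorts the pairs involving the point 3; declaring 3 an outlier brings the distortion back to zero.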
###### Definition 1.1 ((ε,k)-Outlier embedding).
We say that $X$ admits a $(\varepsilon, k)$-outlier embedding into $Y$ if there exists $K \subseteq X$ with $|K| \leq k$ and some $f: X \setminus K \to Y$ with distortion at most $\varepsilon \cdot \Delta(X)$. We refer to $K$ as the outlier set that witnesses a $(\varepsilon, k)$-outlier embedding of $X$.
Note that the distortion bound is multiplied by $\Delta(X)$ to make the parameter $\varepsilon$ scale-free. Since an isometry can be trivially achieved by removing all but one point, the above notion is well-defined for all $\varepsilon \geq 0$. We now state our main results concerning bi-criteria approximation:
Bi-criteria outlier embeddings into ultrametrics: We obtain a polynomial-time algorithm which given an -point metric space and some such that admits a -outlier embedding into an ultrametric, outputs a -outlier embedding into an ultrametric.
Bi-criteria outlier embeddings into : We present an algorithm which given an -point metric space and some such that admits a -outlier embedding in , outputs a -outlier embedding of into . The algorithm runs in time .
Bi-criteria outlier embeddings into trees: Finally we mention that one can easily derive a bi-criteria approximation for outlier embedding into trees from the work of Gromov on $\delta$-hyperbolicity [23] (see also [14]). Formally, there exists a polynomial-time algorithm which, given a metric space and some $\varepsilon$ such that the space admits a $(\varepsilon, k)$-outlier embedding into a tree, outputs an outlier embedding into a tree with correspondingly bounded parameters. Let us briefly outline the proof of this result: $\delta$-hyperbolicity is a four-point condition such that any $\delta$-hyperbolic space admits an embedding into a tree with additive distortion $O(\delta \log n)$, and such an embedding can be computed in polynomial time. Any metric that admits an embedding into a tree with additive distortion $\delta$ is $O(\delta)$-hyperbolic. Thus by removing all 4-tuples of points that violate the $\delta$-hyperbolicity condition and applying the embedding from [23] we immediately obtain an outlier embedding into a tree. We omit the details.
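The four-point hyperbolicity condition underlying this reduction can be checked directly on a finite metric. A minimal sketch (names are ours) computes the smallest δ for which a given finite metric is δ-hyperbolic:

```python
from itertools import combinations

def four_point_hyperbolicity(points, rho):
    """Smallest delta such that every 4-tuple satisfies the four-point
    condition: the two largest of the three opposite-pair sums differ by
    at most 2 * delta."""
    delta = 0.0
    for x, y, z, w in combinations(points, 4):
        s = sorted([rho(x, y) + rho(z, w),
                    rho(x, z) + rho(y, w),
                    rho(x, w) + rho(y, z)])
        delta = max(delta, (s[2] - s[1]) / 2.0)
    return delta
```

A tree metric (e.g. points on a line) yields δ = 0, while the shortest-path metric of a 4-cycle yields δ = 1.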
### 1.2 Previous work
Over recent years there has been a lot of work on approximation algorithms for minimum distortion embeddings into several host spaces and under various notions of distortion. Perhaps the most well-studied case is that of multiplicative distortion. For this case, approximation algorithms and inapproximability results have been obtained for embedding into the line [34, 10, 8, 22], constant-dimensional Euclidean space [9, 17, 19, 32, 10], trees [11, 15], ultrametrics [4], and other graph-induced metrics [15]. We also mention that similar questions have been considered for the case of bijective embeddings [35, 24, 27, 19, 30]. Analogous questions have also been investigated for average [18], additive [5, 2], and $\ell_\infty$ distortion [20, 1].
Similar in spirit to the outlier embeddings introduced in this work is the notion of embeddings with slack [12, 13, 31]. In this scenario we are given a parameter $\varepsilon > 0$ and we wish to find an embedding that preserves a $(1-\varepsilon)$-fraction of all pairwise distances up to a certain distortion. We remark however that these mappings cannot in general be used to obtain outlier embeddings. This is because typically in an embedding with slack the pairwise distances that are distorted arbitrarily involve a large fraction of all points.
### 1.3 Discussion
Our work naturally leads to several directions for further research. Let us briefly discuss the most prominent ones.
An obvious direction is closing the gap between the approximation factors and the inapproximability results for embedding into ultrametrics and trees. Similarly, it is important to understand whether the running time of the 2-approximation for embedding into Euclidean space can be improved. More generally, an important direction is understanding the approximability of outlier embeddings into other host spaces, such as planar graphs and other graph-induced metrics.
In the context of bi-criteria outlier embeddings, another direction is to investigate different notions of distortion. The case of $\ell_\infty$ distortion studied here is a natural starting point since it is very sensitive to outliers. It seems promising to try to adapt existing approximation algorithms for multiplicative and average distortion to the outlier case.
Finally, it is important to understand whether improved guarantees or matching hardness results for bi-criteria approximations are possible.
## 2 Definitions
A metric space is a pair $(X, \rho)$ where $X$ is a set and $\rho: X \times X \to \mathbb{R}_{\geq 0}$ is such that (i) for any $x, y \in X$, $\rho(x,y) = \rho(y,x)$, (ii) $\rho(x,y) = 0$ if and only if $x = y$, and (iii) for any $x, y, z \in X$, $\rho(x,z) \leq \rho(x,y) + \rho(y,z)$. Given two metric spaces $(X, \rho)$ and $(Y, \rho')$, an embedding of $X$ into $Y$ is simply a map $f: X \to Y$, and $f$ is an isometric embedding if for any $x, y \in X$, $\rho(x,y) = \rho'(f(x), f(y))$.
In this paper our input is an $n$-point metric $(X, \rho)$, meaning that $X$ is a discrete set of cardinality $n$. Given an $n$-point metric space $(X, \rho)$ and a value $c > 0$, we denote by $cX$ the metric space $(X, c\rho)$ where for any $x, y \in X$ we have $(c\rho)(x,y) = c \cdot \rho(x,y)$.
###### Definition 2.1 (Ultrametric space).
A metric space $(X, \rho)$ is an ultrametric space if and only if the following three-point condition holds for any $x, y, z \in X$:
ρ(x,y) ≤ max{ρ(x,z), ρ(z,y)}. (1)
###### Definition 2.2 (Tree metric).
A metric space $(X, \rho)$ is a tree metric if and only if the following four-point condition holds for any $x, y, z, w \in X$:
ρ(x,y) + ρ(z,w) ≤ max{ρ(x,z) + ρ(y,w), ρ(x,w) + ρ(y,z)}. (2)
An equivalent formulation of the four-point condition is that for all $x, y, z, w \in X$, the largest two quantities of the following three terms are equal:
ρ(x,y) + ρ(z,w),  ρ(x,z) + ρ(y,w),  ρ(x,w) + ρ(y,z). (3)
In particular, an $n$-point tree metric $(X, \rho)$ can be realized by a weighted tree $T$ such that there is a map $\phi$ of $X$ into the set of nodes of $T$, and for any $x, y \in X$, the shortest path distance in $T$ between $\phi(x)$ and $\phi(y)$ equals $\rho(x,y)$. In other words, $\phi$ is an isometric embedding of $X$ into the graphic tree metric induced by $T$. An ultrametric is in fact a special case of a tree metric, where there is an isometric embedding into a rooted tree $T$ such that the points of $X$ are leaves of $T$ and all leaves are at equal distance from the root of $T$.
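Both characterizations can be checked mechanically on a finite metric. A minimal Python sketch (helper names are ours, not from the paper) tests the three-point condition (1), in its "two largest distances are equal" form, and the four-point condition via formulation (3):

```python
from itertools import combinations

def is_ultrametric(points, rho):
    """Three-point condition (1): in every triple, the two largest
    pairwise distances are equal."""
    for x, y, z in combinations(points, 3):
        d = sorted([rho(x, y), rho(x, z), rho(y, z)])
        if d[1] != d[2]:
            return False
    return True

def is_tree_metric(points, rho):
    """Four-point condition, formulation (3): in every 4-tuple, the two
    largest of the three opposite-pair sums are equal."""
    for x, y, z, w in combinations(points, 4):
        s = sorted([rho(x, y) + rho(z, w),
                    rho(x, z) + rho(y, w),
                    rho(x, w) + rho(y, z)])
        if s[1] != s[2]:
            return False
    return True
```

For instance, any set of points on a line is a tree metric but generally not an ultrametric, while the shortest-path metric of a 4-cycle is neither.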
## 3 Approximation algorithms for outlier embeddings
In this section we present approximation algorithms for the minimum outlier embedding problem for three types of target metric spaces: ultrametrics, tree metrics, and Euclidean metric spaces. We show in Appendix B that finding optimal solutions for each of these problems is NP-hard (and, roughly speaking, hard to approximate within a factor of 2 as well). In the cases of ultrametrics and tree metrics, it is easy to approximate the minimum outlier embedding within a constant factor in $O(n^3)$ and $O(n^4)$ time, respectively. The key challenge (especially for embedding into tree metrics) is to improve the time complexity of the approximation algorithm to $O(n^2)$, which is optimal.
### 3.1 Approximating outlier embeddings into ultrametrics
###### Theorem 3.1.
Given an $n$-point metric space $(X, \rho)$, there exists a 3-approximation algorithm for minimum outlier embedding into ultrametrics, with running time $O(n^2)$.
###### Proof.
We can obtain a polynomial-time 3-approximation algorithm as follows: For each triple of points $x, y, z \in X$, considered in some arbitrary order, check whether it satisfies (1). If not, then remove $x$, $y$, and $z$ from $X$ and continue with the remaining triples. Let $K$ be the set of removed points. For every triple of points removed, at least one must be in any optimal solution; therefore the resulting solution is a 3-approximation. The running time of this method is $O(n^3)$. We next show how to improve the running time to $O(n^2)$.
Let $X = \{x_1, \ldots, x_n\}$. We inductively compute a sequence of subsets $X_0, X_1, \ldots, X_n$, where $X_0$ is set to be $\emptyset$. Given $X_{i-1}$ for some $i \geq 1$, assuming the invariant that $X_{i-1}$ is an ultrametric, we compute $X_i$ as follows. We check whether $X_{i-1} \cup \{x_i\}$ is an ultrametric. If it is, then we set $X_i = X_{i-1} \cup \{x_i\}$. Otherwise, there must exist $y, z \in X_{i-1}$ such that the triple $(x_i, y, z)$ violates (1). Since $X_{i-1}$ is an ultrametric, it follows that every such triple must contain $x_i$. Therefore it suffices to show how to quickly find $y, z \in X_{i-1}$ such that $(x_i, y, z)$ violates (1), if they exist. To this end, let $x^*_i$ be a nearest neighbor of $x_i$ in $X_{i-1}$, that is $x^*_i = \arg\min_{x \in X_{i-1}} \rho(x_i, x)$, where we break ties arbitrarily. Instead of checking $x_i$ against all possible pairs $y, z$ from $X_{i-1}$, we claim that (1) holds for all triples in $X_{i-1} \cup \{x_i\}$ if and only if for all $w \in X_{i-1}$ we have
(i) ρ(x_i, x*_i) ≤ max{ρ(x_i, w), ρ(x*_i, w)}  and  (ii) ρ(x_i, w) = ρ(x*_i, w). (4)
Indeed, assume that (i) and (ii) above hold for all $w \in X_{i-1}$, yet there exist some $y, z \in X_{i-1}$ such that the triple $(x_i, y, z)$ violates (1), say w.l.o.g. $\rho(y,z) > \max\{\rho(x_i,y), \rho(x_i,z)\}$. Then by (ii) above, we have $\rho(x_i,y) = \rho(x^*_i,y)$ and $\rho(x_i,z) = \rho(x^*_i,z)$, implying that $\rho(y,z) > \max\{\rho(x^*_i,y), \rho(x^*_i,z)\}$. Hence $(x^*_i, y, z)$ also violates (1), contradicting the fact that $X_{i-1}$ is an ultrametric. Hence no such $y, z$ can exist, and (4) is sufficient to check whether $X_{i-1} \cup \{x_i\}$ induces an ultrametric or not.
Finally, we can clearly check in $O(n)$ time whether both conditions in (4) hold for all $w \in X_{i-1}$. If either (i) or (ii) in (4) fails for some $w$, then the triple $(x_i, x^*_i, w)$ violates (1), which concludes the proof. ∎
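The greedy triple-removal procedure from the first half of the proof can be sketched as follows (an O(n³) illustration of the baseline, not the optimized O(n²) algorithm; names are ours):

```python
from itertools import combinations

def outlier_ultrametric_3approx(points, rho):
    """Greedy 3-approximation sketch: scan triples in arbitrary order and
    remove all three points of any triple violating the three-point
    condition (the two largest pairwise distances must be equal)."""
    removed = set()
    for x, y, z in combinations(points, 3):
        if removed & {x, y, z}:
            continue  # skip triples touching already-removed points
        d = sorted([rho(x, y), rho(x, z), rho(y, z)])
        if d[1] != d[2]:
            removed |= {x, y, z}
    return removed
```

Since every removed triple contains at least one point of any optimal outlier set, the returned set is at most three times larger than optimal.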
### 3.2 Approximating outlier embeddings into trees
We now present a 4-approximation algorithm for embedding a given $n$-point metric space into a tree metric with a minimum number of outliers. Using the four-point condition (2) in Definition 2.2, it is fairly simple to obtain a 4-approximation algorithm for the problem with running time $O(n^4)$ as follows: Check all 4-tuples of points $x, y, z, w \in X$. If the 4-tuple violates the four-point condition, then remove $x$, $y$, $z$, and $w$ from $X$. It is immediate that for any such 4-tuple, at least one of its points must be an outlier in any optimal solution. It follows that the result is a 4-approximation.
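The brute-force 4-tuple scan described above admits an analogous sketch (again an illustration of the O(n⁴) baseline, not the optimized algorithm; names are ours):

```python
from itertools import combinations

def outlier_tree_4approx(points, rho):
    """Greedy 4-approximation sketch: remove all four points of any
    4-tuple violating the four-point condition (the two largest
    opposite-pair sums must be equal)."""
    removed = set()
    for x, y, z, w in combinations(points, 4):
        if removed & {x, y, z, w}:
            continue  # skip tuples touching already-removed points
        s = sorted([rho(x, y) + rho(z, w),
                    rho(x, z) + rho(y, w),
                    rho(x, w) + rho(y, z)])
        if s[1] != s[2]:
            removed |= {x, y, z, w}
    return removed
```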
We next show how to implement this approach in time $O(n^2)$. The main technical difficulty is in finding a set of violating 4-tuples quickly. The high-level description of the algorithm is rather simple, and is as follows. Let $(X, \rho)$ be the input metric space where $X = \{x_1, \ldots, x_n\}$. Set $X_0 = \emptyset$. For any $i \geq 1$, we inductively define $X_i$. At the beginning of the $i$-th iteration, we maintain the invariant that $X_{i-1}$ is a tree metric. If $X_{i-1} \cup \{x_i\}$ is a tree metric, then we set $X_i = X_{i-1} \cup \{x_i\}$. Otherwise there must exist $y, z, w \in X_{i-1}$ such that the 4-tuple $(x_i, y, z, w)$ violates the four-point condition; we set $X_i = X_{i-1} \setminus \{y, z, w\}$.
To implement this idea in $O(n^2)$ time, it suffices to show that for any $i$, given $X_{i-1}$, we can compute $X_i$ in $O(n)$ time. The algorithm will inductively compute a collection of edge-weighted trees $T_0, T_1, \ldots$, with $T_0$ simply being the empty graph, and maintain the following invariants for each $i$:
(I-1)
All leaves of $T_i$ are in $X_i$, and $X_i$ embeds isometrically into $T_i$; that is, the shortest-path metric $d_{T_i}$ of $T_i$ agrees with $\rho$ on $X_i$: for any $x, y \in X_i$, $d_{T_i}(x, y) = \rho(x, y)$.
(I-2)
At the $i$-th iteration either $X_i = X_{i-1} \cup \{x_i\}$ or $X_i = X_{i-1} \setminus \{y, z, w\}$, where the 4-tuple $(x_i, y, z, w)$ violates the four-point condition under the metric $\rho$.
###### Definition 3.2 (Leaf augmentation).
Given , let be a tree with . Given and , the -leaf augmentation of at is the tree obtained as follows. Let be the path in between and (which may contain a single vertex if ). Set . Let be a vertex in with ; if no such vertex exists then we introduce a new such vertex by subdividing the appropriate edge in and update the edge lengths accordingly. In the resulting tree we add the vertex and the edge if they do not already exist, and we set the length of to be . We call the stem of (w.r.t. the leaf augmentation). When we say that is the -leaf augmentation of at , in which case is obtained from simply by adding as a leaf attached to , and is the stem of .
In what follows, we set to be the nearest neighbor of in , that is
x*_i = argmin_{x ∈ X_{i-1}} ρ(x, x_i),
where we break ties arbitrarily. Intuitively, if we can build a new tree from so that can be isometrically embedded in , then is a -leaf augmentation of at some pair . Our approach will first compute an auxiliary structure, called -orientation on , to help us identify a potential leaf augmentation. We next check for the validity of this leaf augmentation. The key is to produce this candidate leaf augmentation such that if it is not valid, then we will be able to find a 4-tuple violating the four-point condition from it quickly.
###### Definition 3.3 ((a,u,v)-orientation).
Let and let be a tree with . Let and . The -orientation of is a partially oriented tree obtained as follows: Let be the -leaf augmentation of at , and let be the stem of . We orient every edge in towards , where is the unique path in between and . All other edges in remain unoriented.
If then there exists a unique edge in (which is subdivided in ); this edge remains undirected in . We call this edge the sink edge w.r.t. . If there is no sink edge, then there is a unique vertex in with no outgoing edges in , which we call the sink vertex w.r.t. . Note that the sink is the simplex of smallest dimension that contains the stem of w.r.t. the leaf augmentation at .
See the figure on the right for an example, where is the stem of in the leaf augmentation at . The thick path is oriented, other than the sink edge (the one that contains the stem ).
###### Definition 3.4 (xi-orientation).
An -orientation of is any partial orientation of obtained via the following procedure: Consider any ordering of , say . Start with , i.e. all edges in are initialized as undirected and we will iteratively modify their orientation. Process vertices in this order. For each , denote by the path in between and . Traverse starting from until we reach either or an edge which is already visited. For each unoriented edge we visit, we set its orientation to be the one in the -orientation of . An edge that is visited in the above process is called masked.
Since the above procedure is performed for all leaves of , an -orientation will mask all edges. However, a masked edge may not be oriented, in which case this edge must be the sink edge w.r.t. for some .
###### Definition 3.5 (Sinks).
Given an -orientation of tree , a sink is either an un-oriented edge, or a vertex such that all incident edges have an orientation toward . The former is also called a sink edge w.r.t. and the latter a sink vertex w.r.t. .
It can be shown that each sink edge/vertex must be a sink edge/vertex w.r.t. for some , and we call a generating vertex for this sink.
An -orientation may have multiple sinks. We further augment the -orientation to record a generating vertex for every sink (there may be multiple choices of for a single sink, and we can take an arbitrary one). We also remark that a sink w.r.t. some may not ultimately be a sink for the global -orientation: see the right figure for an example, where is a sink vertex w.r.t. , but not a sink vertex for the global -orientation.
The proofs of the following two results can be found in Appendix A.
###### Lemma 3.6.
An -orientation of (together with a generating vertex for each sink) can be computed in $O(n)$ time.
###### Lemma 3.7.
Any -orientation of has at least one sink.
###### Lemma 3.8.
For any $i$, given $X_{i-1}$, we can compute $X_i$ and $T_i$ satisfying invariants (I-1) and (I-2) in $O(n)$ time.
###### Proof.
It suffices to show that in $O(n)$ time we can either find a 4-tuple of points in that violates the four-point condition, or we can compute a tree having a shortest-path metric that agrees with on . By Lemma 3.6, we can compute an -orientation of in $O(n)$ time. Consider any sink of (whose existence is guaranteed by Lemma 3.7), and let be its associated generating vertex; must be in . Let be the -leaf augmentation of at , and let denote the shortest path metric on the tree .
Since is the -leaf augmentation of , we have for all , (the last equality is because embeds isometrically into ). Thus may only disagree with on pairs of points , for some . We check in $O(n)$ time if, for all , we have , via a traversal of starting from the stem of in . If the above holds, then obviously embeds isometrically into . We then set and output . Otherwise, let be such that . We now show that we can find a 4-tuple including that violates the four-point condition in constant time.
Let be the stem of in . Consider as rooted at and let be the lowest common ancestor of and . Note that must be a vertex from too. Let denote the unique path in between any two and . The vertex must be in the path .
Case 1: . In this case, is either in the interior of path or of path . Assume w.l.o.g. that is in the interior of ; the handling of the other case is completely symmetric. See Figure 1 (a) for an illustration. Since is a tree metric, we know that the 4-tuple should satisfy the four-point condition under the metric . Using the alternative formulation of the four-point condition in Definition 2.2, we have that the largest two quantities of the following three terms should be equal:
d_L(x_i, x*_i) + d_L(u, w),  d_L(x_i, u) + d_L(x*_i, w),  d_L(x_i, w) + d_L(x*_i, u). (5)
For this specific configuration of , we further have:
d_L(x_i, x*_i) + d_L(u, w) < d_L(x_i, u) + d_L(x*_i, w) = d_L(x_i, w) + d_L(x*_i, u). (6)
On the other hand, by construction, we know that agrees with on . Furthermore, since is the -leaf augmentation of at , we have that and . Hence (6) can be rewritten as
ρ(x_i, x*_i) + ρ(u, w) < ρ(x_i, u) + ρ(x*_i, w) = d_L(x_i, w) + ρ(x*_i, u). (7)
If ρ(x_i, w) ≠ d_L(x_i, w), then the largest two quantities of
ρ(x_i, x*_i) + ρ(u, w),  ρ(x_i, u) + ρ(x*_i, w),  ρ(x_i, w) + ρ(x*_i, u)
can no longer be equal, since ρ(x_i, w) ≠ d_L(x_i, w) implies ρ(x_i, w) + ρ(x*_i, u) ≠ ρ(x_i, u) + ρ(x*_i, w), while ρ(x_i, x*_i) + ρ(u, w) is strictly smaller than the latter by (7). Hence the 4-tuple (x_i, x*_i, u, w) violates the four-point condition under the metric ρ (by using (3)).
Case 2: , in which case must be a sink vertex: see Figure 1 (b) for an illustration. For this configuration of , it is necessary that
ρ(x_i, x*_i) + ρ(u, w) = ρ(x_i, u) + ρ(x*_i, w) = d_L(x_i, w) + ρ(x*_i, u). (8)
Hence if ρ(x_i, w) > d_L(x_i, w), then the 4-tuple (x_i, x*_i, u, w) violates the four-point condition under the metric ρ because
ρ(x_i, x*_i) + ρ(u, w) = ρ(x_i, u) + ρ(x*_i, w) < ρ(x_i, w) + ρ(x*_i, u).
What remains is to find a violating 4-tuple for the case when ρ(x_i, w) < d_L(x_i, w).
Now imagine performing the -leaf augmentation of at . We first argue that the stem of w.r.t. necessarily lies in in . Let . In the augmented tree , . Combining (8) and we have that . On the other hand, following Definition 3.2, the position of is such that , while the position of was that . It then follows that must lie in the interior of path .
Since the stem of w.r.t. is in , it means that before we process in the construction of the -orientation , there must exist some other leaf such that the process of assigns the orientation of the edge to be towards ; See Figure 1 (c). This is because if no such exists, then while processing , we would have oriented the edge towards stem , thus towards , as the stem is in . The point can be identified in constant time if during the construction of , we also remember, for each edge, the vertex the processing of which leads to orienting this edge. Such information can be easily computed in time during the construction of .
Now consider . If , then one can show that is necessarily the stem for w.r.t. as well (by simply computing the position of the stem using Definition 3.2). In this case, considering the 4-tuple , we are back to Case 1 (but for this new 4-tuple), which in turn means that this 4-tuple violates the four-point condition. Hence we are done.
If , then since we orient the edge towards during the process of leaf , the stem of the leaf augmentation at is in the path . By an argument similar to the proof that is in the interior of above, we can show that . Now consider the 4-tuple : this leads us to an analogous case when for the 4-tuple . Hence by a similar argument as at the beginning of Case 2, we can show that violates the four-point condition under metric .
Putting everything together, in either case, we can identify a 4-tuple , which could be , , or as shown above, that violates the four-point condition under metric . We simply remove these four points, adjust the resulting tree to obtain and set . The overall algorithm takes $O(n)$ time as claimed. This proves the lemma. ∎
###### Theorem 3.9.
There exists a 4-approximation algorithm for minimum outlier embedding into trees, with running time $O(n^2)$.
###### Proof.
By Lemma 3.8 and induction on $i$, it follows immediately that we can compute $X_n$ in $O(n^2)$ time. By invariant (I-1), the output is a tree metric as it can be isometrically embedded into the tree $T_n$. Furthermore, by invariant (I-2), each 4-tuple of points we removed forms a violation of the four-point condition, and thus must contain at least one point from any optimal outlier set. As such, the total number of points we removed can be at most four times the size of the optimal solution. Hence our algorithm is a 4-approximation as claimed. ∎
### 3.3 Approximating outlier embeddings into $\mathbb{R}^d$
In this section, we present a polynomial-time 2-approximation algorithm for the minimum outlier embedding problem into the Euclidean space $\mathbb{R}^d$, which matches our hardness result in Appendix B. Given two points $p, q \in \mathbb{R}^d$, let $\|p - q\|$ denote the Euclidean distance between $p$ and $q$.
###### Definition 3.10 (d-embedding).
Given a discrete metric space $(X, \rho)$, a $d$-embedding of $X$ is simply an isometric embedding of $X$ into $\mathbb{R}^d$; that is, a map $f: X \to \mathbb{R}^d$ such that for any $x, y \in X$, $\rho(x, y) = \|f(x) - f(y)\|$. We say that $X$ is strongly $d$-embeddable if it has a $d$-embedding, but cannot be isometrically embedded in $\mathbb{R}^{d-1}$. In this case, $d$ is called the embedding dimension of $X$.
The following is a classic result in distance geometry of Euclidean spaces, see e.g [7, 37].
###### Theorem 3.11.
The metric space $(X, \rho)$ is strongly $d$-embeddable in $\mathbb{R}^d$ if and only if there exist $d+1$ points, say $x_0, \ldots, x_d \in X$, such that:
(i) $\{x_0, \ldots, x_d\}$ is strongly $d$-embeddable; and
(ii) for any $x, y \in X$, $\{x_0, \ldots, x_d, x, y\}$ is $d$-embeddable.
Furthermore, given an -point metric space , it is known that one can decide whether is embeddable in some Euclidean space by checking whether a certain matrix derived from the distance matrix is positive semi-definite, and the rank of this matrix gives the embedding dimension of ; see e.g. [36].
Following Theorem 3.11, one can easily come up with a $(d+3)$-approximation algorithm for minimum outlier embedding into $\mathbb{R}^d$, by simply checking whether each $(d+3)$-tuple of points is $d$-embeddable, and if not, removing all these points. Our main result below is a 2-approximation algorithm within the same running time. In particular, Algorithm 1 satisfies the requirements of Theorem 3.12, and the proof is in Appendix A.
###### Theorem 3.12.
Given an $n$-point metric space $(X, \rho)$, for any $d \geq 1$, there exists a 2-approximation algorithm for minimum outlier embedding into $\mathbb{R}^d$, with running time polynomial in $n$ for any fixed $d$.
#### Hardness results.
In Appendix B, we show that the minimum outlier embedding problems into ultrametrics, trees, and Euclidean space are all NP-hard, by reducing the Vertex Cover problem to each of them. In fact, assuming the Unique Games Conjecture, it is NP-hard to approximate each of them within a factor of $2 - \delta$, for any positive $\delta$. For the case of minimum outlier embedding into Euclidean space, we note that our 2-approximation algorithm above matches the hardness result.
## 4 Bi-criteria approximation algorithms
### 4.1 Bi-criteria approximation for embedding into ultrametrics
Let $T$ be a tree with non-negative edge weights. The ultrametric induced by $T$ is the ultrametric in which the distance between any $x$ and $y$ is equal to the maximum weight of the edges in the unique $x$-$y$ path in $T$ (it is easy to verify that the metric constructed as such is indeed an ultrametric [20]). Given a metric space $(X, \rho)$, we can view it as a weighted complete graph and talk about its minimum spanning tree (MST). The following result is from [20].
###### Lemma 4.1 (Farach, Kannan and Warnow [20]).
Let be a metric space and let be an ultrametric minimizing . Let be the ultrametric induced by an MST of . Then there exists , such that .
In particular, let . Then . This further implies that .
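The ultrametric induced by an MST can be computed Kruskal-style, since the maximum edge weight on the MST path between x and y equals the weight at which x and y first join the same component when edges are added in increasing order of weight. A sketch (names are ours):

```python
from itertools import combinations

def mst_ultrametric(points, rho):
    """Ultrametric induced by an MST of (points, rho): u(x, y) is the
    maximum edge weight on the x-y path in an MST, i.e. the weight at
    which x and y first end up in the same union-find component."""
    parent = {p: p for p in points}

    def find(a):
        while parent[a] != a:
            parent[a] = parent[parent[a]]  # path halving
            a = parent[a]
        return a

    u = {}
    for w, x, y in sorted((rho(x, y), x, y)
                          for x, y in combinations(points, 2)):
        rx, ry = find(x), find(y)
        if rx != ry:
            # every pair straddling the two merged components gets value w
            side_x = [p for p in points if find(p) == rx]
            side_y = [p for p in points if find(p) == ry]
            for a in side_x:
                for b in side_y:
                    u[frozenset((a, b))] = w
            parent[rx] = ry
    return lambda a, b: 0.0 if a == b else u[frozenset((a, b))]
```

For the line metric on {0, 1, 3}, the MST has edges of weight 1 and 2, so u(0,1) = 1 while u(0,3) = u(1,3) = 2, which is indeed an ultrametric.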
###### Theorem 4.2.
There exists a polynomial-time algorithm which given an -point metric space , , and , such that admits a -outlier embedding into an ultrametric, outputs a -outlier embedding into an ultrametric.
###### Proof.
For simplicity, assume that the diameter of $X$ is 1. The algorithm is as follows. We first enumerate all triples . For any such triple, if , then we remove , , and from . Let be the resulting point set. We output the ultrametric induced by an MST of the resulting metric space. This completes the description of the algorithm.
It suffices to prove that the output is indeed an -outlier embedding. Let , with , be such that admits a -outlier embedding into an ultrametric. Let be such that . It follows by Lemma 4.1 that does not admit a -outlier embedding into an ultrametric. Thus, . It follows that . In other words, the algorithm removes at most points.
It remains to bound the distortion between and . Let be the MST of such that the algorithm outputs the ultrametric induced by . We will prove by induction on , that for all , if the - path in contains at most edges, then
For the base case we have that . Since is the minimum spanning tree of , it follows that , proving the base case. For the inductive step, let such that the - contains at most edges, for some . Let be such that is in the - path in , and moreover the - and - paths in have at most edges each. Since , it follows that the triple was not removed by the algorithm, and thus
ρ(x,y) ≤ max{ρ(x,w)
candidate_points - Maple Help
Slode[candidate_points] - determine points for power series solutions
Calling Sequence
candidate_points(ode, var, 'points_type'=opt)
candidate_points(LODEstr, 'points_type'=opt)
Parameters
ode - linear ODE with polynomial coefficients
var - dependent variable, for example y(x)
opt - (optional) type of points; one of dAlembertian, hypergeom, rational, polynomial, or all (the default)
LODEstr - LODEstruct data structure
Description
• The candidate_points command determines candidate points for which power series solutions with d'Alembertian, hypergeometric, rational, or polynomial coefficients of the given linear ordinary differential equation exist.
• If ode is an expression, then it is equated to zero.
• The command returns an error message if the differential equation ode does not satisfy the following conditions.
– ode must be linear in var
– ode must have polynomial coefficients in $x$ over the rational number field which can be extended by one or more parameters.
– ode must either be homogeneous or have a right hand side that is rational in $x$
• If opt=all, the output is a list of three elements:
– a set of hypergeometric points, which may include the symbol 'any_ordinary_point'
– a set of rational points;
– a set of polynomial points.
Otherwise, the output is the set of the required points.
• Note that the computation of candidate points for power series solutions with d'Alembertian coefficients is currently considerably more expensive computationally than for the other three types of coefficients.
Examples
> $\mathrm{with}\left(\mathrm{Slode}\right):$
> $\mathrm{ode}≔\left(3{x}^{2}-6x+3\right)\mathrm{diff}\left(\mathrm{diff}\left(y\left(x\right),x\right),x\right)+\left(12x-12\right)\mathrm{diff}\left(y\left(x\right),x\right)+6y\left(x\right)$
${\mathrm{ode}}{≔}\left({3}{}{{x}}^{{2}}{-}{6}{}{x}{+}{3}\right){}\left(\frac{{{ⅆ}}^{{2}}}{{ⅆ}{{x}}^{{2}}}\phantom{\rule[-0.0ex]{0.4em}{0.0ex}}{y}{}\left({x}\right)\right){+}\left({12}{}{x}{-}{12}\right){}\left(\frac{{ⅆ}}{{ⅆ}{x}}\phantom{\rule[-0.0ex]{0.4em}{0.0ex}}{y}{}\left({x}\right)\right){+}{6}{}{y}{}\left({x}\right)$ (1)
> $\mathrm{candidate_points}\left(\mathrm{ode},y\left(x\right),'\mathrm{type}'='\mathrm{polynomial}'\right)$
$\left\{{0}\right\}$ (2)
> $\mathrm{candidate_points}\left(\mathrm{ode},y\left(x\right),'\mathrm{type}'='\mathrm{rational}'\right)$
$\left\{{1}\right\}$ (3)
> $\mathrm{candidate_points}\left(\mathrm{ode},y\left(x\right),'\mathrm{type}'='\mathrm{hypergeometric}'\right)$
$\left\{{1}{,}{\mathrm{any_ordinary_point}}\right\}$ (4)
> $\mathrm{candidate_points}\left(\mathrm{ode},y\left(x\right),'\mathrm{type}'='\mathrm{all}'\right)$
$\left[\left\{{1}{,}{\mathrm{any_ordinary_point}}\right\}{,}\left\{{1}\right\}{,}\left\{{0}\right\}\right]$ (5)
> $\mathrm{candidate_points}\left(\mathrm{ode},y\left(x\right),'\mathrm{type}'='\mathrm{dAlembertian}'\right)$
$\left\{{1}{,}{\mathrm{any_ordinary_point}}\right\}$ (6)
> $\mathrm{ode1}≔60y\left(x\right)+2x\left(x-30\right)\mathrm{diff}\left(y\left(x\right),x\right)-{x}^{2}\left(2x-27\right)\mathrm{diff}\left(y\left(x\right),x,x\right)+{x}^{3}\left(4x-27\right)\mathrm{diff}\left(y\left(x\right),x,x,x\right)=-\frac{2{x}^{2}\left(-5-330x+60{x}^{4}-1137{x}^{2}+32{x}^{3}\right)}{{\left(x-1\right)}^{6}}$
${\mathrm{ode1}}{≔}{60}{}{y}{}\left({x}\right){+}{2}{}{x}{}\left({x}{-}{30}\right){}\left(\frac{{ⅆ}}{{ⅆ}{x}}\phantom{\rule[-0.0ex]{0.4em}{0.0ex}}{y}{}\left({x}\right)\right){-}{{x}}^{{2}}{}\left({2}{}{x}{-}{27}\right){}\left(\frac{{{ⅆ}}^{{2}}}{{ⅆ}{{x}}^{{2}}}\phantom{\rule[-0.0ex]{0.4em}{0.0ex}}{y}{}\left({x}\right)\right){+}{{x}}^{{3}}{}\left({4}{}{x}{-}{27}\right){}\left(\frac{{{ⅆ}}^{{3}}}{{ⅆ}{{x}}^{{3}}}\phantom{\rule[-0.0ex]{0.4em}{0.0ex}}{y}{}\left({x}\right)\right){=}{-}\frac{{2}{}{{x}}^{{2}}{}\left({60}{}{{x}}^{{4}}{+}{32}{}{{x}}^{{3}}{-}{1137}{}{{x}}^{{2}}{-}{330}{}{x}{-}{5}\right)}{{\left({x}{-}{1}\right)}^{{6}}}$ (7)
Inhomogeneous equations are handled:
> $\mathrm{candidate_points}\left(\mathrm{ode1},y\left(x\right)\right)$
$\left[\left\{{0}{,}{1}{,}\frac{{27}}{{4}}{,}{\mathrm{any_ordinary_point}}{,}{\mathrm{RootOf}}{}\left({60}{}{{\mathrm{_Z}}}^{{4}}{+}{32}{}{{\mathrm{_Z}}}^{{3}}{-}{1137}{}{{\mathrm{_Z}}}^{{2}}{-}{330}{}{\mathrm{_Z}}{-}{5}\right)\right\}{,}\left\{{-1}{,}{0}{,}{1}{,}\frac{{23}}{{4}}{,}\frac{{27}}{{4}}{,}{\mathrm{RootOf}}{}\left({49}{}{{\mathrm{_Z}}}^{{4}}{-}{287}{}{{\mathrm{_Z}}}^{{3}}{-}{1418}{}{{\mathrm{_Z}}}^{{2}}{-}{714}{}{\mathrm{_Z}}{-}{45}\right){-}{1}{,}{\mathrm{RootOf}}{}\left({49}{}{{\mathrm{_Z}}}^{{4}}{-}{287}{}{{\mathrm{_Z}}}^{{3}}{-}{1418}{}{{\mathrm{_Z}}}^{{2}}{-}{714}{}{\mathrm{_Z}}{-}{45}\right){,}{\mathrm{RootOf}}{}\left({60}{}{{\mathrm{_Z}}}^{{4}}{+}{32}{}{{\mathrm{_Z}}}^{{3}}{-}{1137}{}{{\mathrm{_Z}}}^{{2}}{-}{330}{}{\mathrm{_Z}}{-}{5}\right)\right\}{,}\left\{{-1}{,}{0}{,}\frac{{23}}{{4}}{,}{\mathrm{RootOf}}{}\left({60}{}{{\mathrm{_Z}}}^{{4}}{+}{32}{}{{\mathrm{_Z}}}^{{3}}{-}{1137}{}{{\mathrm{_Z}}}^{{2}}{-}{330}{}{\mathrm{_Z}}{-}{5}\right){-}{1}\right\}\right]$ (8)
An equation which has d'Alembertian series solutions at any ordinary point but doesn't have hypergeometric ones:
> $\mathrm{ode2}≔\left(x-1\right)\mathrm{diff}\left(y\left(x\right),x\right)-\left(x-2\right)y\left(x\right)$
${\mathrm{ode2}}{≔}\left({x}{-}{1}\right){}\left(\frac{{ⅆ}}{{ⅆ}{x}}\phantom{\rule[-0.0ex]{0.4em}{0.0ex}}{y}{}\left({x}\right)\right){-}\left({x}{-}{2}\right){}{y}{}\left({x}\right)$ (9)
> $\mathrm{candidate_points}\left(\mathrm{ode2},y\left(x\right),'\mathrm{type}'='\mathrm{hypergeometric}'\right)$
$\left\{{1}\right\}$ (10)
> $\mathrm{candidate_points}\left(\mathrm{ode2},y\left(x\right),'\mathrm{type}'='\mathrm{dAlembertian}'\right)$
$\left\{{1}{,}{\mathrm{any_ordinary_point}}\right\}$ (11) |
# Shorter way to write list-as-dict-value in Python?
I have a data structure that looks like this:
a = {
'red': ['yellow', 'green', 'purple'],
'orange': ['fuschia']
}
This is the code I write to add new elements:
if a.has_key(color):
a[color].append(path)
else:
a[color] = [path]
Is there a shorter way to write this?
## locked by Jamal♦Jul 27 '15 at 22:01
You can use collections.defaultdict:
from collections import defaultdict
mydict = defaultdict(list)
Now you just need:
mydict[color].append(path)
defaultdict is a subclass of dict that calls a factory (here list; it could also be int, set, etc.) to supply a default value for any missing key.
btw, the use of has_key is discouraged, and in fact has_key has been removed in Python 3.
When needed, use this by far more pythonic idiom instead:
if color in a:
........
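For completeness, here is a small runnable comparison of the three idioms discussed in this thread (defaultdict, dict.setdefault as a middle ground, and the try/except approach from the comments); the sample data is illustrative:

```python
from collections import defaultdict

pairs = [('red', 'yellow'), ('red', 'green'), ('orange', 'fuschia')]

# 1. defaultdict: missing keys are created by calling list()
d1 = defaultdict(list)
for color, path in pairs:
    d1[color].append(path)

# 2. dict.setdefault: same effect on a plain dict
d2 = {}
for color, path in pairs:
    d2.setdefault(color, []).append(path)

# 3. try/except: avoids the lookup cost when the key is usually present
d3 = {}
for color, path in pairs:
    try:
        d3[color].append(path)
    except KeyError:
        d3[color] = [path]

# All three build the same mapping
assert dict(d1) == d2 == d3 == {'red': ['yellow', 'green'],
                                'orange': ['fuschia']}
```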
• even better is try: a[color].append(path) except KeyError: a[color] = [path] ... and this comment formatting is broken :) – blaze Dec 21 '11 at 8:58
• @blaze try/except is better? I'd say not. – Paul Hankin Dec 24 '11 at 21:24
• if try is usually successful and exception isn't raised - it's effective. one less "if" to check. – blaze Dec 26 '11 at 13:11
• lots of stuff happening around checking whether operation goes right and whether exception is thrown. besides the semantics of "exception" should be that something, well, exceptional is taking place. I'd avoid using exceptions for regular code flow if a convenient alternative exists – Nicolas78 Dec 26 '11 at 15:39 |
# [bug / missing feature] Note creation does not display formatting bar info for blank lines
## Recommended Posts
Open a new note in Evernote. Press Ctrl+B and type in bold. Click in the text you just typed: the formatting bar highlights the B symbol because the text is bold.
Now click at the end of the line and press Enter; notice that the new blank line does not have the B symbol highlighted.
Now start typing: the words you type are in bold, and after just one letter the B symbol becomes highlighted.
Suggestion: blank lines that carry formatting (i.e., where the text about to be typed will be bold) should highlight the B in the formatting bar, just like text that has already been typed. It's annoying to switch between typing headers and normal text and never know, until you type, what the formatting will look like. The same problem applies to all the formatting indicators.
#### Archived
This topic is now archived and is closed to further replies.
Create an unlimited supply of worksheets for practicing exponents and powers. The worksheets can be made in html or PDF format; both are easy to print, and you can choose to include answers and step-by-step solutions. Every time you click the New Worksheet button, you get a brand-new printable PDF worksheet on Exponents and Powers. You can also create printable tests from these Grade 8 Exponents questions: select one or more questions using the checkboxes above each question, then click "add selected questions to a test".
The material follows the Common Core standard for Grade 8, Expressions & Equations (CCSS.Math.Content.8.EE.1): know and apply the properties of integer exponents to generate equivalent numerical expressions. For example, 3^2 × 3^−5 = 3^−3 = 1/3^3 = 1/27. Related skills include Relationship Between Squares and Square Roots, Positive and Negative Square Roots, Simplify Expressions Involving Exponents, Powers of Products & Quotients (integer exponents), and Properties of Exponents Challenge. Class 8 (CBSE) Exponents and Powers test papers and free online mock tests with answers are available for download in PDF, as per the NCERT syllabus. There is also a live Grade 11 Maths show discussing Exponents & Surds, in which the exponential rules are tested along with a quick revision of scientific notation.
Here we are with a math test on exponents. There are different rules set aside to determine how you solve problems involving more than one exponent, or an exponent and a whole number. The test has 10 questions with a variety of multiple-choice and open-ended responses. Sample questions:
1. Write the base of −(−6)^5.  a. 6  b. −6  c. −6 × 5  d. 5
2. Write one billion as a power of 10.  a. 10^9  b. …
3. Evaluate: 4^6.  a. 1296  b. …
4. Write the multiplicative inverse of: (i) 3^−3  (ii) 10^−10
5. Expand the following using exponents: (i) 0.0523  (ii) 32.005  […]
6. If the expense for producing a good is $68, what will the price be so that the producer's profit is 15% of the price?  a. $78.20  b. $80  c. $83  d. $125.80
In scientific notation, a number is expressed in two parts: a number between 1 and 10 multiplied by a power of 10, where the exponent must always be an integer. Scientific notation is used to represent very large numbers; for example, we can write 136 000 000 as 1,36 × 10^8, which is called the scientific notation for 136 000 000.
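The exponent facts quoted above can be double-checked with exact rational arithmetic; this snippet is illustrative, not part of any worksheet:

```python
from fractions import Fraction as F

# Law of exponents: 3^2 * 3^-5 = 3^(2-5) = 3^-3 = 1/27
assert F(3)**2 * F(3)**-5 == F(3)**-3 == F(1, 27)

# One billion as a power of ten
assert 10**9 == 1_000_000_000

# Scientific notation: 1.36 x 10^8 = 136 000 000
assert F('1.36') * 10**8 == 136_000_000

# Profit problem: expense $68 and profit 15% of the price p,
# so p - 0.15p = 68, giving p = $80
assert F(68) / (1 - F(15, 100)) == 80
```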
Journal of Lie Theory 14 (2004), No. 2, 569--581
Copyright Heldermann Verlag 2004
On the Principal Bundles over a Flag Manifold
Hassan Azad, Dept. of Mathematical Sciences, King Fahd University, Dhahran 31261, Saudi Arabia, [email protected]
Indranil Biswas, School of Mathematics, Tata Institute of Fundamental Research, Homi Bhabha Road, Bombay 400005, India, [email protected]
[Abstract-pdf] Let $P$ be a parabolic subgroup of a semisimple simply connected linear algebraic group $G$ over $\mathbb C$ and $\rho$ an irreducible homomorphism from $P$ to a complex reductive group $H$. We show that the principal $H$--bundle over $G/P$, associated for $\rho$ to the principal $P$--bundle defined by the quotient map $G\, \longrightarrow\, G/P$, is stable. We describe the Harder--Narasimhan reduction of the $G$--bundle over $G/P$ obtained using the composition $P\, \longrightarrow\, L(P)\, \longrightarrow\, G$, where $L(P)$ is the Levi factor of $P$.
[FullText-pdf (172 KB)] for subscribers only.
Add some words before counters of enumerate environment in beamer
I want to change the default style of enumerate from
1. item1
2. item2
to
Stage 1. item1
Stage 2. item2
So I use
\setbeamertemplate{enumerate item}{Stage \arabic{enumi}.}
\begin{enumerate}
\item
\item
\end{enumerate}
\setbeamertemplate{enumerate item}{\arabic{enumi}.}
But I find that the position of the counter (enumi) is fixed while the word "Stage" is pushed to the left border of the slide (See the snapshot below).
So what can I do to have the enumerate counter display normally?
Although not optimal, the following is a temporary work-around (until someone else provides a better alternative):
\usepackage{etoolbox}% http://ctan.org/pkg/etoolbox
\makeatletter
\patchcmd{\beamer@enum@}{\llap}{\mbox}{}{}% \llap -> \mbox in \beamer@enum@
\makeatother
...
\setbeamertemplate{enumerate item}{~~Stage \arabic{enumi}.}
beamer uses a left overlap (\llap) for item labels in the enumerate environment. Hence, making this an \mbox (using \patchcmd from the etoolbox package) plus adding ~~ to the enumerate item as a prefix provides the correct spacing.
# Re-name all identifiers to a single letter
Imagine a very simple language. It has just 2 syntax features: (), which indicates a block scope, and words consisting of 1 or more lowercase ASCII letters, each of which indicates an identifier. There are no keywords.
In this language, the value of identifiers is not important except when they appear multiple times. Thus for golfing purposes it makes sense to give them names that are as short as possible. A variable is "declared" when it is first used.
The goal of this challenge is to take a program, either as a string or as a ragged list, and make the identifiers as short as possible. The first identifier (and all its references) should be re-named to a, the next b then so on. There will never be more than 26 identifiers.
Each set of () encloses a scope. A scope can access variables created earlier in its enclosing scopes, but not those created in child or sibling scopes. Thus if we have the program (bad (cab) (face)) the minimum size is (a (b) (b)). A variable belongs to the scope in which it is first used. When that scope ends the variable is deleted.
## In summary:
1. If a variable name has appeared in the scope or enclosing scopes before, re-use the letter
2. Else create a new letter inside the current scope
3. At the end of a scope delete all variables created inside the scope.
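For reference, these rules admit a straightforward (non-golfed) implementation, sketched here in Python assuming the ragged-list I/O format:

```python
def rename(expr, scope=()):
    """Rename identifiers in a ragged list to single letters a, b, c, ..."""
    scope = list(scope)  # copy: names created here die with this scope
    out = []
    for item in expr:
        if isinstance(item, list):
            # child scope sees our variables but cannot add to them
            out.append(rename(item, scope))
        else:
            if item not in scope:
                scope.append(item)  # "declare" on first use
            out.append(chr(97 + scope.index(item)))
    return out
```

Applying `rename` to its own output is a no-op, matching the idempotence requirement in the IO section.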
## Test cases
{
"(rudolf)": "(a)",
"(mousetail mousetail)": "(a a)",
"(cart fish)": "(a b)",
"(no and no)": "(a b a)",
"(burger (and fries))": "(a (b c))",
"(burger (or burger))": "(a (b a))",
"(let (bob and) (bob let))": "(a (b c) (b a))",
"(let (a (fish (let))))": "(a (b (c (a))))",
"(kor (kor kor) (kor kor))": "(a (a a) (a a))",
"((kor) kor)": "((a) a)",
"(((((do) re) mi) fa) so)": "(((((a) a) a) a) a)",
"(do (re (mi (fa (so)))))": "(a (b (c (d (e)))))",
"((mark sam) sam)": "((a b) a)",
}
## IO
1. You can take input as either a string or ragged array.
2. You can give output either as a string or ragged array.
3. However, you must use the same format for input and output. Specifically, you need to produce output in such a way that it would also be a valid input. Applying the function or program more than once always has the same result as applying it once.
Neither scopes nor variable names may be empty. Applying your program to its result again should be a no-op.
• May we assume the input never contains empty scopes ()? And I presume empty variable names are not valid? Sep 12 at 13:57
• Will all identifiers have more than one letter? Sep 12 at 14:03
• To be perfectly clear you should put that in your post. Sep 12 at 14:14
• Plural "letters" implies more than one. Sep 12 at 14:48
• @customcommander variables are introduced when they are first used, not earlier. So the ad in the sublist can't refer to the outer ad, as this one only comes into play later. Sep 13 at 22:31
# tinylisp, 149 bytes
(load library
(d G(q((L N)(i(c()(h L))(i L(c(G(h L)N)(G(t L)N))L)(i(contains? N(h L))(c(last-index N(h L))(G(t L)N))(G L(insert-end(h L)N
(q((L)(G L(
The last line is an anonymous function that takes a nested list and returns a nested list. The returned list uses nonnegative integers instead of letters, which seems to be allowed by this comment.
### Ungolfed
(load library)
(def _rename
(lambda (expr names)
(if expr
(cons
(_rename (tail expr) names))
nil)
(cons
(_rename (tail expr) names))
(_rename expr
(lambda (expr) (_rename expr nil))
Here's a 167-byte version that uses letters instead of numbers:
(load library
(d G(q((L N)(i(c()(h L))(i L(c(G(h L)N)(G(t L)N))L)(i(contains? N(h L))(c(single-char(a(last-index N(h L))97))(G(t L)N))(G L(insert-end(h L)N
(q((L)(G L(
Try it online!
• The most appropriate language for this challenge Sep 13 at 17:07
• I want to see this answer applied to itself Sep 14 at 10:49
• @mousetail Here you go Sep 14 at 16:20
# Python, 79 bytes
f=lambda _,**s:[i==[*i]and f(i,**s)or s.setdefault(i,chr(97+len(s)))for i in _]
Attempt This Online!
Thanks to @loopy walt.
## Whython, 66 bytes
f=lambda _,**s:[s.setdefault(i,chr(97+len(s)))?f(i,**s)for i in _]
Attempt This Online!
Port of the above.
## Whython, 72 bytes
f=lambda a,s={}:s.setdefault(a,chr(97+len(S:={**s})))?[f(i,S)for i in a]
Attempt This Online!
My original solution
# C (clang), 208 bytes
a[26];b[26];c;d;e;f;g;h;i(*j){for(;f=*j;j++)if(f/47)c=c?:j;else{if(c){for(*j=g=d=0;a[d]*!g;wcscmp(a[d],c)?d++:g++);g||(a[d]=c,b[e=d]=h);putchar(d+97);}for(c=!putchar(f);f==41&b[e]==h;a[e--]=0);h+=f%17-47%f;}}
Try it online!
-1 byte thanks to Noodle9!
-2 bytes thanks to ceilingcat!
-1 byte thanks to c--!
• Suggest o=o?:s instead of o=o?o:s. Sep 12 at 19:38
# Prolog (SWI), 81 bytes
\X,[I]-->[H],{\(X,H,I),Z=X;(Z=X;append(X,[H],Z)),nth1(I,Z,H)},(\Z;[]).
a--> \_,!.
Try it online!
Instead of using predicates, I use definite clause grammar (DCG) notation. DCG notation is syntactic sugar in Prolog that allows writing more concise code for sequence processing. The basic way my program operates is that it iterates through the input and output list using DCG notation to bind the first item of the remainder of the input list to NextItem and the first item of the remainder of the output list to ProcessedItem. Then the program handles each potential case for NextItem and binds ProcessedItem to the appropriate value. Finally the program attempts to process the following items, or, if none are found, terminates.
## Ungolfed Code
rename_item(InScopeVars),
[ProcessedItem]
-->
[NextItem],
{
% Case 1: NextItem is a sublist
% DCGs are syntactic sugar for predicates with an additional two arguments:
% the input sequence and the output sequence.
% This fails if NextItem is not a list, in which case we'll do Case 2 instead.
rename_item(InScopeVars,NextItem,ProcessedItem),
% Since NextItem was not a new variable we don't change what is in scope
NewInScopeVars=InScopeVars
;
% Case 2: NextItem is a variable
(
% Case 2.1: NextItem is in scope (note this is not checked till later)
NewInScopeVars=InScopeVars
;
% Case 2.2: NextItem is not in scope so we need to add it to scope
append(InScopeVars,[NextItem],NewInScopeVars)
),
% Attempt to get the index of NextItem in the scope.
% If this fails then we backtrack to Case 2.2 and add NextItem to the scope.
nth1(ProcessedItem,NewInScopeVars,NextItem)
},
(
% Attempt to rename the next remaining item in the list
rename_item(NewInScopeVars)
;
% If that fails then we are done!
[]
).
rename -->
% We don't need to pass [] since appending to an unbound variable defaults to
% appending to the empty list
rename_item(_),
% Once we get the first solution, we use the cut to prevent backtracking
!.
Before I realized that it was permitted to output numbers instead of letters I wrote the following program:
## 103 bytes
\X,[I]-->[H],{\(X,H,I),Z=X;(Z=X;append(X,[H],Z)),nth1(N,Z,H),C is 96+N,name(I,[C])},(\Z;[]).
a--> \_,!.
Try it online!
# Ruby, 69 65 60 bytes
f=->l,*r{l.map{|x|x*0==[]?f[x,*r]:""<<97+(r|=[x]).index(x)}}
Try it online!
# Prolog (SWI), 101 bytes
This version uses 1-26 instead of a-z (alternative version for a-z below). Had to (re)learn about Prolog dictionaries for this one, I like them!
[H|T]+[I|U]+A/N:-(H@>[],H+I+A/N;I=A.get(H)),T+U+A/N;I is N+1,T+U+A.put(H,I)/I.
A+A+_.
A+B:-A+B+_{}/0.
Try it online!
(The H@>[] part is a bit hacky, it's a list check (is_list(H)) that works on TIO, but there's probably a way to get rid of it, as neither are necessary in my ungolfed solution. Please correct me!)
### Ungolfed, commented
f([H|T],[NewH|NewT],Dict,N):-
% Case 1) No dict update needed
( f(H,NewH,Dict,N); % First element is a list (matches f([H|T]... or f([]...)
NewH=Dict.get(H)), % First element is an atom
f(T,NewT,Dict,N);
% Case 2) If first part failed, need to update the dictionary
NewH is N+1,
NewDict = Dict.put(H,NewH),
f(T,NewT,NewDict,NewH).
f([],[],_,_).
% Wrapper predicate for recursive predicate
A+B:-f(A,B,_{},0).
### a-z solution, 114 bytes
[H|T]+[I|U]+A/N:-(H@>[],H+I+A/N;I=A.get(H)),T+U+A/N;Z is N+1,name(I,[Z]),T+U+A.put(H,I)/Z.
A+A+_.
A+B:-A+B+_{}/96.
Try it online!
• Welcome to Code Golf, and nice answer! Sep 14 at 23:09
# K (ngn/k), 65 54 bytes
{$[x~*x;s$c$97+(s::?s,x)?x;[t:s;r:o'x;s::t;r]]};s:!0
Try it online!
# JavaScript (Node.js), 109 bytes
s=>s.replace(/\w+|\S/g,m=>m>')'?v[m]=v[m]||Buffer([v[0]++]):(v=m<')'?S.push(v)&&{...v}:S.pop(),m),S=[v=[97]])
Attempt This Online!
## Ungolfed and Explained
str => str.replace(
  /\w+|\S/g,                          // replace each variable name or bracket
  m => m > ')' ?                      // if match is a variable
    v[m] = v[m] || Buffer([v[0]++]) : // lookup the new var name, otherwise assign new
    (
      v = m < ')' ?                   // otherwise, if match is opening bracket
        S.push(v) && {...v} :         // push current var dict to stack
        S.pop(),                      // otherwise pop var dict
      m                               // return matched bracket (no-op)
    ),
  S = [                               // initialise stack
    v = [                             // initialise var dict
      97                              // v[0] = char code of next var name
    ]
  ]
)
# Raku, 76 bytes
sub f($x,%h?){$x~~Str??(%h{$x}//=chr 97+%h)!!$x.map:{f$^a,$_}with %h.clone}
Try it online!
# Rust, 277 bytes
enum N{A(String),B(Vec<N>)};fn f(a:&[N],b:&mut Vec<String>)->Vec<N>{a.iter().map(|j|match j{N::A(j)=>N::A(('a'..='z').map(|i|i.to_string()).nth(b.iter().position(|a|a==j).unwrap_or_else(||{b.push(j.clone());b.len()-1})).unwrap()),N::B(j)=>N::B(f(j,&mut b.clone()))}).collect()}
Table 1
Terrigenous sedimentary rock abundance (10^6 km^3/my) over the Phanerozoic. f_erosion(t) = [meas(ΔV/Δt)/expon(ΔV/Δt)], where expon represents the value calculated from an exponential fit to the ΔV/Δt data (see fig. 1). f_R(t) = [f_erosion(t)/1.58]^(2/3) (see text). ΔV/Δt values from Ronov (1993). The value for the Pliocene is not used in the modeling (see text). Ages from Gradstein and Ogg (1996)
# Math Help - uniqueness of IRR
1. ## uniqueness of IRR
There was a two part question assigned to my class for homework. It's already been turned in, but I couldn't get the 2nd part, and am curious how it is done:
part a:
suppose there is a k between 0 and n such that either:
(i) $C_0, C_1, \ldots, C_k \le 0$ and $C_{k+1}, C_{k+2}, \ldots, C_n \ge 0$
or
(ii) $C_0, C_1, \ldots, C_k \ge 0$ and $C_{k+1}, C_{k+2}, \ldots, C_n \le 0$
show there is a unique i > -1 for which the net present value of this transaction is 0.
this is the part I got. I don't need help on it, but am just introducing it, because I'm sure it somehow is used to prove the next part.
part b:
Let $C_0, C_1, ..., C_n$ be an arbitrary sequence of net cashflows, and let
$F_0 = C_0,\quad F_1 = C_0 + C_1,\quad \ldots,\quad F_n = C_0 + C_1 + \cdots + C_n$
Suppose both $F_0$ and $F_n$ are non-zero, and that the sequence ${F_0, F_1, ..., F_n}$ has exactly one change of sign.
Show there is a unique i > 0 such that the net present value of these cash flows is 0 (although there may be one or more negative roots).
This is the part I need help on. I can show what I've done so far, but it really isn't anything except showing there is a root i > 0. I haven't shown that it is unique.
my work:
we need to show that the net present value equation, written in terms of $v = \frac{1}{1+i}$, has a unique solution 0 < v < 1.
if v = 0, the net present value is $C_0 = F_0$,
if v = 1, the net present value is $C_0 + C_1 + \cdots + C_n = F_n$,
we are given F_0 and F_n are non-zero and opposite signs of each other, so by the intermediate value theorem there exists a root v, 0 < v < 1.
This is all I have come up with, I don't know if that is a good starting point for the rest of the solution or not. any help would be appreciated! Thanks in advance.
2. Yikes!
$\epsilon \neq 0$
If IRR is not unique, then both $(i)$ and $(i+\epsilon)$ produce the same result.
3. could you explain what you mean by this a bit more please?
Thank you!
4. If IRR is not unique, then there must be some $\epsilon \neq 0$ such that $CashFlows\;Discounted\;@\;i\;=\;CashFlows\;Discounted\;@\;(i\;+\;\epsilon)$
For the simplest case, one cash flow, C, over one year, we have:
For $v = \frac{1}{1+i}$ and $w = \frac{1}{1+i+\epsilon}$
$Cv = Cw \implies C(1+i) = C(1+i+\epsilon) \implies \epsilon = 0$
Do we have to say more than that?
5. I'm sorry, but I'm not sure your answer is valid. I agree that it works with you for the simple case you illustrated, but what about for a more complicated case?
i.e.
$C_0+C_1*v+C_2*v^2+...+C_n*v^n=C_0+C_1*w+C_2*w^2+...+C_n*w^n$
in this case, v = w is definately ONE solution, but is there a way you were alluding to that says it is the ONLY solution?
*edit*
for example, consider (and solve) the equation
$2x+3x^2=2y+3y^2$
$2(x-y)+3(x-y)(x+y)=0$
$(x-y)[2+3(x+y)]=0$
$x=y$ or $y=-2/3 -x$
so in this case the solution x = y was not unique.
6. Hmpf! For a minute there, I thought I had something because your x and y are independent and my (i) and (i+ $\epsilon$) clearly are not. However, full recognition of the relationship leads to the same dilemma.
I considered making $\epsilon > 0$, but that is insufficiently general.
Perhaps some other kind soul will put me out of my misery while I am thinking about it.
7. Originally Posted by minivan15
for example, consider (and solve) the equation
$2x+3x^2=2y+3y^2$
$2(x-y)+3(x-y)(x+y)=0$
$(x-y)[2+3(x+y)]=0$
$x=y$ or $y=-2/3 -x$
so in this case the solution x = y was not unique.
with solution x = y or x = -(y + 2/3);
and (I think) that due to the "nature" of IRR,
x = -(y + 2/3) is not valid.
Anyhow, not being sure, plus confused to no end with those
scary(!) flows of yours, I asked someone this "fresh" question:
.................................................. ..................................................
x^2 + 3x = k^2 + 3k ; x obviously = k, right?
x^2 + 3x - k^2 - 3k = 0
x = {-3 +- sqrt[(2k + 3)^2]} / 2
x = (-3 + 2k + 3)/2 or x = (-3 - 2k - 3)/2
x = k or x = -k-3
WHY?
.................................................. .................................................
.................................................. .................................................
You should ask that question about the first line of your post. Clearly if x = k
then that makes the first equation true. But why do you think that's the
only way the first equation can be true?
Compare to: "x^2 = k^2 ; obviously x = k right? er but 3^2 = (-3)^2!"
Perhaps it would be instructive to substitute in x = -k-3 to the equation
x^2 + 3x = k^2 + 3k and see for yourself that both sides really do match up.
(You just need to expand the LHS and collect terms.)
Also even if x = k was the only solution of your first equation, the fact
you arrived at the end at x = k or x = -k-3 wouldn't be a problem unless
you also noted that every step was "if and only if". Otherwise you can
quite easily get surplus solutions at the end. Compare to...
"Let x = y. Therefore x2 = y2. This has two solutions: x = y or x = -y. Er what gives?"
What gives is that some of the steps of this argument are only ==> not <==>.
Hence "x = y or x = -y" is an *implication* of the premise "x = y" but is
not equivalent. This is why at the end of a "==>" argument you have to
check that all your solutions are valid and throw out any spurious "solutions".
.................................................. .................................................
Amen
8. I was more done than I thought? Very unsatisfying.
Of course, this becomes monumentally more difficult as the number of cash flows increases. I'm trying to get Descartes' Rule of Signs to help us out, but it isn't quite making sense. I can't control the direction of the Cash Flows.
9. Well, as far as I know, the definition of IRR is:
"The internal rate of return on an investment or potential investment is the annualized
effective compounded return rate that can be earned on the invested capital."
A simple example:
3 annual cash flows of 2000, 3000 and 5000 at 12%:
0[-7737] 1[2000] 2[3000] 3[5000]
This is from (rounded):
2000/1.12^1 + 3000/1.12^2 + 5000/1.12^3 = 1786 + 2392 + 3559 = 7737
Not knowing that the IRR is 12%, we then have:
0[-7737] 1[2000] 2[3000] 3[5000]
and need to calculate the resulting IRR (IRR = r):
2000/(1+r)^1 + 3000/(1+r)^2 + 5000/(1+r)^3 = 7737
We can solve for r, but only by using iteration.
And we know there is only one possible r.
So why try and prove it?
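The iteration mentioned above is easy to sketch. Here is a hedged Python version using bisection (since 7737 is a rounded present value, the recovered rate is only approximately 12 percent):

```python
def npv(r, flows, pv):
    # present value of the future cash flows minus the price paid today
    return sum(c / (1 + r) ** (k + 1) for k, c in enumerate(flows)) - pv

flows, pv = [2000, 3000, 5000], 7737
lo, hi = 0.0, 1.0                  # npv is strictly decreasing in r for positive flows
for _ in range(60):                # bisect until the bracket is tiny
    mid = (lo + hi) / 2
    lo, hi = (mid, hi) if npv(mid, flows, pv) > 0 else (lo, mid)
# lo is now approximately 0.12, i.e. IRR of about 12%
```

Because all the future flows are positive, npv(r) is strictly decreasing in r, which is exactly why the bracket always contains one and only one root.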
10. Well, somewhere we should generate a sound theoretical basis for our actions. We certainly should make undergraduates work on such proofs.
11. Originally Posted by TKHunny
Well, somewhere we should generate a sound theoretical basis for our actions.
Best way to generate a sound is eat beans
13. 1 sign change? See, that is the direction.
I think it obvious that if all the cash flows are positive and all the discounts are integer multiples of 2, we're done.
## Sunday, February 8, 2009
### Emacs mode for machine-readable copyright files !
Quite a fair amount of time has flown since my decision to implement an emacs mode for the proposed machine-readable format for debian/copyright files. I have to admit that I had left that sleeping for quite a long while. But I took my courage, and rewrote nearly everything I had done so far. I'm quite happy with the results, to be truthful. The debian-mr-copyright-mode features:
• syntax highlighting
• a coverage mode showing all files in the package, including the ones not covered by the copyright file
• the coverage mode shows which glob matches a given file
• it provides links to visit files and go to the declaration leading to a given label
The aim of this mode is to provide a quick check for uncovered files in a package, and also a means to verify that the license of a file is not misrepresented.
The code can be downloaded from the git repository, accessible using
git clone git://git.debian.org/git/users/fourmond/debian-mr-copyright-mode.git
No debian package is available for the time being, although that could definitely change after lenny is out. I hope this emacs mode will help the new format to be widely adopted.
Issue No. 01 - Jan.-March (2016 vol. 9)
ISSN: 1939-1412
pp: 90-99
Tatsuma Sakurai , Graduate School of Information Sciences and Technology, The University of Tokyo, 7-3-1 Hongo, Bunkyo-ku, Tokyo, Japan
Masashi Konyo , Graduate School of Information Sciences, Tohoku University, 6-6-01 Aramaki Aza Aoba, Aoba-ku, Sendai, Japan
Hiroyuki Shinoda , Graduate School of Information Sciences and Technology, The University of Tokyo, 7-3-1 Hongo, Bunkyo-ku, Tokyo, Japan
ABSTRACT
We provide supplemental data to a vibrator array tactile display, as well as additional data for application of the edge stimulation (ES) method proposed in our previous study. By vibrating two surfaces in different phases and touching their boundary, a strong continuous line sensation, not on the vibrators themselves, but along the boundary, is obtained. This vibrotactile edge is suitable for presenting virtual lines, areas, and shapes on a rigid flat surface. We investigated the fundamental performance of the ES method through psychophysical experiments. The effects on the vibrotactile detection thresholds were investigated for three mechanical parameters, i.e., the vibratory frequency, the phase difference between the vibrations, and the gap distance between adjoining vibratory surfaces. Two-line discrimination thresholds for lines presented by the ES method were also determined. We found that the detection thresholds under the ES method were lower than 10 $\mu$m even at low frequencies (lower than 50 Hz), which is significantly lower than that when simply touching a single vibratory surface. A comparison of the perceived widths revealed that the ES method provides a more localized tactile image than a single-pin vibrator or a flat-top vibrator. A 3 $\times$ 3 vibrator array display was developed using the ES method based on the properties obtained from the experiments. Seven categories of display patterns were presented with the ES array display and the participants' responses matched the presented patterns 95 percent of the time.
INDEX TERMS
Vibrations, Arrays, Shape, Force, Actuators, Frequency measurement, Phase frequency detector
CITATION
T. Sakurai, M. Konyo and H. Shinoda, "Sharp Tactile Line Presentation Using Edge Stimulation Method," in IEEE Transactions on Haptics, vol. 9, no. 1, pp. 90-99, 2016.
doi:10.1109/TOH.2015.2477839
If you've used Marketo's dynamic ICS files (the "Calendar File" token type) you're probably not completely happy with 'em. While undeniably useful, they lack certain key elements we're used to when sending invites and requests via Outlook or other apps.
Major complaints from Marketo users:
• no event reminders (alarms)
• no all-day events
• can't tokenize the event title
Reminders and all-days are part of the venerable ICS/iCalendar standard from 1998, and it's frustrating to have them missing from a modern product.
So about a year ago, I wrote a tiny web service that fills in all these gaps. I dubbed it MagICS (a purposely old-fashioned name) and chatted it up on the Marketo Community but no one seemed too taken with it. Changed to the slicker name Agical but still couldn't get users to pay attention to how cool it is/was! But finally, Marketo master Nicholas M. tried it out this week and gave me the feedback I needed: "it just works."
So, after a year, I've decided to publish it here.
Agical is incredibly simple to use. You just put a link in your Marketo email with appropriate query params, including dynamic {{lead.tokens}} or {{my.tokens}} in the link as needed. When a lead clicks the link, Agical generates a dynamic ICS file. It's just like the Marketo feature, only you can pass more event settings to Agical and thus get more event settings back out.
How do I use it?
The base URL is https://ics.agical.io/. To that URL, you add the following params (param names are case-sensitive):
subject — subject of the event (in ICS terminology, the "summary")
description — long description of event
organizer — person organizing event
location — location of event
attach — a URL that is related to the event, such as a webinar URL
Note: attach is one component I'm curious about support for across calendar platforms. It's harmless to include it in the ICS, and it's so helpful when it's supported (because you don't have to hunt in the description or location for a URL). So please let me know what you find with it.
dtstart — start date/time in ISO date format
dtend — end date/time in ISO
ISO dates are formatted like so:
2016-05-26T15:00:00-04:00 (with static time offset); or
2016-05-26T15:00:00Z (in UTC, a.k.a. Zulu time)
Agical will always translate to UTC in the final ICS file.
duration — as an alternative to dtend specify a duration like 1H or 30M
reminder — the alarm time, in minutes, before the event
allday — use allday=1 for an all-day event, omit for a standard event
recur — set to recur=weekly or recur=monthly for recurring events, or omit for a one-time event
recuruntil — set to the end date/time of a recurring event
echo — optional parameter to print the contents of the resulting ICS to your browser instead of downloading it. This is very, very useful for debugging.
uid — optional parameter to hard-code the unique identifier for the event, allowing (in some calendar apps) the ability to update the event over time. The string @ics.agical.io is automatically appended to the value.
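The ISO-to-UTC translation mentioned under dtstart is plain ISO-8601 handling; Agical's actual internals aren't published, but a Python sketch of the equivalent conversion looks like this:

```python
from datetime import datetime, timezone

# a dtstart value with a static offset, as in the examples above
dt = datetime.fromisoformat("2016-05-26T15:00:00-04:00")

# convert to UTC and render in the compact ICS date-time form
stamp = dt.astimezone(timezone.utc).strftime("%Y%m%dT%H%M%SZ")
# 15:00 at UTC-4 becomes 19:00 UTC, i.e. 20160526T190000Z
```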
Here's an example URL (sorry for the wrapping, it's bound to happen):
https://ics.agical.io/?subject=Meet%20{{company.Account Owner First Name}}&organizer=Sandy&reminder=45&location=Sandy%27s%20Desk&dtstart=2016-10-26T15:00:00-04:00&dtend=2016-10-26T16:00:00-04:00&attach=http://www.example.com/
Does it work in all calendar apps?
You'll probably find that some parameters are ignored, or set to defaults, by some apps.
Google Calendar typically doesn't use the reminder time in the ICS file, instead using the lead's default reminder time. (For example, it'll set an alarm for 30m before instead of 45m before.) Better than no reminder at all! Yet sometimes Gcal does accept the "T-minus" time exactly as-is. Haven't been able to put my finger on what's different in those cases.
Outlook 2013, and maybe earlier, has the quirk that it wants to ignore the attach param, but won't ignore it — and will show a small warning — if it doesn't end with a slash. So attach=http://www.example.com/webinar/ is fine while attach=http://www.example.com/webinar is considered wrong, though neither will show in Outlook. Go figure. So you should include the trailing slash to be safe, if you're including attach at all.
In general, Agical uses only standard RFC 2445 features, so it will generate backward-compatible ICS files (since the standard is so old, there really aren't "modern" versions; it's more like modern software never bothered to implement old stuff).
I look forward to getting feedback about which params are respected by which apps. The goal of Agical is to present a standards-compliant, gracefully-degrading ICS file with advanced features made as... possible... as possible!
Any other gotchas?
Remember that you're building a URL. Which means (h/t Nicholas again) you need to encode reserved characters, just as you would with any other URL in an email. If you want to do a positive offset from UTC, that's 12:34:56%2B04:00, not 12:34:56+04:00 (literal plus signs have special meaning).
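If you're assembling the link programmatically rather than in an email editor, a standard URL-encoding helper handles the reserved characters for you. A hedged Python sketch (the parameter values here are made up):

```python
from urllib.parse import urlencode

params = {
    "subject": "Meet Sandy",
    "location": "Sandy's Desk",
    "dtstart": "2016-10-26T12:34:56+04:00",  # the literal '+' must become %2B
    "reminder": "45",
}
# urlencode percent-encodes '+' and ':' so the offset survives the query string
url = "https://ics.agical.io/?" + urlencode(params)
```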
What's it cost?
Agical.io is free to use. It's hosted on FortRabbit's (formerly Viaduct.io's) enterprise platform for 24/7 availability.
Current Agical.io service status
Agical now supports the duration parameter for ICS files (not supported by GCal).
# Assembly Language Question, I/O ports
#### ke5nnt
Joined Mar 1, 2009
384
This instruction sequence, for the 16F series:
Rich (BB code):
bsf status,5
movlw B'01001100'
movwf trisb
Does this essentially mean:
RB0 = output
RB1 = input
RB2 = output
RB3 = output
RB4 = input
RB5 = input
RB6 = output
RB7 = output ???
Or, does reading B'xxxxxxxx' from left to right go bit 7,6,5,4,3,2,1,0?
Another question: Say pin 4 is RA3/MCLR/Vpp and you want to use it as MCLR. Would you set that pin as input whenever you're specifying the literal value? What about pins that ultimately wont be connected to anything in the circuit, are those better set as inputs or outputs?
I'm sorry to be bothersome with questions like these, but books and online tutorials only go into so much detail... even those specified for "beginners" make too many assumptions of things "you should already know".
Last edited:
#### JDT
Joined Feb 12, 2009
658
Other way round. RB0 is the RH end (the Least Significant Bit).
I/O pins that are not used should be set as outputs.
An unused pin set as an input can "float" between the supply rails and can cause excessive current to flow in parts of the chip. Also, there is a risk of static damage with un-connected inputs. If set as an output, a transistor on the chip internally connects it to ground, shorting any static.
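With RB0 as the least-significant bit, B'01001100' therefore decodes the other way round from the original guess. A quick sketch of the decoding (Python rather than PIC assembly, purely for illustration — in TRIS registers a 1 means input, a 0 means output):

```python
tris = 0b01001100  # the value moved into TRISB

# map each RB pin to its direction, reading from the LSB up
directions = {f"RB{bit}": ("input" if (tris >> bit) & 1 else "output")
              for bit in range(8)}
# RB2, RB3 and RB6 come out as inputs; the other five pins are outputs
```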
#### ke5nnt
Joined Mar 1, 2009
384
I/O pins that are being used as a special function, like MCLR, should be specified as inputs or outputs? Does it matter, or will setting that bit's SFR override any B'xxxxxxxx' command?
#### ke5nnt
Joined Mar 1, 2009
384
Another question regarding the configuration word (bits).
I'm seeing different ways of doing it and am wondering if they are all practical or correct. They are:
1. Setting the configuration in the assembler during programming
2. (double underscore) __config B'xxxxxxxxxxxxxx' (x = 0, or 1)
3. _CONFIG_CP_OFF&_WDT_OFF&_PWRTE_ON&_XT_OSC
My preference seems to be with choice 2, which seems the easiest. Are all 3 methods correct though?
#### AlexR
Joined Jan 16, 2008
735
Setting the config bits during programming is not a good idea. You are likely to forget to do it and even if you do remember it leaves you no record of how the bits were set.
Of the other two methods I prefer 3 since it tells you how the various features are set without you having to dig through the data sheet to decode what the various bits mean. You do however have the syntax wrong. Option 3 should start with the __config command so it should read
Rich (BB code):
__config _CP_OFF & _WDT_OFF & _PWRTE_ON & _XT_OSC
Last edited:
#### ke5nnt
Joined Mar 1, 2009
384
Thank you for the response. Option 3 that I gave I was quoting directly from one of the online books referenced in AAC's links forum Useful websites for electronics (Ver. 2) to This Page under the heading Example of How to Write a Program. There seems to be some other mistakes I've noticed in this ebook as well, so I'm grateful for the clarification on that. Following the example on this ebook for specifying the include file, the author says:
include "pic16f887inc", which returned an error when I tried it. I've since changed it to #include "p16f887.inc" which did not return an error.
Anyways, point being, thanks for clearing that up.
#### ke5nnt
Joined Mar 1, 2009
384
Thanks Tahmid, always helpful you are.
I need clarification on something else regarding I/O Ports.
When specifying direction for port A when PORTA is only 5 bits wide, using binary, would I use:
movlw B'10011' or...
movlw B'00010011' ?
The datasheet says those unimplemented bits are read as 0, so I assume I would specify all 8 bits, just leaving the 3 unimplemented bits as 0.
#### Tahmid
Joined Jul 2, 2008
344
Hi,
When specifying direction, since PORTA is only 5 bits wide, B'10011' is the same as B'00010011', as the bits 5,6 and 7 are unimplemented. By the way B'10011' reads the same as B'00010011' in either hex or decimal. Even if you write B'11110011' it will still be the same as the higher 3 bits are unimplemented and will make no difference if you write 1 or 0 or nothing at all.
Thanks. |
## Smilei users
86 Members
github.com/SmileiPIC/Smilei
1 Servers
16 Jul 2021
Robin Timmis@fre14:19:03
Robin Timmis fredpz: it has always worked fine without a function 14:19:22
masladom Hi! I was just wondering, is there an easy way to define a front-tilted Gaussian laser pulse using LaserGaussian2D/LaserGaussian3D? 15:31:22
masladom I mean that there is a tilt between the pulse front and the direction perpendicular to the Gaussian beam propagation. 15:31:52
beck-llr Hi! I'm afraid this is not supported in the "Gaussian" versions of the laser implementations. 15:49:32
z10f
I mean that there is a tilt between the pulse front and direction perpendicular to the Gaussian beam propagation.
If you know an analytical expression you can try to define it in Python in the general Laser block, as in the benchmark tst3d_04_laser_wake.py
16:04:27
z10f
In reply to @backereth:matrix.org
In seems that phaseZero doesn't change CEP and waveform, it only shifts it along the time-axis.
Hello, then I would try to define the CEP in the generic Laser block, like in the benchmark tst3d_04_laser_wake.py
16:05:32
In reply to @z10f:matrix.org
Hello, then I would try to define the CEP in the generic Laser block, like in the benchmark tst3d_04_laser_wake.py
Ok sure, I will try that, thank you :)
16:19:01
17 Jul 2021
egaltier joined the room.19:12:14
18 Jul 2021
fredpz Robin Timmis I am unable to test it right now, but I suggest to use print() to find out if there are any unexpected things 12:34:38
19 Jul 2021
lusy1011 joined the room.09:37:05
huedo_nus joined the room.11:44:06
28 Jul 2021
Robin Timmis Hi there, I wanted to ask a question about the gamma diagnostic, is it just total energy in simulation units since E=\gamma mc^2 06:56:11
fredpz Hi, what do you mean by gamma diagnostic? 06:57:13
fredpz BTW, for your previous issue, I would like to test it. Do you have a full input file to share? 06:57:37
Robin Timmis I mean as an axis for the particle binning diagnostic 07:13:01
fredpz The axes of particle binning diagnostics are always individual particle quantities 07:15:39
Robin Timmis Sorry, I meant the total energy of the particle 07:19:06
Robin Timmis Here is the input file for the other issue I had 07:19:33
fredpz Ah well, gamma is indeed the total energy of the particle 07:19:55
Robin Timmis and ekin is the kinetic energy = (\gamma -1)mc^2 07:20:18
fredpz yes 07:20:24
fredpz the file is empty :/ 07:20:51
Robin Timmis Oh sorry! 07:21:04
fredpz
So
1. You really need to add .copy() otherwise you set all weights to 0!
2. You inverted the role of "beam" and "plasma". You actually can see a plasma. No issue there.
07:38:59
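The `.copy()` remark is the usual Python aliasing pitfall (it applies equally to NumPy arrays holding particle weights); the behaviour shows even with a plain list:

```python
w = [1.0, 1.0, 1.0]   # imagine these are particle weights
alias = w             # no copy: both names refer to the same list
alias[0] = 0.0        # ...so this zeroes w[0] as well
safe = w.copy()       # an independent copy
safe[1] = 0.0         # w[1] is untouched
```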
Robin Timmis Thanks :) 07:40:39
fredpz changed their display name from fredpz to mccoys.17:26:15
fredpz changed their display name from mccoys to fredpz.17:27:14
# Problem of the Week #80 - October 7th, 2013
Status
Not open for further replies.
#### Chris L T521
##### Well-known member
Staff member
Thanks again to those who participated in last week's POTW! Here's this week's problem!
-----
Problem: For $n\geq 0$, show that
$\int_0^1 (1-x^2)^n\,dx = \frac{2^{2n}(n!)^2}{(2n+1)!}.$
-----
Hint
:
Start by showing that if $I_n$ denotes the integral, then
$I_{k+1}=\frac{2k+2}{2k+3}I_k.$
#### Chris L T521
##### Well-known member
Staff member
This week's question was correctly answered by anemone and MarkFL. You can find both of their solutions below.
anemone's solution:
We're asked to prove that $$\displaystyle \int_0^1(1-x^2)^n dx=\frac{2^{2n}(n!)^2}{(2n+1)!}$$.
First, let's examine the LHS expression, the definite integral, $$\displaystyle \int_0^1(1-x^2)^n dx$$,
Using the following trigonometric substitution,
$$\displaystyle x=\sin \theta$$ $$\displaystyle \rightarrow dx=\cos \theta d\theta$$
The integral is now
$$\displaystyle \int_0^{\frac{\pi}{2}} \cos^{2n} \theta \cos \theta d\theta=\int_0^{\frac{\pi}{2}} \cos^{2n+1} \theta d\theta$$
Using the integration by parts with the following substitution:
$$\displaystyle u=\cos^{2n} \theta\;\;\rightarrow\;\;\frac{du}{d\theta}=2n (\cos^{2n-1} \theta)(-\sin \theta)$$
$$\displaystyle \frac{dv}{d\theta}=\cos \theta\;\;\rightarrow\;\;v=\sin \theta$$
The integral is then
$$\displaystyle \int_0^{\frac{\pi}{2}} \cos^{2n+1} \theta d\theta=\left[(\cos^{2n} \theta)(\sin \theta) \right]_0^{\frac{\pi}{2}}-\int_0^{\frac{\pi}{2}} 2n (\cos^{2n-1} \theta)(-\sin \theta)(\sin \theta)d \theta$$
$$\displaystyle \;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;=(0)+2n\int_0^{\frac{\pi}{2}} (\sin^2 \theta)(\cos^{2n-1} \theta)d \theta$$
$$\displaystyle \;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;=2n\int_0^{\frac{\pi}{2}} (1-\cos^2 \theta)(\cos^{2n-1} \theta)d \theta$$
$$\displaystyle \;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;=2n\int_0^{\frac{\pi}{2}} (\cos^{2n-1} \theta)d \theta-2n\int_0^{\frac{\pi}{2}} (\cos^{2n+1} \theta)d \theta$$
$$\displaystyle (2n+1)\int_0^{\frac{\pi}{2}} \cos^{2n+1} \theta d\theta =2n\int_0^{\frac{\pi}{2}} (\cos^{2n-1} \theta)d \theta$$
$$\displaystyle \int_0^{\frac{\pi}{2}} \cos^{2n+1} \theta d\theta =\frac{2n}{2n+1}\int_0^{\frac{\pi}{2}} (\cos^{2n-1} \theta)d \theta$$
If we let $$\displaystyle I_n=\int_0^1(1-x^2)^n dx=\int_0^{\frac{\pi}{2}} \cos^{2n+1} \theta d\theta$$, we now have
$$\displaystyle I_n=\frac{2n}{2n+1}I_{n-1}$$
$$\displaystyle I_n=\frac{2n}{2n+1}I_{n-1}$$
$$\displaystyle \;\;\;\;=\frac{2n}{2n+1}\cdot\frac{2(n-1)}{2(n-1)+1}I_{n-2}$$
$$\displaystyle \;\;\;\;=\frac{2n}{2n+1}\cdot\frac{2(n-1)}{2(n-1)+1}\cdot\frac{2(n-2)}{2(n-2)+1}I_{n-3}$$
$$\displaystyle \;\;\;\;=\frac{2n}{2n+1}\cdot\frac{2(n-1)}{2(n-1)+1}\cdot\frac{2(n-2)}{2(n-2)+1}\cdots\frac{6}{7}\cdot\frac{4}{5}\cdot\frac{2}{3}$$
$$\displaystyle \;\;\;\;=\left(\frac{2n}{2n+1}\cdot\frac{2(n-1)}{2(n-1)+1}\cdot\frac{2(n-2)}{2(n-2)+1}\cdots\frac{6}{7}\cdot\frac{4}{5}\cdot\frac{2}{3} \right)\left(\frac{2n}{2n}\cdot\frac{2(n-1)}{2(n-1)}\cdot\frac{2(n-2)}{2(n-2)}\cdots\frac{6}{6}\cdot\frac{4}{4}\cdot\frac{2}{2} \right)$$
$$\displaystyle \;\;\;\;=\frac{((2n)(2(n-1))(2(n-2))\cdots(6)(4)(2))^2}{(2n+1)(2n)(2n-1)\cdots(6)(5)(4)(3)(2)(1)}$$
$$\displaystyle \;\;\;\;=\frac{(2^n(n)(n-1)(n-2)\cdots(3)(2)(1))^2}{(2n+1)!}$$
$$\displaystyle \;\;\;\;=\frac{(2^n(n!))^2}{(2n+1)!}$$
$$\displaystyle \;\;\;\;=\frac{2^{2n}(n!)^2}{(2n+1)!}$$
and we're done!
MarkFL's solution:
Let:
$$\displaystyle I_n=\int_0^1\left(1-x^2 \right)^n\,dx$$
Apply integration by parts, where:
$$\displaystyle u=\left(1-x^2 \right)^n\,\therefore\,du=n\left(1-x^2 \right)^{n-1}(-2x)\,dx$$
$$\displaystyle dv=dx\,\therefore\,v=x$$
And we may state:
$$\displaystyle I_n=\left[x\left(1-x^2 \right)^n \right]_0^1+2n\int_0^1 x^2\left(1-x^2 \right)^{n-1}\,dx$$
$$\displaystyle I_n=0-2n\int_0^1 \left(1-x^2-1 \right)\left(1-x^2 \right)^{n-1}\,dx$$
$$\displaystyle I_n=-2n\left(\int_0^1 \left(1-x^2 \right)^{n}\,dx-\int_0^1 \left(1-x^2 \right)^{n-1}\,dx \right)$$
Now, using the definition of the definite integral we may write:
$$\displaystyle I_n=2n\left(I_{n-1}-I_n \right)$$
Solving for $I_n$, we find:
$$\displaystyle (2n+1)I_n=2nI_{n-1}$$
$$\displaystyle I_n=\frac{2n}{2n+1}I_{n-1}$$
Now, we should observe that:
$$\displaystyle I_0=\int_0^1\left(1-x^2 \right)^0\,dx=1$$
Iterating all the way down to $n=0$, we have:
$$\displaystyle I_n=\frac{(2n)(2(n-1))(2(n-2))\cdots6\cdot4\cdot2}{(2n+1)(2n-1)(2n-3)\cdots7\cdot5\cdot3}\cdot1$$
Multiplying by $$\displaystyle 1=\frac{(2n)(2(n-1))(2(n-2))\cdots6\cdot4\cdot2\cdot1}{(2n)(2(n-1))(2(n-2))\cdots6\cdot4\cdot2\cdot1}$$ we have:
$$\displaystyle I_n=\frac{\left((2n)(2(n-1))(2(n-2))\cdots6\cdot4\cdot2\cdot1 \right)^2}{(2n+1)(2n)(2n-1)(2n-2)(2n-3)(2n-4)\cdots7\cdot6\cdot5\cdot4\cdot3\cdot2\cdot1}$$
$$\displaystyle I_n=\frac{\left(2^n\cdot n! \right)^2}{(2n+1)!}$$
$$\displaystyle I_n=\frac{2^{2n}(n!)^2}{(2n+1)!}$$
Shown as desired.
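Both derivations arrive at the same closed form. A quick numerical check in Python, using exact rational arithmetic from `fractions`, compares the recurrence $I_n = \frac{2n}{2n+1}I_{n-1}$, $I_0 = 1$ against the closed form $I_n = \frac{2^{2n}(n!)^2}{(2n+1)!}$ (the function names `I_exact` and `I_recur` are my own):

```python
from math import factorial
from fractions import Fraction

def I_exact(n):
    # Closed form derived above: I_n = 2^(2n) (n!)^2 / (2n+1)!
    return Fraction(2**(2*n) * factorial(n)**2, factorial(2*n + 1))

def I_recur(n):
    # Recurrence I_n = 2n/(2n+1) * I_{n-1} with I_0 = 1
    val = Fraction(1)
    for k in range(1, n + 1):
        val *= Fraction(2*k, 2*k + 1)
    return val

for n in range(10):
    assert I_exact(n) == I_recur(n)
print(I_exact(3))  # 16/35
```

Exact rationals avoid any floating-point doubt: both routes give, e.g., $I_3 = \frac{2}{3}\cdot\frac{4}{5}\cdot\frac{6}{7} = \frac{16}{35}$.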
# Area of a Regular Polygon 1
Let $AB = a$ be the length of a side of a regular polygon of $n$ sides and $O$ be the center of the polygon.
As there are $n$ sides, $n$ similar triangles are formed.
Let $\Delta AOB$ be one of the $n$ triangles, and let $D$ be the foot of the perpendicular from $O$ to $AB$ (so $D$ is the midpoint of $AB$), then
$\angle AOB = \frac{{{{360}^ \circ }}}{n}$ and $\angle AOD = \frac{{{{180}^ \circ }}}{n}$
$\therefore$ Area of the regular polygon $= n \times$area of $\Delta AOB$
The area of the regular polygon $= n \times \frac{{OD \times AB}}{2}$
But $AB = a$, $\frac{{OD}}{{AD}} = \cot \frac{{{{180}^ \circ }}}{n}$
Or $OD = AD\cot \frac{{{{180}^ \circ }}}{n}$
Or $OD = \frac{a}{2}\cot \frac{{{{180}^ \circ }}}{n}$
$\therefore$ the area of the regular polygon $= n \times \frac{{\frac{a}{2}\cot \frac{{{{180}^ \circ }}}{n}}}{2} \times a = \frac{{n{a^2}}}{4}\cot \frac{{{{180}^ \circ }}}{n}$
Also, the perimeter of the polygon $= na$
$\begin{gathered} A = \frac{{n{a^2}}}{4}\cot \frac{{{{180}^ \circ }}}{n} \\ P = na \\ \end{gathered}$
Example
Find the cost of carpeting an octagonal floor with sides measuring $16m$ if the carpet costs Rs. $2$ per square meter.
Solution:
Here $n = 8$, $a = 16m$
$\therefore$ Area $= \frac{{n{a^2}}}{4}\cot \frac{{{{180}^ \circ }}}{n} = \frac{{8{{(16)}^2}}}{4}\cot \frac{{{{180}^ \circ }}}{8}$
$\therefore$ Area $= \frac{{8 \times 256}}{4}\cot {22.5^ \circ } = 512 \times 2.4142 \approx 1236$ square meters.
$\therefore$ the cost of carpeting $= 1236 \times 2 = 2472$ Rupees.
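The area formula and the worked example can be checked numerically. A short Python sketch (the function name `regular_polygon_area` is my own), using $\cot\theta = 1/\tan\theta$:

```python
from math import tan, radians

def regular_polygon_area(n, a):
    # A = (n * a^2 / 4) * cot(180°/n), with cot θ = 1/tan θ
    return n * a * a / (4 * tan(radians(180) / n))

area = regular_polygon_area(8, 16)   # octagon, side 16 m
cost = 2 * area                      # Rs. 2 per square metre
print(round(area), round(cost))      # 1236 2472
```

As a sanity check, a "regular polygon" with $n = 4$ and side $a$ is a square, and the formula gives $\frac{4a^2}{4}\cot 45^\circ = a^2$, as expected.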
Category:Examples of Elementary Functions
• Powers of $x$: $\map f x = x^y$, where $y \in \R$
• All functions that are compositions of the above, for example $\map f x = \ln \sin x$, $\map f x = e^{\cos x}$
# Domain and Range of Exponential and Logarithmic Functions

The domain of a function $y = f(x)$ is the set of values of the independent variable $x$ for which $y$ is defined; the range is the set of all resulting outputs.

Exponential functions. For $f(x) = b^x$ with $b > 0$ and $b \ne 1$, the domain is all real numbers and the range is $y > 0$. The x-axis ($y = 0$) is a horizontal asymptote.

Logarithmic functions. The logarithmic function $f(x) = \log_b x$ is the inverse of the exponential function, so its domain and range are interchanged: the domain is $x > 0$ (the argument of a logarithm must be positive) and the range is all real numbers. The y-axis ($x = 0$) is a vertical asymptote, and the x-intercept is $(1, 0)$.

Transformations. The graph of $y = \log_b(x - h) + k$ is the parent graph shifted $h$ units horizontally and $k$ units vertically. The line $x = h$ becomes the vertical asymptote, so the domain is $x > h$ while the range remains all real numbers. If $b > 1$, the function is increasing.

Examples.
1. $f(x) = \log_2(x - 4)$: the argument requires $x - 4 > 0$, so the domain is $x > 4$; the range is all real numbers.
2. $y = \log_3(x - 1) - 2$: the domain is $x > 1$ and the range is all real numbers; the graph is $y = \log_3 x$ shifted 1 unit right and 2 units down. When no base is shown, as in $y = \log x$, the base is understood to be 10.
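The domain restriction can also be checked numerically: since the argument of a logarithm must be positive, $f(x) = \log_2(x - 4)$ is defined only for $x > 4$. A minimal Python sketch (the function name `f` mirrors the worksheet example):

```python
import math

def f(x):
    # f(x) = log2(x - 4): the argument x - 4 must be positive, so the domain is x > 4
    return math.log2(x - 4)

print(f(5))    # 0.0 -- the x-intercept, since log2(1) = 0
print(f(12))   # 3.0 -- log2(8)
try:
    f(4)       # x = 4 lies on the vertical asymptote; the argument is 0
except ValueError:
    print("undefined for x = 4")
```

Values of $x$ at or below 4 raise a `ValueError`, matching the vertical asymptote at $x = 4$.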
# Statistics summation question:
1. Oct 26, 2008
### olechka722
This equation comes out of deriving the canonical partition function for some system. However, the question is more math based. I am having trouble understanding the simplification that was performed in the text:
$$\sum_{N=0}^{M} \frac{M!\,\exp\left((M-2N)a\right)}{N!\,(M-N)!}$$ supposedly becomes
$$\exp(Ma)\left(1+\exp(-2a)\right)^M.$$ I tried to look at the first few terms and see how I can simplify this, and no dice... Anyone have any ideas? a is just a constant.
Also, the next step is that the above becomes $(2\cosh(a))^M$, which is great, except, huh? I'm more of a scientist than a math person, so I apologize if I am missing something elementary.
Thanks!
2. Oct 26, 2008
Remember that
$$\sum_{n=0}^m \frac{m!}{n!(m-n)!} 1^{(m-n)} b^n = (1 + b)^m$$
$$\exp\left((M-2N)a\right) = \exp\left(Ma\right) \cdot \left(\exp(-2a)\right)^N$$
$$\sum_{N=0}^M \frac{M!}{N!(M-N)!} \exp\left((M-2N)a\right) = e^{Ma} \sum_{N=0}^M \frac{M!}{N!(M-N)!} \left(e^{-2a}\right)^N = e^{Ma} \left(1 + e^{-2a}\right)^M$$
For the second one - look at the definition of the hyperbolic function as exponentials, and rearrange terms.
3. Oct 26, 2008
### olechka722
Thanks so much!! That really clears things up.
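The two identities in this thread can be checked numerically. A short Python sketch (the function name `partition_sum` is my own) verifies that the original sum, the binomial form $e^{Ma}(1+e^{-2a})^M$, and the hyperbolic form $(2\cosh a)^M$ all agree:

```python
from math import exp, cosh, factorial, isclose

def partition_sum(M, a):
    # Left-hand side: sum over N of M! e^{(M-2N)a} / (N! (M-N)!)
    return sum(factorial(M) / (factorial(N) * factorial(M - N)) * exp((M - 2*N) * a)
               for N in range(M + 1))

M, a = 10, 0.7
lhs = partition_sum(M, a)
binom_form = exp(M * a) * (1 + exp(-2 * a))**M
cosh_form = (2 * cosh(a))**M
assert isclose(lhs, binom_form) and isclose(lhs, cosh_form)
print("all three forms agree")
```

The last equality is just $e^{Ma}(1+e^{-2a})^M = (e^a + e^{-a})^M = (2\cosh a)^M$, using $\cosh a = \tfrac{1}{2}(e^a + e^{-a})$.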
• ### Exclusive $\eta$ electroproduction at $W>2$ GeV with CLAS and transversity generalized parton distributions(1703.06982)
March 20, 2017 hep-ph, hep-ex, nucl-ex
The cross section of the exclusive $\eta$ electroproduction reaction $ep\to e^\prime p^\prime \eta$ was measured at Jefferson Lab with a 5.75-GeV electron beam and the CLAS detector. Differential cross sections $d^4\sigma/dtdQ^2dx_Bd\phi_\eta$ and structure functions $\sigma_U = \sigma_T+\epsilon\sigma_L, \sigma_{TT}$ and $\sigma_{LT}$, as functions of $t$ were obtained over a wide range of $Q^2$ and $x_B$. The $\eta$ structure functions are compared with those previously measured for $\pi^0$ at the same kinematics. At low $t$, both $\pi^0$ and $\eta$ are described reasonably well by generalized parton distributions (GPDs) in which chiral-odd transversity GPDs are dominant. The $\pi^0$ and $\eta$ data, when taken together, can facilitate the flavor decomposition of the transversity GPDs.
• ### Momentum sharing in imbalanced Fermi systems(1412.0138)
The atomic nucleus is composed of two different kinds of fermions, protons and neutrons. If the protons and neutrons did not interact, the Pauli exclusion principle would force the majority fermions (usually neutrons) to have a higher average momentum. Our high-energy electron scattering measurements using 12C, 27Al, 56Fe and 208Pb targets show that, even in heavy neutron-rich nuclei, short-range interactions between the fermions form correlated high-momentum neutron-proton pairs. Thus, in neutron-rich nuclei, protons have a greater probability than neutrons to have momentum greater than the Fermi momentum. This finding has implications ranging from nuclear few body systems to neutron stars and may also be observable experimentally in two-spin state, ultra-cold atomic gas systems.
• ### Measurement of the structure function of the nearly free neutron using spectator tagging in inelastic $^2$H(e, e'p)X scattering with CLAS(1402.2477)
Oct. 3, 2014 nucl-ex
Much less is known about neutron structure than that of the proton due to the absence of free neutron targets. Neutron information is usually extracted from data on nuclear targets such as deuterium, requiring corrections for nuclear binding and nucleon off-shell effects. These corrections are model dependent and have significant uncertainties, especially for large values of the Bjorken scaling variable x. The Barely Off-shell Nucleon Structure (BONuS) experiment at Jefferson Lab measured the inelastic electron deuteron scattering cross section, tagging spectator protons in coincidence with the scattered electrons. This method reduces nuclear binding uncertainties significantly and has allowed for the first time a (nearly) model independent extraction of the neutron structure function. A novel compact radial time projection chamber was built to detect protons with momentum between 70 and 150 MeV/c. For the extraction of the free neutron structure function $F_{2n}$, spectator protons at backward angle and with momenta below 100 MeV/c were selected, ensuring that the scattering took place on a nearly free neutron. The scattered electrons were detected with Jefferson Lab's CLAS spectrometer. The extracted neutron structure function $F_{2n}$ and its ratio to the deuteron structure function $F_{2d}$ are presented in both the resonance and deep inelastic regions. The dependence of the cross section on the spectator proton momentum and angle is investigated, and tests of the spectator mechanism for different kinematics are performed. Our data set can be used to study neutron resonance excitations, test quark hadron duality in the neutron, develop more precise parametrizations of structure functions, as well as investigate binding effects (including possible mechanisms for the nuclear EMC effect) and provide a first glimpse of the asymptotic behavior of d/u as x goes to 1.
• ### Measurement of the virtual-photon asymmetry A2 and the spin-structure function g2 of the proton(1112.5584)
June 21, 2013 hep-ex
A measurement of the virtual-photon asymmetry A_2(x,Q^2) and of the spin-structure function g_2(x,Q^2) of the proton are presented for the kinematic range 0.004 < x < 0.9 and 0.18 GeV^2 < Q^2 < 20 GeV^2. The data were collected by the HERMES experiment at the HERA storage ring at DESY while studying inclusive deep-inelastic scattering of 27.6 GeV longitudinally polarized leptons off a transversely polarized hydrogen gas target. The results are consistent with previous experimental data from CERN and SLAC. For the x-range covered, the measured integral of g_2(x) converges to the null result of the Burkhardt-Cottingham sum rule. The x^2 moment of the twist-3 contribution to g_2(x) is found to be compatible with zero.
• ### The HERMES Recoil Detector(1302.6092)
May 6, 2013 hep-ex, physics.ins-det
For the final running period of HERA, a recoil detector was installed at the HERMES experiment to improve measurements of hard exclusive processes in charged-lepton nucleon scattering. Here, deeply virtual Compton scattering is of particular interest as this process provides constraints on generalised parton distributions that give access to the total angular momenta of quarks within the nucleon. The HERMES recoil detector was designed to improve the selection of exclusive events by a direct measurement of the four-momentum of the recoiling particle. It consisted of three components: two layers of double-sided silicon strip sensors inside the HERA beam vacuum, a two-barrel scintillating fibre tracker, and a photon detector. All sub-detectors were located inside a solenoidal magnetic field with an integrated field strength of 1 T. The recoil detector was installed in late 2005. After the commissioning of all components was finished in September 2006, it operated stably until the end of data taking at HERA end of June 2007. The present paper gives a brief overview of the physics processes of interest and the general detector design. The recoil detector components, their calibration, the momentum reconstruction of charged particles, and the event selection are described in detail. The paper closes with a summary of the performance of the detection system.
• ### Multiplicities of charged pions and kaons from semi-inclusive deep-inelastic scattering by the proton and the deuteron(1212.5407)
April 24, 2013 hep-ex, nucl-ex
Multiplicities in semi-inclusive deep-inelastic scattering are presented for each charge state of \pi^\pm and K^\pm mesons. The data were collected by the HERMES experiment at the HERA storage ring using 27.6 GeV electron and positron beams incident on a hydrogen or deuterium gas target. The results are presented as a function of the kinematic quantities x_B, Q^2, z, and P_h\perp. They represent a unique data set for identified hadrons that will significantly enhance our understanding of the fragmentation of quarks into final-state hadrons in deep-inelastic scattering.
• ### Transverse Polarization of $\Sigma^{+}(1189)$ in Photoproduction on a Hydrogen Target in CLAS(1302.0322)
Feb. 2, 2013 nucl-ex
Experimental results on the $\Sigma^+(1189)$ hyperon transverse polarization in photoproduction on a hydrogen target using the CLAS detector at Jefferson laboratory are presented. The $\Sigma^+(1189)$ was reconstructed in the exclusive reaction $\gamma+p\rightarrow K^{0}_{S} + \Sigma^+(1189)$ via the $\Sigma^{+} \to p \pi^{0}$ decay mode. The $K^{0}_S$ was reconstructed in the invariant mass of two oppositely charged pions with the $\pi^0$ identified in the missing mass of the detected $p\pi^+\pi^-$ final state. Experimental data were collected in the photon energy range $E_{\gamma}$ = 1.0-3.5 GeV ($\sqrt{s}$ range 1.66-2.73 GeV). We observe a large negative polarization of up to 95%. As the mechanism of transverse polarization of hyperons produced in unpolarized photoproduction experiments is still not well understood, these results will help to distinguish between different theoretical models on hyperon production and provide valuable information for the searches of missing baryon resonances.
• ### Azimuthal distributions of charged hadrons, pions, and kaons produced in deep-inelastic scattering off unpolarized protons and deuterons(1204.4161)
Jan. 18, 2013 hep-ex, nucl-ex
The azimuthal cos{\phi} and cos2{\phi} modulations of the distribution of hadrons produced in unpolarized semi-inclusive deep-inelastic scattering of electrons and positrons off hydrogen and deuterium targets have been measured in the HERMES experiment. For the first time these modulations were determined in a four-dimensional kinematic space for positively and negatively charged pions and kaons separately, as well as for unidentified hadrons. These azimuthal dependences are sensitive to the transverse motion and polarization of the quarks within the nucleon via, e.g., the Cahn, Boer-Mulders and Collins effects.
• The exclusive electroproduction of $\pi^+$ above the resonance region was studied using the $\rm{CEBAF}$ Large Acceptance Spectrometer ($\rm{CLAS}$) at Jefferson Laboratory by scattering a 6 GeV continuous electron beam off a hydrogen target. The large acceptance and good resolution of $\rm{CLAS}$, together with the high luminosity, allowed us to measure the cross section for the $\gamma^* p \to n \pi^+$ process in 140 ($Q^2$, $x_B$, $t$) bins: $0.16<x_B<0.58$, 1.6 GeV$^2<$$Q^2$$<4.5$ GeV$^2$ and 0.1 GeV$^2<$$-t$$<5.3$ GeV$^2$. For most bins, the statistical accuracy is on the order of a few percent. Differential cross sections are compared to two theoretical models, based either on hadronic (Regge phenomenology) or on partonic (handbag diagram) degrees of freedom. Both can describe the gross features of the data reasonably well, but differ strongly in their ingredients. If the handbag approach can be validated in this kinematical region, our data contain the interesting potential to experimentally access transversity Generalized Parton Distributions.
• ### Near Threshold Neutral Pion Electroproduction at High Momentum Transfers and Generalized Form Factors(1211.6460)
Nov. 29, 2012 nucl-ex
We report the measurement of near threshold neutral pion electroproduction cross sections and the extraction of the associated structure functions on the proton in the kinematic range $Q^2$ from 2 to 4.5 GeV$^2$ and $W$ from 1.08 to 1.16 GeV. These measurements allow us to access the dominant pion-nucleon s-wave multipoles $E_{0+}$ and $S_{0+}$ in the near-threshold region. In the light-cone sum-rule framework (LCSR), these multipoles are related to the generalized form factors $G_1^{\pi^0 p}(Q^2)$ and $G_2^{\pi^0 p}(Q^2)$. The data are compared to these generalized form factors and the results for $G_1^{\pi^0 p}(Q^2)$ are found to be in good agreement with the LCSR predictions, but the level of agreement with $G_2^{\pi^0 p}(Q^2)$ is poor.
• ### Beam-helicity asymmetry arising from deeply virtual Compton scattering measured with kinematically complete event reconstruction(1206.5683)
Nov. 28, 2012 hep-ex
The beam-helicity asymmetry in exclusive electroproduction of real photons by the longitudinally polarized HERA positron beam scattering off an unpolarized hydrogen target is measured at HERMES. The asymmetry arises from deeply virtual Compton scattering and its interference with the Bethe--Heitler process. Azimuthal amplitudes of the beam-helicity asymmetry are extracted from a data sample consisting of $ep\rightarrow ep\gamma$ events with detection of all particles in the final state including the recoiling proton. The installation of a recoil detector, while reducing the acceptance of the experiment, allows the elimination of background from $ep\rightarrow eN\pi\gamma$ events, which was estimated to contribute an average of about 12% to the signal in previous HERMES publications. The removal of this background from the present data sample is shown to increase the magnitude of the leading asymmetry amplitude by 0.054 +/- 0.016 to -0.328 +/- 0.027 (stat.) +/- 0.045 (syst.).
• ### Measurement of Exclusive $\pi^0$ Electroproduction Structure Functions and their Relationship to Transversity GPDs(1206.6355)
Sept. 24, 2012 hep-ph, hep-ex, nucl-ex
Exclusive $\pi^0$ electroproduction at a beam energy of 5.75 GeV has been measured with the Jefferson Lab CLAS spectrometer. Differential cross sections were measured at more than 1800 kinematic values in $Q^2$, $x_B$, $t$, and $\phi_\pi$, in the $Q^2$ range from 1.0 to 4.6 GeV$^2$,\ $-t$ up to 2 GeV$^2$, and $x_B$ from 0.1 to 0.58. Structure functions $\sigma_T +\epsilon \sigma_L, \sigma_{TT}$ and $\sigma_{LT}$ were extracted as functions of $t$ for each of 17 combinations of $Q^2$ and $x_B$. The data were compared directly with two handbag-based calculations including both longitudinal and transversity GPDs. Inclusion of only longitudinal GPDs very strongly underestimates $\sigma_T +\epsilon \sigma_L$ and fails to account for $\sigma_{TT}$ and $\sigma_{LT}$, while inclusion of transversity GPDs brings the calculations into substantially better agreement with the data. There is very strong sensitivity to the relative contributions of nucleon helicity flip and helicity non-flip processes. The results confirm that exclusive $\pi^0$ electroproduction offers direct experimental access to the transversity GPDs.
• ### Beam-helicity and beam-charge asymmetries associated with deeply virtual Compton scattering on the unpolarised proton(1203.6287)
June 29, 2012 hep-ex
Beam-helicity and beam-charge asymmetries in the hard exclusive leptoproduction of real photons from an unpolarised hydrogen target by a 27.6 GeV lepton beam are extracted from the HERMES data set of 2006-2007 using a missing-mass event selection technique. The asymmetry amplitudes extracted from this data set are more precise than those extracted from the earlier data set of 1996-2005 previously analysed in the same manner by HERMES. The results from the two data sets are compatible with each other. Results from these combined data sets are extracted and constitute the most precise asymmetry amplitude measurements made in the HERMES kinematic region using a missing-mass event selection technique.
• ### Measurement of the neutron F2 structure function via spectator tagging with CLAS(1110.2770)
May 14, 2012 hep-ex, nucl-ex
We report on the first measurement of the F2 structure function of the neutron from semi-inclusive scattering of electrons from deuterium, with low-momentum protons detected in the backward hemisphere. Restricting the momentum of the spectator protons to < 100 MeV and their angles to < 100 degrees relative to the momentum transfer allows an interpretation of the process in terms of scattering from nearly on-shell neutrons. The F2n data collected cover the nucleon resonance and deep-inelastic regions over a wide range of Bjorken x for 0.65 < Q2 < 4.52 GeV2, with uncertainties from nuclear corrections estimated to be less than a few percent. These measurements provide the first determination of the neutron to proton structure function ratio F2n/F2p at 0.2 < x < 0.8 with little uncertainty due to nuclear effects. |
# The relations between the Perelman's entropy functional and notions of entropy from statistical mechanics
I am looking for the relations and analogies between the Perelman's entropy functional,$\mathcal{W}(g,f,\tau)=\int_M [\tau(|\nabla f|^2+R)+f-n] (4\pi\tau)^{-\frac{n}{2}}e^{-f}dV$, and notions of entropy from statistical mechanics. Would you please explain it in details?
The more standard notions of entropy, notably Boltzmann and Shannon are roughly of the form $\int u\log u\,d\mu$ and if in Perelman's definiton you set $u = e^{-f}$ you get one term like this. The gradient term looks to me more like Fisher information, which can be viewed as the derivative of entropy with respect to time under Brownian motion. I suppose that the scalar curvature arises because everything is on a curved instead of flat space. The constant term arises from normalization. But this is all just my speculation. – Deane Yang May 18 at 14:43
it seems to be little more than a formal correspondence, judging from page 11 of arxiv.org/abs/math.DG/0211159 – Carlo Beenakker May 18 at 15:25
Although it might seem like nothing more than a formal correspondence, the power of using entropy-type functionals for certain types of elliptic and parabolic PDE's indicates strongly to me that there is a deeper connection to the physical and information theoretic definitions of entropy than what we currently understand. – Deane Yang May 18 at 15:39
For metrics on $S^{2}$ with positive curvature, Hamilton introduced the entropy $N\left( g\right) =-\int\ln(R\operatorname{Area})Rd\mu.$ If the initial metric has $R>0,$ he proved that this is nondecreasing under the Ricci flow on surfaces; note that $Rd\mu$ satisfies $(\frac{\partial}{\partial t}-\Delta )(Rd\mu)=0.$ Let $T$ be the singular time; then $$\frac{d}{dt}N\left( g\left( t\right) \right) =2\int\left\vert \operatorname{Ric}+\nabla^{2}f-\frac{1}{2\tau}g\right\vert ^{2}d\mu+4\int\frac{\left\vert \operatorname{div}(\operatorname{Ric}+\nabla^{2}f-\frac{1}{2\tau}g)\right\vert ^{2}}{R}d\mu,$$ where $\tau=T-t$ and $\Delta f=r-R.$ ($f$ satisfies $\frac{\partial f}{\partial t}=\Delta f+rf$; since $n=2$, $\operatorname{Ric}=\frac{1}{2}Rg$)
Perelman's entropy has the main term: $\int fe^{-f}d\mu,$ which is the classical entropy with $u=e^{-f}$ as Deane Yang wrote. (Besides Section 5 of Perelman, further discussion of entropy appeared later in some of Lei Ni's papers as well as elsewhere.) Even though this term is lower order (in terms of derivatives), geometrically it is the most significant as can be seen by taking the test function to be the characteristic function of a ball (multiplied by a constant for it to satisfy the constraint); technically, one chooses a cutoff function. Thus Perelman proved finite time no local collapsing below any given scale only assuming a local upper bound for $R,$ since the local lower Ricci curvature bound (control of volume growth is needed to handle the cutoff function) can be removed by passing to the appropriate smaller scale.
Heuristically (ignoring the cutoff issue), since the constraint is $\int(4\pi\tau)^{-n/2}e^{-f}d\mu=1,$ if we take $\tau=r^{2}$ and $e^{-f}=c\chi_{B_{r}},$ then $c\approx\frac{r^{n}}{\operatorname{Vol}B_r}.$ So, if the time and scale are bounded from above, by Perelman's monotonicity, we have $$-C\leq\mathcal{W}(g,f,r^{2})\lessapprox r^{2}\max_{B_{r}}R+\ln\frac{\operatorname{Vol}B_r}{r^{n}},$$ yielding the volume ratio lower bound.
There is a nice interpretation of Perelman's monotonicity formulas in terms of optimal transportation, see e.g. these lecture notes by Peter Topping
http://homepages.warwick.ac.uk/~maseq/grenoble_20100324.pdf
It seems helpful to look at the elliptic case first. As discovered by Lott-Villani and Sturm nonnegative Ricci curvature can be characterized by the property that the Boltzmann-entropy is convex along optimal transportation. This is very intuitive, imagine e.g. a pile of sand being transported from the south to the north-pole on the sphere.
The idea for the Ricci flow is similar (being a (super)Ricci flow can be viewed as parabolic version of having nonnegative Ricci curvature), but the details are a bit more complicated. The $W$-functional can be written as derivative of a suitable Boltzmann-entropy (see Section 5 in Perelman's first paper) and the monotonicity of $W$ can be interpreted as convexity of this entropy, see the above lecture notes for details.
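To make the substitution mentioned in the comments explicit (a formal check only, using Perelman's normalisation $\int_M u\,dV=1$): setting $u=(4\pi\tau)^{-n/2}e^{-f}$, so that $\ln u = -f - \tfrac{n}{2}\ln(4\pi\tau)$, gives

```latex
\int_M u \ln u \, dV
  = \int_M \left(-f - \tfrac{n}{2}\ln(4\pi\tau)\right) u \, dV
  = -\int_M f\, u \, dV \;-\; \tfrac{n}{2}\ln(4\pi\tau).
```

So, up to sign and an additive constant fixed by the normalisation, the $\int f e^{-f}$ term of $\mathcal{W}$ is exactly the Boltzmann-Shannon entropy $\int u\ln u$ of the density $u$.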
Slide 39 of 42
munchkin
Why do we need to reflect about x?
nsharp
We reflect over the X axis because in the normalized coordinate frame the y-axis has increasing coordinates upward, whereas in the image coordinate frame the y-axis has increasing coordinates downward.
As to why the image space has the y-axis pointing downward -- no reason. Just convention! |
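A minimal sketch of this convention change (a hypothetical helper, assuming normalized coordinates in $[-1,1]$ with y increasing upward and pixel coordinates with y increasing downward):

```python
def image_to_normalized(px, py, width, height):
    """Map pixel coordinates (y down) to normalized [-1, 1] coordinates (y up).

    The x-axis maps directly; the y-axis is reflected about the x-axis
    because image rows grow downward.
    """
    x = 2.0 * px / width - 1.0
    y = -(2.0 * py / height - 1.0)   # the reflection about x
    return x, y

# Top-left pixel of a 100x100 image lands at the upper-left corner (-1, 1):
print(image_to_normalized(0, 0, 100, 100))
```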
Solving Equations Using the Pythagorean Theorem
Use a^2 + b^2 = c^2 to find missing side lengths of right triangles.
Gary wants to build a skateboard ramp, but it can’t be too steep. If he has a platform 3 m high and a board 5 m long, how far out should the board extend from the platform?
In this concept, you will learn how to solve equations using the Pythagorean Theorem.
Solving Equations Using the Pythagorean Theorem
The Pythagorean Theorem states that the sum of the squares of the two legs of a right triangle is equal to the square of the hypotenuse. In a math sentence, where \begin{align*}a\end{align*} and \begin{align*}b\end{align*} are the legs and \begin{align*}c\end{align*} is the hypotenuse, it looks like this:
\begin{align*}c^2=a^2+b^2\end{align*}
Mathematically, you can use this equation to solve for any of the variables, not just the hypotenuse.
For example, the right triangle below has one leg equal to 3 and a hypotenuse of 5.
Solve for the other leg.
First, you can label either leg \begin{align*}a\end{align*} or \begin{align*}b\end{align*}. Remember that the legs are those sides adjacent to the right angle.
Next, fill into the Pythagorean Theorem the values that you know.
\begin{align*}\begin{array}{rcl} c^2 &=& a^2+b^2 \\ 5^2 &=& 3^2+b^2 \end{array}\end{align*}
Then, perform the calculations you are able to.
\begin{align*}25=9+b^2\end{align*}
Remember that your goal is to isolate the unknown variable on one side of the equation. In this case it is \begin{align*}b\end{align*}, which is squared and has 9 added to it. Perform the necessary operations to isolate \begin{align*}b\end{align*}.
\begin{align*}\begin{array}{rcl} 25-9 &=& 9+b^2-9 \\ 16 &=& b^2 \\ 4 &=& b \end{array}\end{align*}
The answer is 4.
Examples
Example 1
Earlier, you were given a problem about Gary and his skateboard ramp.
The platform height, one leg of the triangle, was 3 m, and the board, the hypotenuse, was 5 m. How far out should the board extend from the platform?
First, substitute.
\begin{align*}5^2=3^2+b^2\end{align*}
Next, perform the calculations.
\begin{align*}\begin{array}{rcl} 25 &=& 9+b^2 \\ 25-9 &=& 9+b^2-9 \end{array}\end{align*}
Then, determine the square roots.
\begin{align*}\begin{array}{rcl} 16 &=& b^2 \\ 4 &=& b \end{array}\end{align*}
The answer is 4 m. Gary’s board should extend 4 m from the base of the platform.
Example 2
Solve for \begin{align*}b\end{align*} to the nearest tenth.
First, take the given lengths and substitute them into the formula. \begin{align*}\begin{array}{rcl} & 4^2 + b^2 = 12^2 \\ & 16 + b^2 = 144 \end{array}\end{align*} Next, subtract 16 from both sides of the equation.
\begin{align*}\begin{array}{rcl} 16 - 16 + b^2 &=& 144 -16 \\ b^2 &=& 128 \end{array}\end{align*}
Then take the square root of both sides of the equation.
\begin{align*}b=11.3137085 \ldots\end{align*}
Round to the tenths place
\begin{align*}b \approx 11.3\end{align*}
Example 3
A right triangle includes the dimensions of \begin{align*}a\end{align*}, \begin{align*}b = 6\end{align*} and \begin{align*}c = 13\end{align*}. Solve for \begin{align*}a\end{align*}.
First, substitute.
\begin{align*}\begin{array}{rcl} c^2 &=& a^2 + b^2 \\ 13^2 &=& a^2 + 6^2 \end{array}\end{align*}
Next, perform the calculations you are able to.
\begin{align*}\begin{array}{rcl} 169 &=& a^2 + 36 \\ 169- 36 &=& a^2 + 36 - 36 \end{array}\end{align*}
Then, determine the square roots.
\begin{align*}\begin{array}{rcl} 133 &=& a^2 \\ 11.532582594 \ldots &=& a \\ 11.5 & \approx & a \end{array}\end{align*}
The answer is \begin{align*}a = 11.5\end{align*}
Example 4
A right triangle has \begin{align*}a = 8\end{align*} and \begin{align*}c = 12\end{align*}. Solve for \begin{align*}b\end{align*}.
First, substitute.
\begin{align*}\begin{array}{rcl} c^2 &=& a^2 + b^2 \\ 12^2 &=& 8^2 + b^2 \end{array}\end{align*} Next, perform the calculations.
\begin{align*}\begin{array}{rcl} 144 &=& 64 + b^2 \\ 144 - 64 &=& 64 + b^2 - 64 \end{array}\end{align*}
Then, determine the square roots.
\begin{align*}\begin{array}{rcl} 80 &=& b^2 \\ 8.9& \approx & b \end{array}\end{align*}
Example 5
A right triangle has \begin{align*}a = 6\end{align*} and \begin{align*}c = 10\end{align*}. Solve for \begin{align*}b\end{align*}.
First, substitute.
\begin{align*}\begin{array}{rcl} c^2 &=& a^2+b^2 \\ 10^2 &=& 6^2+b^2 \end{array}\end{align*}
Next, perform the calculations.
\begin{align*}\begin{array}{rcl} 100 &=& 36+b^2 \\ 100-36 &=& 36+b^2-36 \end{array}\end{align*}
Then, determine the square roots.
\begin{align*}\begin{array}{rcl} 64 &=& b^2 \\ 8 &=& b \end{array}\end{align*}
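The procedure used in all of these examples can be sketched in a few lines of code (a hypothetical helper, not part of the lesson):

```python
import math

def missing_side(a=None, b=None, c=None):
    """Find the missing side of a right triangle using a^2 + b^2 = c^2.

    Pass exactly two of a, b, c (c is the hypotenuse); the third is returned.
    """
    if c is None:
        return math.sqrt(a * a + b * b)      # solving for the hypotenuse
    leg = a if b is None else b              # the known leg
    if leg >= c:
        raise ValueError("a leg must be shorter than the hypotenuse")
    return math.sqrt(c * c - leg * leg)      # solving for a leg

print(missing_side(a=3, c=5))                # 4.0, as in the first example
print(round(missing_side(a=4, c=12), 1))     # 11.3, as in Example 2
```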
Review
Use the Pythagorean Theorem to find the length of each missing leg. You may round to the nearest tenth when necessary.
1. \begin{align*}a = 6, b =? , c = 12\end{align*}
2. \begin{align*}a=9,b=?,c=15\end{align*}
3. \begin{align*}a=4,b=?,c=5\end{align*}
4. \begin{align*}a=9,b=?,c=18\end{align*}
5. \begin{align*}a=15,b=?,c=25\end{align*}
6. \begin{align*}a=?,b=10,c=12\end{align*}
7. \begin{align*}a=?,b=11,c=14\end{align*}
8. \begin{align*}a=?,b=13,c=15\end{align*}
Write an equation using the Pythagorean Theorem and solve each problem.
Joanna laid a plank of wood down to make a ramp so that she could roll a wheelbarrow over a low wall in her garden. The wall is 1.5 meters tall, and the plank of wood touches the ground 2 meters from the wall. How long is the wooden plank?
1. Write the equation.
2. Solve the problem.
Chris rode his bike 4 miles west and then 3 miles south. What is the shortest distance he can ride back to the point where he started?
1. Write the equation.
2. Solve the problem.
Naomi is cutting triangular patches to make a quilt. Each has a diagonal side of 14.5 inches and a short side of 5.5 inches. What is the length of the third side of each triangular patch?
1. Write the equation.
2. Solve the problem.
Vocabulary
Hypotenuse
The hypotenuse of a right triangle is the longest side of the right triangle. It is across from the right angle.
Legs of a Right Triangle
The legs of a right triangle are the two shorter sides of the right triangle. Legs are adjacent to the right angle.
Pythagorean Theorem
The Pythagorean Theorem is a mathematical relationship between the sides of a right triangle, given by $a^2 + b^2 = c^2$, where $a$ and $b$ are legs of the triangle and $c$ is the hypotenuse of the triangle. |
# Math Help - Help with finding probability of normal function...
1. ## Help with finding probability of normal function...
Let $Y_{1},Y_{2},...,Y_{n}$ be independent, normal random variables, each with mean $\mu$ and variance $\sigma^{2}$.
a) Find the density function of $U=Y^{bar}=\frac{1}{n}\sum_{i=1}^{n}Y_{i}$.
Solution to (a): So, U is normally distributed with mean $\mu$ and variance $\frac{\sigma^{2}}{n}$.
b) If $\sigma^{2}=16$ and $n=25$, what is the probability that the sample mean, $Y^{bar}$, takes on a value that is within one unit of the population mean, $\mu$? That is, find $P(|Y^{bar}-\mu|\leq 1)$.
So, the way I have been trying to solve it is by finding the probability $P(\mu-1\leq Y^{bar}\leq\mu+1)$, but I have had no luck. I get the feeling it is right in front of me. Should I use Tchebysheff's theorem?
Thanks for the help
oh, and $Y^{bar}$ is supposed to represent y with a bar (_) over it...not sure how to do that.
2. No, the point is that chebyshev's is a very weak inequality in this setting.
$P(|\bar Y-\mu|< 1) = P(-1<\bar Y-\mu< 1)$
$= P(-1/(4/5)<{\bar Y-\mu\over \sigma/\sqrt{n}}< 1/(4/5))$
$= P(-1.25 < Z < 1.25) = 2\Phi(1.25)-1 \approx 0.7887$, where $Z$ is a standard normal random variable and $\Phi$ is its CDF.
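The number can be checked with a quick stdlib-only computation, using $\Phi(z)=\tfrac12\bigl(1+\operatorname{erf}(z/\sqrt2)\bigr)$:

```python
import math

def normal_cdf(z):
    """Standard normal CDF via the error function (stdlib only)."""
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

sigma, n = 4.0, 25           # sigma^2 = 16
se = sigma / math.sqrt(n)    # standard error of the sample mean: 4/5
z = 1.0 / se                 # 1.25
p = normal_cdf(z) - normal_cdf(-z)
print(round(p, 4))           # 0.7887
```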
### Civil Engineering - Concrete Structure
#### Question - 1
As the cube size increases, the strength of concrete
• A decreases
• B remains constant
• C increases
• D insufficient data
#### Question - 2
Tensile strength of concrete is measured by
• A direct tension test in Universal Testing Machine (UTM)
• B applying the compressive load along the diameter of cylinder
• C applying the third point load on a prism
• D applying the tensile load along the diameter of cylinder
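For reference on option B (the split-cylinder, or Brazilian, test): the splitting tensile strength follows from the failure load and the cylinder dimensions via $\sigma_t = 2P/(\pi d L)$. A small illustrative calculation — the 200 kN failure load below is a made-up number:

```python
import math

def splitting_tensile_strength(P, d, L):
    """Splitting tensile strength sigma_t = 2P / (pi * d * L).

    P: failure load (N), d: cylinder diameter (mm), L: cylinder length (mm).
    Returns the strength in MPa (N/mm^2).
    """
    return 2.0 * P / (math.pi * d * L)

# Standard 150 mm x 300 mm cylinder failing at an assumed 200 kN:
sigma_t = splitting_tensile_strength(200e3, 150, 300)
print(round(sigma_t, 2))   # about 2.83 MPa
```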
#### Question - 3
A concrete cube of 15 cm and a cylinder of 150 mm diameter and 300 mm height are tested for compressive strength, the strength of cube compared to cylinder will be
• A higher
• B lower
• C equal
• D difficult to assess
#### Question - 4
The ratio of 7 days and 28 days cube strength is
• A 0.5
• B 0.6
• C 0.75
• D 0.85
#### Question - 5
Tensile strength of concrete is approximately ...................... of the compressive strength of concrete.
• A 3 to 7%
• B 7 to 15%
• C 15 to 22%
• D 22 to 25%
#### Question - 6
The failure strain of concrete under direct compression and under flexure are respectively
• A (0.0035, 0.0020)
• B (0.002, 0.0003)
• C $(\frac{f_{ck}}{E_c},\frac{0.7\sqrt{f_{ck}}}{E_c})$
• D (0.002, 0.0035)
#### Question - 7
In M10 nominal mix concrete, if the quantity of water used per 50 kg of cement is 40 kg, then that used for M15 concrete for the same workability will be
• A <40kg
• B =40kg
• C >40kg
• D cannot assess
#### Question - 8
The mean strength of the group of four non-overlapping consecutive cubes, for M30 grade concrete should not be less than
• A 28.3 MPa
• B 33.aMPa
• C 30 MPa
• D 29.5 MPa
#### Question - 9
The yield strength of a twisted bar as compared to an ordinary bar (mild steel) is nearly
• A 50% more
• B 25% more
• C 50% less
• D 25% less
#### Question - 10
Compared to mild steel plain bars, high strength deformed bars are
• A less ductile but more strong
• B more ductile but less strong
• C more ductile and strong
• D equally ductile and more strong |
576 views
Let $G=(V, E)$ be an undirected simple graph, and $s$ be a designated vertex in $G.$ For each $v\in V,$ let $d(v)$ be the length of a shortest path between $s$ and $v.$ For an edge $(u,v)$ in $G,$ what can not be the value of $d(u)-d(v)?$
1. $2$
2. $-1$
3. $0$
4. $1$
### 1 comment
d[u] => shortest path from s to u
d[v] => shortest path from s to v
(1) d[u] $\leqslant$ d[v] + 1 (if d[u] > d[v] + 1, then d[u] can't be the shortest path from s to u)
In the same way,
(2) d[v] $\leqslant$ d[u] + 1 (if d[v] > d[u] + 1, then d[v] can't be the shortest path from s to v)
Using the 1st inequality: d[u] - d[v] $\leqslant$ 1
Using the 2nd inequality: d[u] - d[v] $\geqslant$ -1
So from the above two inequalities: -1 $\leqslant$ d[u] - d[v] $\leqslant$ 1.
So d[u] - d[v] = 2 is not possible.
For example, if $d(u)=2$ and $d(v) = 1$, then $d(u) - d(v) = 2-1 =1$, so option $D$ is eliminated.
If $d(u) = d(v) = 1$, then $d(u) - d(v) = 0$, so option $C$ is eliminated.
If $d(u)=1$ and $d(v) = 2$, then $d(u) - d(v) = 1-2 =-1$, so option $B$ is eliminated.
In each of these cases, the shortest path from $s$ to one of $u$ or $v$ is just the shortest path to the other endpoint (whichever is smaller) plus at most $1$, since $u$ and $v$ are adjacent; so moving across the edge can increase the shortest-path distance by at most $1$ edge.
Hence the difference between $d(u) - d(v)$ can never be greater than $1$.
So Option $A$ is the correct answer.
by
The maximum difference between d(u) and d(v) is 1, because if d(u) is not equal to d(v), then d(u) = d(v) + 1 or d(v) - 1 (for the edge u-v). As this is an undirected graph, we do not need to think about direction. Hence option A is the correct answer.
by
1 vote |
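The bound can also be checked mechanically with a BFS; the graph below is a made-up example (not the one from the answer's figures):

```python
from collections import deque

def bfs_distances(adj, s):
    """Shortest-path lengths from s in an unweighted, undirected graph."""
    d = {s: 0}
    q = deque([s])
    while q:
        u = q.popleft()
        for v in adj[u]:
            if v not in d:
                d[v] = d[u] + 1
                q.append(v)
    return d

# A 5-cycle 0-1-2-3-4-0 with an extra chord 2-4:
adj = {0: [1, 4], 1: [0, 2], 2: [1, 3, 4], 3: [2, 4], 4: [3, 0, 2]}
d = bfs_distances(adj, 0)
# Across every edge, |d(u) - d(v)| is 0 or 1, never 2:
diffs = {abs(d[u] - d[v]) for u in adj for v in adj[u]}
print(diffs)   # a subset of {0, 1}
```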
# Fractions and roots
Fractions and roots
I have this problem:
$\frac{\sqrt{18}+\sqrt{98}+\sqrt{50}+4}{2\sqrt{2}}$
I'm able to get to this part by myself:
$\frac{15\sqrt{2}+2}{2\sqrt{2}}$
But that's when I get stuck. The book says that the next step is:
$\frac{15\sqrt{2}}{2\sqrt{2}}+\frac{2}{2\sqrt{2}}$
But I don't understand why you can take the 2 out of the original fraction, make it the numerator of its own fraction and having root of 2 as the denominator of said fraction.
alastrimsmnr
$\frac{\sqrt{18}+\sqrt{98}+\sqrt{50}+4}{2\sqrt{2}}=\frac{3\sqrt{2}+7\sqrt{2}+5\sqrt{2}+4}{2\sqrt{2}}=\frac{15\sqrt{2}+4}{2\sqrt{2}}$
Now the standard procedure is to remove the radical in the denominator:
$\frac{15\sqrt{2}+4}{2\sqrt{2}}=\frac{15\sqrt{2}+4}{2\sqrt{2}}\frac{\sqrt{2}}{\sqrt{2}}=\frac{15\sqrt{2}·\sqrt{2}+4\sqrt{2}}{2\sqrt{2}·\sqrt{2}}=\frac{30+4\sqrt{2}}{4}=\frac{15+2\sqrt{2}}{2}$
One can do it differently: set $a=\sqrt{2}$, so you can write
$\frac{15\sqrt{2}+4}{2\sqrt{2}}=\frac{15a+{a}^{4}}{{a}^{3}}=\frac{15+{a}^{3}}{{a}^{2}}=\frac{15+2\sqrt{2}}{2}$
The final result can also be written by using $\frac{a+b}{c}=\frac{a}{c}+\frac{b}{c}$ so
$\frac{15+2\sqrt{2}}{2}=\frac{15}{2}+\sqrt{2}$
Whether you want to do this last transformation depends on what you have to do with this number.
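A quick numerical sanity check of this simplification (stdlib only):

```python
import math

# Original expression (with the +4 from the problem statement):
lhs = (math.sqrt(18) + math.sqrt(98) + math.sqrt(50) + 4) / (2 * math.sqrt(2))
# Simplified form derived above:
rhs = (15 + 2 * math.sqrt(2)) / 2
print(math.isclose(lhs, rhs))   # True
```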
ubafumene42h
To me the next obvious step is to remove the radical in the denominator so
$\frac{15\sqrt{2}+2}{2\sqrt{2}}=\frac{15+\sqrt{2}}{2}.$
Personally I would stop there, but you could separate this into $\frac{15}{2}+\frac{\sqrt{2}}{2}$ or into $\frac{15}{2}+\frac{1}{\sqrt{2}}$ if you really wanted to. |
# Solution Of Poisson Equation By Finite Difference Method
• Relaxation methods:-Jacobi and Gauss-Seidel method. Finite Difference Method for Hyperbolic Problems - Free download as Powerpoint Presentation (. For the errors ofcompact finite difference approximation to the second derivative andPoisson potential are nonlocal, thus besides the standard energy method and mathematical induction method, the key technique in analysisis to estimate the nonlocal approximation errors in discrete l ∞ and H 1 norm by discrete maximum principle of elliptic. We call this approximation a finite difference approximation (FDA). FDMs convert a linear ODE /PDE into a system of linear equations, which can then be solved by matrix algebra techniques. Numerically Solving a Poisson Equation with Neumann Boundary Conditions numerical solution to be possible. : The differential properties of the solutions of Laplace's equation, and the errors in the method of nets with boundary values in C 2 and C 1,1. Book Cover. numerical techniques for the solution of these equations. Use Finite Difference Method To Solve The Poisson'e Equation For A Silicon PN Junction With Question: Use Finite Difference Method To Solve The Poisson'e Equation For A Silicon PN Junction With A Doping Profile Of At An Input Variable Bias Of Va. method and the finite difference method (FDM). BVPs for Laplace’s and Poisson’s equations. Lecture 04 Part 2: Finite Difference for 2D Poisson's Equation, 2016 Numerical Methods for PDE. The solution is plotted versus at. Implicit-Time Burgers' Equation on a Moving Grid In the last post on solving Burgers' equation on a moving grid we ended up with the semi-discrete equation for the rates: where the spatial derivatives (for the solution and the grid) are approximated by simple central differences. A natural next step is to consider extensions of the methods for various variants of the one-dimensional wave equation to two-dimensional (2D) and three-dimensional (3D) versions of the wave equation. 
Matlab Database > Partial Differential Equations > Finite Difference Method: approximating the solution of a system of linear equations. Qiqi Wang 6,660 views. The derivation of the membrane equation depends upon the as-sumption that the membrane resists stretching (it is under tension), but does not resist bending. 6 Matrix Notation. The setup of regions, boundary conditions and equations is followed by the solution of the PDE with NDSolve. 1 Introduction to Finite Difference Methods 115 4. Selected Codes and new results; Exercises. Feb 6 Nagel. Displacement nite element methods for elasticity 154 4. In the case of one-dimensional equations this steady state equation is a second order ordinary differential equation. Solution of Laplace Equation using Finite Element Method Parag V. Finite Di erence Methods for Di erential Equations Randall J. where u is the velocity and vis the vorticity. In this paper, using the same number of grid points, we have discussed a new stable compact nine point cubic spline finite difference method of 𝑂 (𝑘 2 + ℎ 4) accuracy for the solution of Poisson’s equation in polar cylindrical coordinates. with a solution which makes it possible to construct a certain approximation to the solution of the original problem as. Philadelphia, 2006, ISBN: -89871-609-8. The proposed method has the. A high-order compact formulation for the 3D Poisson equation. Our method is a finite difference analogue of Anderson’s Method of Local Corrections. 2 Scattering Cross Sections 91 2. In this work, the three-dimensional Poisson's equation in cylindrical coordinates system with the Dirichlet's boundary conditions in a portion of a cylinder for is solved directly, by extending the method of Hockney. Iterative Methods: Conjugate Gradient and Multigrid Methods3 2. Naji Qatanani Abstract Elliptic partial differential equations appear frequently in various fields of science and engineering. 
LECTURE SLIDES LECTURE NOTES; Numerical Methods for Partial Differential Equations ()(PDF - 1. FTCS method for the heat equation Initial conditions Plot FTCS 7. A Direct Method for the Solution of Poisson’s Equation with Neumann Boundary Conditions on a Staggered Grid of Arbitrary Size U. Poisson equation, numerical methods. LeVeque SIAM, Philadelphia, 2007 http://www. A method for solving Poisson's equation as a set of finite-difference equations is described for an arbitrary localized charge distribution expanded in a partial-wave representation. 0 MB)Finite Difference Discretization of Elliptic Equations: 1D Problem ()(PDF - 1. A self‐consistent, one‐dimensional solution of the Schrödinger and Poisson equations is obtained using the finite‐difference method with a nonuniform mesh size. The solution of the. Solution of ordinary differential equations (6 hours) 6. A First Course in the Numerical Analysis of Differential Equations. Book Cover. Philadelphia, 2006, ISBN: -89871-609-8. Additional Information: A Master's Thesis. 1D Poisson solver with finite differences. Lecture notes on finite volume models of the 2D Diffusion equation. ) Ordinary differential equations, explicit and implicit Runge-Kutta and multistep methods, convergence and stability. where u is the velocity and vis the vorticity. Numerical solution method such as Finite Difference methods are often the only practical and viable ways to solve these differential equations. A finite difference method proceeds by replacing the derivatives in the differential equations by finite difference approximations. In a sense, a finite difference formulation offers a more direct approach to the numerical so-lution of partial differential equations than does a method based on other formulations. LeVeque SIAM, Philadelphia, 2007 http://www. , 51(4):2470–2490, 2013. The boundary value problem of linear elasticity 151 2. 
Finite-difference, finite element and finite volume methods are three important methods for numerically solving partial differential equations. This formula is usually called the five-point stencil: the stencil of a point in the grid is made up of the point itself together with its four "neighbors". Textbook: Numerical Solution of Differential Equations -- Introduction to Finite Difference and Finite Element Methods, Cambridge University Press, in press. Finite difference method to solve Poisson's equation in two dimensions. To find a numerical solution to equation (1) with finite difference methods, we first need to define a set of grid points in the domain D as follows: choose a space step size Δx = (b − a)/N (N an integer) and a time step size Δt, draw a set of horizontal and vertical lines across D, and take all intersection points (x_i, t_j) as the grid points. In the left view I represented the charge density, generated with two Gaussians; in the right view is the solution to the Poisson equation. In this paper we will develop a method based on the Fast Fourier Transform (FFT) for the numerical solution of Poisson's equation in a rectangle. These iterative methods are often referred to as relaxation methods, as an initial guess at the solution is allowed to slowly relax towards the true solution, reducing the errors as it does so. The finite difference method for solving the Poisson equation is simply (Δ_h u)_{i,j} = f_{i,j} for 1 ≤ i ≤ m, 1 ≤ j ≤ n, with appropriate processing of boundary conditions. In general, the right-hand side of this equation is known, and the values on the left-hand side, except for the boundary values, are unknown.
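As a concrete illustration of the five-point stencil, here is a minimal sketch (not code from any of the works excerpted here; the function name and grid layout are invented for the example) that solves −Δu = f on the unit square with zero Dirichlet boundary values by Jacobi relaxation:

```python
import math

def solve_poisson_jacobi(n, f, tol=1e-10, max_sweeps=20000):
    """Solve -laplace(u) = f on the unit square with u = 0 on the boundary,
    using the five-point stencil on an (n+1) x (n+1) grid and Jacobi iteration."""
    h = 1.0 / n
    u = [[0.0] * (n + 1) for _ in range(n + 1)]
    for _ in range(max_sweeps):
        new = [row[:] for row in u]
        change = 0.0
        for i in range(1, n):
            for j in range(1, n):
                # five-point formula: u_ij = (sum of 4 neighbours + h^2 f_ij) / 4
                new[i][j] = 0.25 * (u[i - 1][j] + u[i + 1][j] +
                                    u[i][j - 1] + u[i][j + 1] +
                                    h * h * f(i * h, j * h))
                change = max(change, abs(new[i][j] - u[i][j]))
        u = new
        if change < tol:
            break
    return u
```

With f(x, y) = 2π² sin(πx) sin(πy), the computed grid values reproduce the exact solution sin(πx) sin(πy) to within the O(h²) discretization error of the scheme.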
Laplace and Poisson's equations in a rectangular region: five-point finite difference schemes, Liebmann's iterative method, Dirichlet and Neumann conditions; Laplace's equation in polar coordinates: finite difference schemes; approximation of derivatives near a curved boundary while using a square mesh. A noniterative finite-difference method for the solution of Poisson's and Laplace's equations for linear boundary conditions is given. These finite difference approximations are algebraic in form; they relate the value of the dependent variable at a point to the values at neighbouring grid points. Richardson Cascadic Multigrid Method for the 2D Poisson Equation Based on a Fourth Order Compact Scheme, Li Ming and Li Chen-Liang, Journal of Applied Mathematics, 2014; Accurate Simulation of Contaminant Transport Using High-Order Compact Finite Difference Schemes, Gurhan Gurarslan, Journal of Applied Mathematics, 2013. They are made available primarily for students in my courses. This gives a large but finite algebraic system of equations to be solved in place of the differential equation, something that can be done on a computer. MINRES: [X,FLAG,RELRES,ITN,RESVEC] = MINRES(A,B,RTOL,MAXIT) solves the linear system of equations A*X = B by means of the MINRES iterative method. A 1-D finite difference code for a solid with a surface radiation boundary, in Matlab; Essentials of Computational Physics. Then, the fuzzy Poisson's equation is discretized by the fuzzy finite difference method and solved as a linear system of equations. I think the convergence rate of the solution of this problem to the solution of the original problem may already be only second order, in which case refining the method can't improve anything. It can be used to develop a set of linear equations for the values of the solution at the grid points (x, y).
Stability and convergence of a second order mixed finite element method for the Cahn-Hilliard equation. After an introduction to the various numerical schemes, each equation type (parabolic, elliptic, and hyperbolic) is allocated a separate chapter. The approach used allows for solving the full set of the NPP equations without approximations such as the electroneutrality or constant-field assumptions. It is not the only option; alternatives include the finite volume and finite element methods, and also various mesh-free approaches. Finite Difference Methods for PDEs. We will focus on the 4D drift-kinetic model of plasma motion. The three main numerical ODE solution methods (LMM, Runge-Kutta methods, and Taylor methods) all have FE as their simplest case, but then extend in different directions in order to achieve higher orders of accuracy and/or better stability properties. Numerical Solutions to Poisson Equations. Poisson's equation is usually solved by some discretisation technique such as the boundary element method (BEM) or the finite element method (FEM). Causality and energy conservation: Huygens' principle. We call this approximation a finite difference approximation (FDA). In mathematics, finite-difference methods are numerical methods for solving differential equations by approximating them with difference equations, in which finite differences approximate the derivatives. The proposed method can be easily programmed to readily apply to a plate problem. Solution of the Poisson's equation on an unstructured mesh using Matlab distmesh and finite differences (PDF).
We will see that nonlinear problems can be solved just as easily as linear problems in FEniCS, by simply defining a nonlinear variational problem and calling the solve function. Be able to describe the differences between finite-difference and finite-element methods for solving PDEs. Such matrices are called "sparse matrices". By means of this example and generalizations of the problem, advantages and limitations of the approach will be elucidated. The NPP equations were solved using the VLUGR2 solver, based on an adaptive-grid finite-difference method with implicit time-stepping. You can vary the number of grid points and the boundary conditions. We believe that the algorithm is a valuable addition to typical textbook discussions of the five-point finite-difference method for Poisson's equation. This is known as the five-point difference formula for Laplace's equation. Finite volume method for the advection-diffusion equation: we want to compute F(x_i) with F(q(x)) = q_x(x) + v q(x), where q(x) := q_i on each cell [x_i, x_{i+1}]. Computing the diffusive flux is straightforward: q_x at x_{i+1/2} is (q(x_{i+1}) − q(x_i))/h. Options for the advective flux vq: the symmetric flux, (v q(x_i) + v q(x_{i+1}))/2, or the upwind flux, v q(x_i) for v > 0. The aim of this paper is to give a detailed explanation of the parallel solution of a partial differential equation (PDE). Solve the 1D heat equation in Mathematica. Keywords: Laplace equation, Markov chain. Introduction: There are different methods to solve the Laplace equation, like finite element methods, finite difference methods, the moment method and the Markov chain method [7]. Relaxation Methods for Partial Differential Equations: Applications to Electrostatics, David G.
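To make the "sparse matrix" remark concrete, the sketch below (purely illustrative; the function name and storage choice are invented for the example, not taken from any cited source) assembles the five-point finite difference matrix for an n × n interior grid in a dictionary-of-keys format. Each of the N = n² rows has at most five nonzero entries, so almost all of the N² entries of the dense matrix are zero:

```python
def assemble_five_point(n):
    """Assemble the five-point Laplacian matrix for an n x n interior grid
    (Dirichlet boundary) in dictionary-of-keys sparse format.
    The row/column index of interior node (i, j) is k = i*n + j."""
    A = {}
    for i in range(n):
        for j in range(n):
            k = i * n + j
            A[(k, k)] = 4.0                      # diagonal entry
            if i > 0:
                A[(k, k - n)] = -1.0             # south neighbour
            if i < n - 1:
                A[(k, k + n)] = -1.0             # north neighbour
            if j > 0:
                A[(k, k - 1)] = -1.0             # west neighbour
            if j < n - 1:
                A[(k, k + 1)] = -1.0             # east neighbour
    return A
```

For n = 100 the dense matrix would have 10⁸ entries, while this dictionary stores fewer than 5 · 10⁴, which is why sparse storage and sparse solvers are essential for finite difference systems.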
This simple but very powerful method for constructing test problems is called the method of manufactured solutions: pick a simple expression for the exact solution, plug it into the equation to obtain the right-hand side (source term f), then solve the equation with this right-hand side, using the exact solution as a boundary condition, and try to reproduce the exact solution. Exact solution, if one exists. NBIT: the number of iterations used to compute the solution X. Various 2- and 3-dimensional problems are solved using this method, and the results are compared with more conventional techniques, particularly the finite-difference method, which it may be regarded to supersede. A Direct Method for the Solution of Poisson's Equation with Neumann Boundary Conditions on a Staggered Grid of Arbitrary Size, U. Schumann (Institut für Reaktorentwicklung, Kernforschungszentrum Karlsruhe, Postfach 3640, 75 Karlsruhe, Federal Republic of Germany) and Roland A. Finite difference method and finite element method. The objective of this paper is to develop an improved finite difference method with a compact correction term (CCFDM) for solving Poisson's equations. Mixed Semi-Lagrangian/Finite Difference Methods for Plasma Simulations, Francis Filbet and Chang Yang, abstract. Authors: Gangjoon Yoon. In this paper I present numerical solutions of a one-dimensional heat equation together with an initial condition and Dirichlet boundary conditions. Lecture 8: Solving the Heat, Laplace and Wave equations using finite difference methods (compiled 26 January 2018). In this lecture we introduce the finite difference method that is widely used for approximating PDEs on a computer. These problems are called boundary-value problems. Parallel implementations. FDM: finite difference methods; the Poisson equation as an elliptic model problem. In this study we systematically analyzed the CPU time and memory usage of five commonly used finite-difference solvers.
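A minimal sketch of this manufactured-solution workflow (illustrative only; the solver and names are not from any cited work): pick u(x) = sin(πx) on (0, 1), so that −u″ = π² sin(πx); solve the two-point boundary value problem with central differences; and check that halving h cuts the error by about four, the expected second-order rate.

```python
import math

def solve_bvp(n, f, ua, ub):
    """Solve -u'' = f on (0,1) with u(0)=ua, u(1)=ub, using central
    differences at n interior points; the tridiagonal system
    (-u[i-1] + 2u[i] - u[i+1]) / h^2 = f(x_i) is solved by the
    Thomas algorithm (forward elimination, back substitution)."""
    h = 1.0 / (n + 1)
    b = [h * h * f((i + 1) * h) for i in range(n)]
    b[0] += ua          # boundary values move to the right-hand side
    b[-1] += ub
    c = [0.0] * n       # modified super-diagonal coefficients
    d = [0.0] * n       # modified right-hand side
    c[0], d[0] = -0.5, b[0] / 2.0
    for i in range(1, n):
        denom = 2.0 + c[i - 1]
        c[i] = -1.0 / denom
        d[i] = (b[i] + d[i - 1]) / denom
    u = [0.0] * n
    u[-1] = d[-1]
    for i in range(n - 2, -1, -1):
        u[i] = d[i] - c[i] * u[i + 1]
    return u
```

Running this with the manufactured right-hand side at h = 1/20 and h = 1/40 gives maximum errors whose ratio is close to 4, confirming second-order convergence without ever needing a problem whose solution is unknown.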
A numerical solution of the nonlinear Poisson-Boltzmann equation. There is a well-developed theory for the numerical solution of the HJB equation using finite difference methods; see "An Introduction to Finite Difference Methods for…". The solution is computed in three steps. The method of differential quadrature is demonstrated by solving the two-dimensional Poisson equation. Part II: Finite Difference/Volume Discretisation for CFD: the finite volume method for the advection-diffusion equation; a finite difference/volume method for the incompressible Navier-Stokes equations; the marker-and-cell method; staggered grids; spatial discretisation of the continuity equation; spatial discretisation of the momentum equations; time discretisation. In general the solution u cannot be expressed in terms of elementary functions, and numerical methods are the only way to solve the differential equation, by constructing approximate solutions. Introduction: One of the major advantages of the BEM over the finite element and finite difference methods is that only boundary discretization is usually required, rather than the domain discretization needed in those other methods. Journal of Molecular Structure: Theochem, 2005. Numerical Modeling and Analysis of the Radial Polymer Casting. Finite differences. A Finite Difference Scheme for Option Pricing in Jump Diffusion and Exponential Lévy Models, Rama Cont and Ekaterina Voltchkova, abstract. Elastic plates. Two novel matrices are determined, allowing a direct and exact formulation of the solution of the Poisson equation.
Finite-difference approximations to the three boundary value problems for Poisson's equation are given, with discretization errors of O(h³) for the mixed boundary value problem, O(h³|ln h|) for the Neumann problem and O(h⁴) for the Dirichlet problem. The heat equation; von Neumann stability analysis and Fourier transforms; the ADI method. More generally, the Jacobi method usually parallelizes well if the underlying grid is partitioned in this manner, since all components of x can be updated simultaneously. Unfortunately, Gauss-Seidel methods require successive updating of the solution components in a given order (in effect, solving a triangular system). A document containing the material on 2D finite elements for the Poisson equation is available. The global vector [f] of size M is assembled such that the FEM problem reduces to solving the matrix equation [K]·[φ] = [f]. The Finite Element Method is a general technique for constructing approximate solutions to boundary-value problems. Cubic spline interpolation. The Poisson equation (14.3) is to be solved on the square domain subject to a Neumann boundary condition; to generate a finite difference approximation of this problem we use the same grid as before. Boundary conditions, Neumann and Dirichlet: we solve the transient heat equation ρc_p ∂T/∂t = ∂/∂x(k ∂T/∂x) (1) on the domain −L/2 ≤ x ≤ L/2, subject to the boundary conditions of fixed temperature T(x = −L/2, t) = T_left and T(x = L/2, t) = T_right (2), with an initial condition. Development, analysis and implementation of stable and accurate methods for the numerical solution of partial differential equations with mixed initial and boundary conditions specified. Sun, Maximal regularity of fully discrete finite element solutions of parabolic equations, SIAM J.
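The contrast drawn above between Jacobi (simultaneous updates, easy to parallelize) and Gauss-Seidel (sequential updates) can also be seen in convergence speed. The sketch below (illustrative only; not from the cited sources) counts the sweeps each method needs on the 1D model problem; Gauss-Seidel, which uses freshly updated components within each sweep, typically needs about half as many sweeps as Jacobi:

```python
def sweep_counts(n, tol=1e-8, max_sweeps=100000):
    """Count sweeps needed by Jacobi and Gauss-Seidel to solve the 1D model
    problem (2u_i - u_{i-1} - u_{i+1} = h^2, u_0 = u_n = 0) to tolerance tol.
    Returns [jacobi_sweeps, gauss_seidel_sweeps]."""
    h = 1.0 / n
    rhs = h * h
    counts = []
    for method in ("jacobi", "gauss-seidel"):
        u = [0.0] * (n + 1)
        for sweep in range(1, max_sweeps + 1):
            # Jacobi reads a frozen copy of the previous sweep;
            # Gauss-Seidel reads u itself, picking up in-sweep updates.
            src = u[:] if method == "jacobi" else u
            change = 0.0
            for i in range(1, n):
                new = 0.5 * (src[i - 1] + src[i + 1] + rhs)
                change = max(change, abs(new - u[i]))
                u[i] = new
            if change < tol:
                counts.append(sweep)
                break
        else:
            counts.append(max_sweeps)
    return counts
```

For this model problem the asymptotic convergence factors are cos(πh) for Jacobi and cos²(πh) for Gauss-Seidel, which is exactly the observed factor-of-two difference in sweep counts.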
Due to stability problems which occur as a result of the source term. Use finite difference quotients to write a system of difference equations to solve the two-point BVP; higher order accurate schemes; systems of first order BVPs; use what we learned from 1D and extend to Poisson's equation in 2D and 3D; learn how to handle different boundary conditions. Methods for treating these systems of equations. Lowengrub, C. u = u_b at the boundary. The method is chosen because it does not require linearization or assumptions of weak nonlinearity, the solutions are generated in the form of a general solution, and it is more realistic compared to the method of simplifying the physical problems. And the Shortley-Weller method [2] is a basic finite difference method for solving the Poisson equation with a Dirichlet boundary condition. Approximating the derivatives of a function by finite differences: recall that the derivative of a function was defined by taking the limit of a difference quotient, f′(x) = lim_{Δx→0} (f(x + Δx) − f(x))/Δx (8.1). Keywords: immersed interface method, Navier-Stokes equations, Cartesian grid method, finite difference, fast Poisson solvers, irregular domains.
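The difference quotient in (8.1) can be checked numerically. The little sketch below (purely illustrative; the function names are invented) compares the one-sided quotient, whose error is O(Δx), against the centred quotient (f(x + Δx) − f(x − Δx))/(2Δx), whose error is O(Δx²), for f(x) = sin x at x = 1:

```python
import math

def forward_diff(f, x, dx):
    """One-sided difference quotient (f(x+dx) - f(x)) / dx, first-order accurate."""
    return (f(x + dx) - f(x)) / dx

def central_diff(f, x, dx):
    """Centred difference quotient (f(x+dx) - f(x-dx)) / (2 dx), second-order accurate."""
    return (f(x + dx) - f(x - dx)) / (2.0 * dx)
```

With Δx = 10⁻³ the forward quotient misses f′(1) = cos 1 by roughly 4 · 10⁻⁴, while the centred quotient is accurate to roughly 10⁻⁷, illustrating why centred differences underlie most of the schemes discussed here.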
Here, I assume the readers have basic knowledge of the finite difference method, so I do not write out the details behind it: discretization error, stability, consistency, convergence, and the fastest/optimal iteration algorithm. This course will cover the numerical solution of PDEs: the method of lines, finite differences, finite element and spectral methods, to an extent necessary for successful numerical modeling of physical phenomena. The ordinary finite difference method is used to solve the governing differential equation of the plate deflection. The Poisson equation for the pressure is calculated by the successive over-relaxation (SOR) method. To validate the results of the numerical solution, the finite difference solution of the same problem is compared with the finite element solution. Its homogeneous form is Laplace's equation. Two Finite Difference Methods for the Poisson-Boltzmann Equation, I-Liang Chern, National Taiwan University, Taipei. Accuracy of the numerical solution of the Poisson-Boltzmann equation. The Cauchy problem for the heat equation: Poisson's formula. The key idea of the new approach is to represent the solution with a contour integral connecting the nodal values of each local domain centered at each isolated grid node, based on the boundary integral equation on the local domain, and to calculate that contour integral. Methods of this type are initial-value techniques. The field is the domain of interest and most often represents a physical structure. Finite difference methods.
With the initial condition u(x, 0) = u₀(x). A solution to a differential equation is a function. Suppose we seek a solution u(x, y) of the Laplace equation ∂²u/∂x² + ∂²u/∂y² = 0 subject to Dirichlet boundary conditions. A web app solving Poisson's equation in electrostatics, using finite difference methods for discretization followed by Gauss-Seidel methods for solving the equations. LeVeque, draft version for use in the course AMath 585-586, University of Washington, version of September 2005; warning: these notes are incomplete and may contain errors. Initial value problems: Fourier transforms, fundamental solutions, the nonhomogeneous equation. Partial differential equations generally have many different solutions: ∂²u/∂x² = a and ∂²u/∂y² = −a. Evidently the sum of these two is zero, and so the function u(x, y) is a solution of the partial differential equation ∂²u/∂x² + ∂²u/∂y² = 0, Laplace's equation. Recall the function we used in our reminder. Note that it is very important to keep clear the distinction between the convergence of Newton's method to a solution of the finite difference equations and the convergence of this finite difference solution to the solution of the differential equation. The homotopy decomposition method, a relatively new analytical method, is used to solve the 2D and 3D Poisson equations and biharmonic equations.
The Finite Volume Method (FVM) is a discretization method for the approximation of a single partial differential equation or a system of partial differential equations expressing the conservation, or balance, of one or more quantities. However, the FDM is very popular. A new and innovative method for solving the 1D Poisson equation is presented, using the finite difference method with Robin boundary conditions. Laplace's equation is a partial differential equation, and its solution relies on the boundary conditions imposed on the system, from which the electric potential is the solution for the area of interest. If the membrane is in steady state, the displacement satisfies the Poisson equation −Δu = f̃, with f̃ = f/k. The Poisson equation may be solved using a Green's function; a general exposition of the Green's function for the Poisson equation is given in the article on the screened Poisson equation. A fast finite difference method is proposed to solve the incompressible Navier-Stokes equations defined on a general domain. Finite Difference, Finite Element and Finite Volume Methods for the Numerical Solution of PDEs, Vrushali A. Elliptic, parabolic or hyperbolic. de la Pena, and Dale Anderson, abstract. Finite Difference Method (now with free code!): the notebook will implement a finite difference method for elliptic boundary value problems, and the comments in the notebook will walk you through how to get a numerical solution. Least squares fit.
Order of accuracy and consistency. Example: the heat equation. Standard finite difference methods require more regularity of the solution. In modern variants of projection methods, the subspaces tend to be chosen so that the functions have local supports, and in each equation (4) only a finite number of coefficients are non-zero. There are many forms of model hyperbolic partial differential equations that are used in analysing various finite difference methods. Here S(x) is the quantity of solute (per unit volume and time) being added to the solution at the location x. Numerical Solution of Partial Differential Equations I: finite difference methods for solving time-dependent initial value problems of partial differential equations. A mesh-free method does not require the connectivity of nodal points of a mesh or element. In that case, going to a numerical solution is the only viable option. The Laplace operator is common in physics and engineering (heat equation, wave equation). The methods depend upon a parameter p > 0, and reduce to the classical Störmer-Cowell methods for p = 0. Von Neumann stability of finite difference methods for wave and diffusion equations. In this paper, we propose a new finite difference representation for solving a Dirichlet problem of Poisson's equation on R³.
Fast finite difference solutions of the three-dimensional Poisson equation; numerical solution of the heat equation in cylindrical coordinates. We have seen that a general solution of the diffusion equation can be built as a linear combination of basic components e^{−αk²t} e^{ikx}. A fundamental question is whether such components are also solutions of the finite difference schemes. Using the Finite-Difference Method. Numerical solution of partial differential equations (8 hours). Strikwerda (second edition); Numerical Solution of Partial Differential Equations by the Finite Element Method, by Claes Johnson. Figure 63: Solution of Poisson's equation in two dimensions with simple Dirichlet boundary conditions in the -direction. Study on a Poisson's Equation Solver Based on a Deep Learning Technique, Tao Shan, Wei Tang, Xunwang Dang, Maokun Li, Fan Yang, Shenheng Xu, and Ji Wu, Tsinghua National Laboratory for Information Science and Technology (TNList), Department of Electronic Engineering, Tsinghua University, Beijing, China. Department of Mathematics, Oregon State University, Corvallis, OR; DOE Multiscale Summer School, June 30, 2007. Finite Elements. As electronic digital computers are only capable of handling finite data and operations, any numerical method requiring the use of computers must first be discretized.
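Whether those Fourier components survive in a finite difference scheme is exactly what a von Neumann calculation answers. A small sketch (illustrative; this is the standard amplification-factor computation for the explicit FTCS scheme, not code from the cited texts): inserting u_j^n = Gⁿ e^{ikjΔx} into the FTCS discretization of u_t = α u_xx gives G(k) = 1 − 4r sin²(kΔx/2) with r = αΔt/Δx², and |G| ≤ 1 for every wavenumber precisely when r ≤ 1/2.

```python
import math

def ftcs_amplification(r, theta):
    """Amplification factor G of the FTCS scheme for u_t = alpha * u_xx,
    where r = alpha*dt/dx**2 and theta = k*dx is the scaled wavenumber."""
    return 1.0 - 4.0 * r * math.sin(0.5 * theta) ** 2

def ftcs_stable(r, samples=1000):
    """Check |G| <= 1 over a sweep of wavenumbers theta in (0, pi]."""
    return all(abs(ftcs_amplification(r, math.pi * (s + 1) / samples)) <= 1.0
               for s in range(samples))
```

Sweeping the wavenumbers confirms stability for r = 0.25 and r = 0.5 and instability for r = 0.6, where the most oscillatory mode (θ = π) has G = −1.4.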
Separation of Variables for Partial Difference Equations and Analytic Solutions of Ordinary Difference Equations. Quinn, Parallel Programming in C with MPI and OpenMP. Finite difference example: a 1D implicit heat equation. Finite difference method principle: derivatives in the partial differential equation are approximated by linear combinations of function values at the grid points. Finite Difference Method for the Solution of the Laplace Equation, Ambar K. Mitra, Department of Aerospace Engineering, Iowa State University. Introduction: the Laplace equation is a second order partial differential equation (PDE) that appears in many areas of science and engineering, such as electricity, fluid flow, and steady heat conduction. Finite Difference Method: numerical solution of the Laplace equation using MATLAB. The finite element analysis of any problem. Solution of the 2D Navier-Stokes equations by a coupled finite difference-dual reciprocity boundary element method. An example of the application of finite differences can also be seen in Richardson's extrapolation method. Finite difference methods (FDM) are numerical methods for solving (partial) differential equations, where (partial) derivatives are approximated by finite differences. This equation is called the Poisson equation. Two-Dimensional Heat Equation. In recent years, stimulated by the development of high-speed computers, much work has been done to solve partial differential equations by finite-difference methods. A Moving Finite Difference Method for Partial Differential Equations, Guojun Liao, Jianzhong Su, Zhong Lei, Gary C.
In this case, the fuzzy Poisson's equation with an initial condition is changed by the fuzzy finite difference method into a linear system of equations. TMA4212: Numerical solution of differential equations by difference methods. We will see the iterative methods come into play when we consider the Poisson equation. FFTs: as already motivated, FFTs can be used to transform a PDE into an algebraic equation in Fourier space, enabling its easy solution. Spectral Method in the Solution of the PE on a Cube. To examine the performance of the implemented iterative algorithm, a number of experiments were conducted. Upper Saddle River, NJ: Prentice Hall, 1987. CPU time and memory usage are two vital issues that any numerical solver for the Poisson-Boltzmann equation has to face in biomolecular applications. This module implements a family of first-order mimetic methods that give consistent discretizations of Poisson-type flow equations on general polyhedral and polygonal grids.
The differential properties of the solutions of Laplace's equation, and the errors in the method of nets with boundary values in C² and C^{1,1}. Many academics refer to boundary value problems as position-dependent and initial value problems as time-dependent. The visualization and animation of the solution is then introduced, and some theoretical aspects of the finite element method are presented. The Finite Element Method (FEM) is widely used in the numerical solution of the electric field equation, and has become very popular. Obtain algebraic equations. The finite element method provides greater flexibility to model complex geometries than the finite difference and finite volume methods do. University of Wisconsin. The discrete Poisson equation is frequently used in numerical analysis as a stand-in for the continuous Poisson equation, although it is also studied in its own right. The procedure is an extension of the widely used technique developed by Loucks for spherically symmetric charge densities. We present fast methods for solving Laplace's and the biharmonic equations on irregular regions with smooth boundaries. We consider two types of models: finite difference models and finite element models. This method uses the finite-difference analogue of an equation to improve the order of convergence, thus resulting in a more accurate method. Numerical Methods for Partial Differential Equations, 12(2):235-243, 1996.
Dirichlet-Neumann boundary problems for the Poisson equation, and the diffusion and wave equation in the quasi-stationary regime, using the finite difference method in the one-dimensional case. Example: the 2D Poisson equation. Parabolic PDEs: explicit and implicit schemes. Relaxation methods: the Jacobi and Gauss-Seidel methods. An energy stable, hexagonal finite difference scheme for the 2D phase field crystal amplitude equations. There are various methods for numerical solution. Hyperbolic (wave) equations: finite difference methods, d'Alembert's solution, the method of characteristics, and additional explicit and implicit methods. (ii) Approximate the given differential equation by equivalent finite difference equations that relate the solutions to the grid points. This way of approximation leads to an explicit central difference method, which requires $$r = \frac{4 D \Delta t^2}{\Delta x^2+\Delta y^2} \le 1$$ to guarantee stability. For example, consider a solution to the Poisson equation in the square region 0 ≤ x ≤ a. It will again be assumed that the region is two-dimensional, leaving the three-dimensional case to the homework. Consider the normalized heat equation in one dimension, with homogeneous Dirichlet boundary conditions. The BEM has the inherent advantage, for problems in unbounded domains, of reducing the spatial dimension by one.
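The stability restriction on explicit schemes can also be observed experimentally. The sketch below (illustrative only; names invented, and using the classical 1D bound r = Δt/Δx² ≤ 1/2 for the FTCS scheme rather than the 2D criterion quoted above) advances the normalized 1D heat equation u_t = u_xx with homogeneous Dirichlet boundary conditions; with r = 0.4 the numerical solution decays as it should, while with r = 0.6 it blows up:

```python
import math

def ftcs_heat(n, r, steps):
    """Advance u_t = u_xx on (0,1) with u(0,t) = u(1,t) = 0 by the explicit
    FTCS scheme, starting from u(x,0) = x(1-x); r = dt/dx^2.
    Returns the maximum absolute value of the final solution."""
    u = [(i / n) * (1.0 - i / n) for i in range(n + 1)]
    for _ in range(steps):
        # interior update u_i <- u_i + r (u_{i-1} - 2 u_i + u_{i+1})
        u = ([0.0] +
             [u[i] + r * (u[i - 1] - 2.0 * u[i] + u[i + 1]) for i in range(1, n)] +
             [0.0])
    return max(abs(v) for v in u)
```

On a 20-interval grid, 500 steps at r = 0.4 leave a solution of magnitude around 10⁻³, while at r = 0.6 the highest-frequency modes are amplified by a factor of about 1.39 per step and the solution grows astronomically, matching the von Neumann prediction for this scheme.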
The Poisson equation may be solved using a Green's function; a general exposition of the Green's function for the Poisson equation is given in the article on the screened Poisson equation. A numerical solution of the nonlinear Poisson-Boltzmann equation. This formula is usually called the five-point stencil: the stencil of a point in the grid is made up of the point itself together with its four "neighbors". And the Buneman algorithm for the solution of the standard finite difference formulae. Authors: Gangjoon Yoon. TMA4212 Numerical solution of differential equations by difference methods. Finite Difference Time Domain Method and the Yee Algorithm. • Fast methods for linear algebra (solve Ax = b in O(N) time for A a dense N × N matrix). Derive the finite volume model for the 2D diffusion (Poisson) equation; show and discuss the structure of the coefficient matrix for the 2D finite difference model; demonstrate use of MATLAB codes for solving the 2D Poisson equation. Laplace's equation is solved using the finite-difference method to generate the arbitrary spatial transforms. The conjugate gradient normal residual (CGNR) iterative method using composite Simpson's (CS) and finite difference (FD) discretization schemes in solving Fredholm integro-differential equations. What you see in there is just a section halfway through the 3D volume, with periodic boundary conditions. Finite Difference Method for Laplace's Equation. The Finite Volume Method (FVM) is a discretization method for the approximation of a single or a system of partial differential equations expressing the conservation, or balance, of one or more quantities. With this method, the partial spatial and time derivatives are replaced by a finite difference approximation. The Five-Point Star. This is the first time that this famous matrix is inverted explicitly, without using the right-hand side. Solving the generalized Poisson equation using FDM.
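As an illustration of the five-point stencil and Jacobi iteration mentioned above, here is a small sketch (not from any of the sources quoted; plain Python, with boundary values chosen from a known harmonic function so the result can be checked):

```python
# Jacobi iteration for the 2D Laplace equation u_xx + u_yy = 0 on the unit
# square, using the five-point stencil: each interior value is replaced by
# the average of its four neighbors.  Boundary values are taken from
# u(x, y) = x + y, which is harmonic, so the discrete solution reproduces
# it (up to the iteration tolerance).
def jacobi_laplace(n, iters=2000):
    h = 1.0 / (n - 1)
    # boundary cells get (i + j) * h, interior cells start at 0
    u = [[(i + j) * h if i in (0, n - 1) or j in (0, n - 1) else 0.0
          for j in range(n)] for i in range(n)]
    for _ in range(iters):
        new = [row[:] for row in u]
        for i in range(1, n - 1):
            for j in range(1, n - 1):
                new[i][j] = 0.25 * (u[i-1][j] + u[i+1][j]
                                    + u[i][j-1] + u[i][j+1])
        u = new
    return u, h
```

For a 9-by-9 grid the center value converges to u(0.5, 0.5) = 1.0.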
Numerical solution of Partial Differential Equations (8 hours). It includes practical applications for the numerical simulation of flow and transport in rivers and estuaries, the dam-break problem and overland flow. The FDA (finite-difference approximation) is only a computer-friendly proxy for the PDE.
Apart from an IMAP/POP service we provide a webmail front end to interact with our mail server via SquirrelMail. This tool has a very annoying feature: search results are ordered by date, but in the wrong direction, from old to new!
SquirrelMail is a very simple-to-administrate front end, not very nice, but if my experimental icedove doesn’t work I use it too. Furthermore we have staff members who only use this tool and aren’t impressed by real user-side clients like icedove or sylpheed. Whatever, I had to re-sort these search results!
Searching for a solution didn’t turn one up, so I had three options: modifying the SquirrelMail code itself (very bad idea, I know), providing a plugin for SquirrelMail, or writing a userscript.
Ok, hacking the core of SquirrelMail is deprecated and writing a plugin is too much work for now, so I scripted some JavaScript.
The layout of this website is lousy! I think they never heard of divs or CSS; everything is managed by tables in tables of tables and inline layout specifications -.- So detecting the right table wasn’t that easy. I had to find the table that contains a headline with the key From:
If I’ve found such a table, all the rows have to be sorted from last to first. Except the first ones defining the headline of that table. So I modified the DOM:
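The original snippets aren’t reproduced here, but the idea can be sketched roughly like this (a Greasemonkey-style sketch; SquirrelMail’s actual table soup may differ, so the selector logic is a guess):

```javascript
// Keep the header row(s) in place and reverse the remaining rows, so
// old-to-new becomes new-to-old.
function resortRows(rows, headerCount) {
  return rows.slice(0, headerCount).concat(rows.slice(headerCount).reverse());
}

// Applied to the DOM it would look roughly like this: find the table
// whose first row contains the key "From", then re-append every row in
// the new order (appendChild moves an existing node, so appending all
// rows in sequence yields exactly that sequence).
function resortResultTable(doc) {
  for (const table of doc.getElementsByTagName('table')) {
    const firstRowText = table.rows[0] ? table.rows[0].textContent : '';
    if (!firstRowText.includes('From')) continue;
    for (const row of resortRows(Array.from(table.rows), 1)) {
      row.parentNode.appendChild(row);
    }
    break;
  }
}
```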
Ok, that’s it! Using this script the search results are ordered in the correct way. Let’s wait for a response from these nerdy SquirrelMail users ;-)
# 6.11. J-factors to fluxes¶
Note
This section treats the conversion to fluxes/intensities of 1D runs. See Section 7.7 for similarly converting 2D J-factor maps to flux per pixel/intensity maps.
The flux from a single source or from the Galactic halo within a solid angle $$\Delta\Omega$$ can be computed in two manners:
1. Computing the differential energy spectrum for a fixed J-factor or solid angle $$\Delta\Omega$$:
This is done by CLUMPY’s -z module (see Section 7):
$ clumpy -z -D; clumpy -z -i clumpy_params_z.txt

This first generates a default parameter file, and then the $$\gamma$$-ray flux energy spectrum for a source at redshift $$z=0$$ and J-factor $$J = 10^{20}\,{\rm GeV}^2{\rm ~cm}^{-5}$$. If ROOT is installed, the following plot is shown:

Fig. 6.27 CLUMPY’s -z module to compute flux energy spectra for a given J-factor.

2. Computing a flux (differential or integrated in energy) as a function of a varying J-factor or solid angle $$\Delta\Omega$$:

This is done by CLUMPY’s -g2 or -h2 modules (see Section 7), depending on whether Galactic DM or a (list of) user-defined halo(s) is considered. Here, the user may choose to plot the differential flux or the integrated flux,

$\Phi_{\gamma,\nu}(E_{\rm min},E_{\rm max};\,\psi,\theta,\Delta\Omega) = \int\limits_{E_{\rm min}}^{E_{\rm max}}\frac{{\rm d} \Phi_{\gamma,\nu}}{{\rm d} E}(E,\psi,\theta,\Delta\Omega)\, {\rm d}E\,.$

Note

The option gSIM_IS_WRITE_FLUXMAPS = True must be enabled here.

For the example of the flux from Galactic DM within a solid angle $$\Delta\Omega$$ of an instrument pointed toward the Galactic anti-center $$(\psi_0, \theta_0) = (\pi, 0)$$:

$ clumpy -g2 -D; clumpy -g2 -i clumpy_params_g2.txt
If ROOT is installed, the following plot of the output data is shown:
Fig. 6.28 CLUMPY’s -g2 with default parameters to compute the flux (here: integrated in energy) from Galactic DM as a function of the integration radius $$\alpha_{\rm int}$$ of the search cone $$\Delta\Omega$$.
Using
$ clumpy -h2 -D; clumpy -h2 -i clumpy_params_h2.txt

produces analogous plots for the remote haloes defined in the $CLUMPY/data/list_generic.txt example file (see Section 7 and the Quick start tutorial):
Fig. 6.29 CLUMPY’s -h2 module with default parameters to compute the flux (here: integrated in energy) from user-defined haloes as a function of the integration radius $$\alpha_{\rm int}$$.
In addition to these two possibilities, the -g3/-g4 or -h3 modules can be used analogously to -g2 or -h2 to compute the intensities from Galactic DM or individual halo objects. For example,
$ clumpy -h3 -D; clumpy -h3 -i clumpy_params_h3.txt
compares the intensity profiles of the Abell Cluster and a mock dSphG halo:
Fig. 6.30 CLUMPY’s -h3 module with default parameters to compute $${\rm d}\Phi/{\rm d}\Omega$$ (here: integrated in energy) as a function of the distance $$\theta$$ from the halo centre. |
# UMass LIVING
## June 22, 2009
### (Apr 23) Meeting Summary: Wainwright / Jordan, p26-33
Filed under: Uncategorized — umassliving @ 5:43 am
Tags: ,
Worked out example of sum-product to calculate marginal at one node. Noted that $\sum_i \sum_j i j = (\sum_i i) (\sum_j j)$. Looked at extension of sum-product to simultaneously compute marginals at all nodes using $2 |E|$ messages. Noted that max can also be used in place of sum, and in fact, updates can be done on any commutative semirings, of which sum-product and max-product are specific examples.
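The push-the-sums-inward idea can be checked on a toy chain (a quick sketch, not from the meeting; the potentials are made up):

```python
# Sum-product on a 3-node chain x1 - x2 - x3 with binary variables: the
# unnormalized marginal of x1 computed by message passing must match
# brute-force summation over x2 and x3.  All potential values are small
# integers, so float equality is exact here.
psi12 = {(a, b): 1.0 + a + 2 * b for a in (0, 1) for b in (0, 1)}
psi23 = {(b, c): 2.0 + b * c for b in (0, 1) for c in (0, 1)}

# message from x3 to x2, then from x2 to x1 (sums pushed inward, exactly
# the sum_i sum_j i*j = (sum_i i)(sum_j j) observation)
m32 = {b: sum(psi23[(b, c)] for c in (0, 1)) for b in (0, 1)}
m21 = {a: sum(psi12[(a, b)] * m32[b] for b in (0, 1)) for a in (0, 1)}

brute = {a: sum(psi12[(a, b)] * psi23[(b, c)]
                for b in (0, 1) for c in (0, 1)) for a in (0, 1)}
assert m21 == brute
```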
Looked at junction tree algorithm. Noted that, using the terminology in the paper, a junction tree is a clique tree that satisfies the running intersection property. A clique tree can be constructed for any graph, but a junction tree can only be built on a triangulated graph. The nodes in a junction tree correspond to the maximal cliques of the triangulated graph, and the edges are set using maximum spanning tree, where the weight on an edge is the size of the separator set for the two nodes connected by the edge. Potentials are kept not only over the nodes in the junction tree but also the separator sets, and messages are sent by dividing by the stored potential in the separator set in order to avoid double-counting information. Worked through Example 2.2.
### (Apr 9) Meeting Summary: Wainwright / Jordan, p1-29
Filed under: Uncategorized — umassliving @ 5:42 am
Tags: ,
Specific issues covered – directed and undirected models, parameterizing undirected models using maximal cliques and non-maximal cliques, factor graphs, converting between directed, undirected models, and factor graphs, conditional independence, the need for moralization when converting from directed to undirected, conditional independence assumptions that can only be modeled by either directed or undirected models, explaining away, variable elimination, sum-product algorithm on trees, directed trees versus polytrees.
### (Mar 26) Meeting Summary: Roth & Black, “Field of Experts”
Filed under: Uncategorized — umassliving @ 5:40 am
Tags:
The main point of confusion was the exact difference between Product of Experts and Field of Experts. There were two aspects to this confusion. The first was whether or not Field of Experts was simply Product of Experts trained with overlapping patches. This is not the case, though it might appear so looking at equation 12. The key is that you have one normalizing factor $Z$ for each entire image (that doesn’t factor over each patch), whereas in Product of Experts you have one normalizing factor for each patch example.
The second point was the rationale behind coupling the patches this way. Although during training, one could pick out independent patches, when applying the model to an entire image, overlapping patches will be dependent, and one would want this dependence to be captured during the training process. The experiment results in section 5.3.5 validate this reasoning.
### (Mar 12) Meeting Summary: Liang & Jordan, “An Asymptotic Analysis of Generative, Discriminative, and Pseudolikelihood Estimators”
Filed under: Uncategorized — umassliving @ 5:40 am
Tags:
Why use log loss and Bayes Risk rather than 0-1 loss and Bayes Error? Then again, why use 0-1 loss rather than log loss?
Some confusion of $r( \cdot)$. $r$ is a partitioning, for generative and fully discriminative models, there is only one partitioning, but for pseudolikelihood discriminative, there is one $r_i$ for each node in the graphical model. For the generative model, there is only one partition that encompasses the entire outcome space Z. For the fully discriminative model, there is one partition for each data point $x$, and encompasses all possible labels Y. By maximizing the quantity (3), we are moving probability mass within a neighborhood $r(z)$ and putting it on the point z. Introducing the $r$ notation allows the generative, discriminative, and pseudolikelihood method to be discussed under one framework.
The take-away messages are Corollaries 1 and 2, and that discriminative models enjoy a $O(n^{-1})$ convergence in risk even when the model is misspecified, unlike the generative and pseudolikelihood methods, which matches the observation in Wainwright (2006) that one should use the same inference procedure in both training and testing.
### (Feb 26) Meeting Summary: Larochelle, Bengio, Louradour & Lamblin, “Exploring Strategies for Training Deep Neural Networks”
Filed under: Uncategorized — umassliving @ 5:39 am
Tags:
What is the difference between the RBM and the Autoassociator Network? Both seem quite similar, especially with the constraint on the Autoassociator Network that the encoding functions are sigmoids and $W^T = W^*$, and training the RBM using one step contrastive divergence. However, there appear to be important differences, arising from the fact that the RBM is a generative probabilistic model and the Autoassociator Network is not.
In particular, the best approach seems to be to apply some linear combination of both the updates for the RBM and the Autoassociator Network, as discussed on p26. One hypothesis is that “there is no guarantee that the RBM will encode in its hidden representation all the information in the input vector, [while] an autoassociator is trying to achieve this”.
This relates to the comment on p17 that, if, at some point in a deep belief network, the representation made of units at the highest level are mostly independent, then the next level would have no statistical structure to learn, and thus initialize the weights of the new RBM to zero (and appropriately set the bias weights for the penultimate layer). Conceivably, the RBM could also instead set very strong weights connecting a node in the penultimate layer only to the node in the top layer that is directly above it (a one-to-one connection). However, we are not sure which the RBM would actually do, using the training scheme given in the paper, and moreover, there is the further question of whether most of the information in the DBN would be contained in bias weights at intermediary levels of the DBN, which would not be as useful when constructing a classifier using the top layer of the DBN as input.
### (Feb 12) Meeting Summary: Weiss, Torralba & Fergus, “Spectral Hashing”
Filed under: Uncategorized — umassliving @ 5:37 am
Tags:
There was some confusion about independence versus correlation. Independence means $p(a,b) = p(a)p(b)$ (or mutual information is zero), while two random variables are uncorrelated if $E[AB] = E[A]E[B]$. It’s fairly easy to show that independence implies two variables are uncorrelated, but the reverse is not always true (Gaussian random variables are an example of an exception). One simple example is $X = -1, 0, 1$, each with $1/3$ probability, and $Y = 1$ if $X = 0$, otherwise $Y = 0$. Then X and Y are uncorrelated but clearly not independent.
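The simple example above can be verified mechanically (a quick sketch, not from the meeting):

```python
from fractions import Fraction

# X uniform on {-1, 0, 1}; Y = 1 iff X = 0, so Y is a function of X.
px = {-1: Fraction(1, 3), 0: Fraction(1, 3), 1: Fraction(1, 3)}
y = lambda x: 1 if x == 0 else 0
E = lambda f: sum(p * f(x) for x, p in px.items())

EX, EY, EXY = E(lambda x: x), E(lambda x: y(x)), E(lambda x: x * y(x))
assert EXY == EX * EY          # uncorrelated: E[XY] = E[X]E[Y] = 0
# ...but dependent: P(X=0, Y=1) = 1/3, while P(X=0)P(Y=1) = 1/9
assert Fraction(1, 3) != Fraction(1, 3) * EY
```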
There was some confusion also about PCA. Here is one summary [16].
One other question was how to go from the minimization (1) to the minimization (2), and why the eigenvectors are the solution to the relaxed version of (2). If you expand the $||y_i - y_j||^2$ terms into $y_i^2 + y_j^2 - 2 y_i y_j$ for each bit, you should be able to rearrange terms to get it to be 2 times the objective function of (2).
Here are two sets of lecture notes related to this [17] and [18]. These course notes deal with graphs and their Laplacians (the D-W term), which may help in understanding part of the paper. In particular, this set of notes [19] contains a proof of the Courant-Fischer theorem, which proves that the minimal value of the relaxed problem (2) is the minimal eigenvalue.
### (Feb 5) Meeting Summary: Jeff Hawkins, “On Intelligence”, first several chapters
Filed under: Uncategorized — umassliving @ 5:36 am
Tags:
What is intelligence? Is it behavior, or memory, or something else? General agreement that the Searle Chinese room argument is flawed. If intelligence is behavior, then is the Turing test an appropriate measure of intelligence? Does the fact that a system like ELIZA [21] can apparently fool some laypeople into believing that they are talking to an actual person, despite not having what most AI practitioners would consider real intelligence, indicate a flaw in the Turing test?
If intelligence is memory, then is something like Torralba’s 80 Million Tiny Images [22] the path to AI? Can general AI really be solved by such an approach? Can such an approach provide an interpretation of a conference room, handling the variety of possible chairs, tables, faces, viewing angles, etc?
What will a general AI solution require? Hawkins argues that any approach must have a time component, mimic the physical architecture of the human brain, and have many feedback connections. What else will have to be a part of any general AI solution? Higher-order feature induction?
« Previous Page
Blog at WordPress.com. |
# Filtering Audio Signals in MATLAB
I'm trying to apply a filter to an audio signal in MATLAB and having some trouble processing it.
So far, I have a transfer function that describes a K-weighted filter, and I am able to create a bode plot that looks correct.
Here is the script for that one:
I have another script that reads audio from a .wav file, plays it, and plots the waveform.
That works properly too. Now I want to run that audio file through the K-weighted filter but I'm having trouble with that part.
I am able to use the "designfilt" and "filter" functions to create various filters and process the audio, but I can't get that to work with the transfer function I've created.
My transfer function PF is expressed in the z domain, but it doesn't show up as a proper digital filter in MATLAB.
Is there a way to create a digital filter from my PF transfer function or its coefficients directly?
I've tried using "filtfilt" and "lsim" to process the audio but I haven't had any luck yet.
I've also tried "output=filter(PF,audio)", which returns an error message saying there aren't enough input arguments, and "output=filter(b1,a1,audio)" which returns a matrix that says "NaN" repeatedly.
I'm sure there's something obvious I'm missing or some syntax errors, hopefully somebody here can point me in the right direction.
dataout = filter(TF,datain); |
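For reference, one possible route (a sketch only, assuming PF was created as a Control System Toolbox tf object: tfdata extracts plain coefficient vectors, which is what filter expects; the .wav file name is made up):

```matlab
% Extract numerator/denominator coefficient vectors from the z-domain
% transfer function PF ('v' asks for plain row vectors, not cell arrays).
[b, a] = tfdata(PF, 'v');

[audio, fs] = audioread('input.wav');   % file name is just an example
out = filter(b, a, audio);              % run the K-weighted filter

% A matrix full of NaN usually means the recursion blew up: check that
% PF is a stable, proper discrete-time filter and that a(1) is nonzero
% (filter normalizes by a(1)).
sound(out, fs);
```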
# How to calculate the increments in the mean of a glm model with link function?
Suppose that I have the following model $$g(\mu)=\beta_0+\beta_1(x_1-\bar{x}_1)+\beta_2(x_2-\bar{x}_2)+\beta_3(x_2-\bar{x}_2)^2$$ where $g(\mu)$ is the complementary log-log function.
I calculated the increments in the mean for each unit change in the values of $x_1$ and $x_2$ fixing a value of $x_1$ then fixing the values of $x_2$.
Fixing the value of $x_2$ I can calculate the increment in $g(\mu)$ as
$$g_1(\mu)=\beta_0+\beta_1((x_1+1)-\bar{x}_1)+\beta_2(x_2-\bar{x}_2)+\beta_3(x_2-\bar{x}_2)^2$$ $$=g(\mu)+\beta_1$$
Fixing now the value of $x_1$ then
$$g_1(\mu)=\beta_0+\beta_1(x_1-\bar{x}_1)+\beta_2((x_2+1)-\bar{x}_2)+\beta_3((x_2+1)-\bar{x}_2)^2$$ $$=g(\mu)+\beta_2+\beta_3+2\beta_3(x_2-\bar{x}_2)$$
So these are the increments in the values of $g(\mu)$. To calculate the increments in $\mu$, do I just apply the inverse of the link function?
Edit: The increments in the the complementary log-log in the first case are $$g_1(\mu)-g(\mu)=\beta_1$$ so the increment in $\mu$ is $$1-\exp(-\exp(\beta_1))$$
In the second case the increments in $g$ are $$g_1(\mu)-g(\mu)=\beta_2+\beta_3+2\beta_3(x_2-\bar{x_2})$$ so the increments in $\mu$ are $$1-\exp(-\exp(\beta_2+\beta_3+2\beta_3(x_2-\bar{x_2})))$$
Is it right?
EDIT2: In this model the parameter estimates are $$\beta_0=-1.177,\quad\beta_1=-0.153,\quad\beta_2=0.153,\quad\beta_3=0.075$$
So in the first case the increment is $$1-\exp(-\exp(\beta_1))=0.57$$
Does it mean that the increment in the mean for each unit change in $x_1$ will be $0.57$? It doesn't make sense, since $\mu\in [0,1]$.
EDIT3: I did a test and fixed a value for $x_2$, then calculated $\mu$ for the values $x_1=40$ and $x_1=41$; the difference between those two values is $$0.3076309-0.309594=-0.0019$$ a small reduction in the response variable, and a reasonable value (I was expecting something like this). I'm starting to think that this expression for the increments is wrong.
If I fix a value for $x_2$ and start $x_1=30$, then start to calculate the values in $\mu$ for $x_1=40,50,60,70$ then I will have increments for $10,20,30,40$ right?
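A quick numerical check of the point raised in EDIT3 (Python used only for illustration; the baseline values of the linear predictor below are made up, only $\beta_1=-0.153$ is taken from EDIT2):

```python
import math

def mu(eta):
    """Inverse complementary log-log link: mu = 1 - exp(-exp(eta))."""
    return 1.0 - math.exp(-math.exp(eta))

b1 = -0.153                      # beta_1 from EDIT2
for eta0 in (-1.0, 0.0, 1.0):    # made-up baseline linear predictors
    print(eta0, mu(eta0 + b1) - mu(eta0))
# The change in mu depends on the baseline eta0, so it is not simply
# 1 - exp(-exp(beta_1)), which is what EDIT2 computed.
```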
• This matches my interpretation and understanding. I suppose the comment "remember these are multiplicative, not additive!" may be one to throw in as well. – mfloren Jun 16 '17 at 0:25
• @mfloren What you mean with "remember these are multiplicative, not additive!"? – user72621 Jun 16 '17 at 2:15
• Short answer: because all of these changes are within exponents, it multiplicatively affects the change in mean. Think: percentage change (and how that is different from additive change). – mfloren Jun 16 '17 at 18:04 |
1. 4u trig proof
http://prntscr.com/e7naqo
Thanks for any help
2. Re: 4u trig proof
Originally Posted by pikachu975
http://prntscr.com/e7naqo
Thanks for any help
One method is as follows. Note that $\cos^{2}x = \frac{1}{2}\left(1+c\right)$ and $\sin^{2}x = \frac{1}{2}\left(1-c\right)$, where $c \equiv \cos 2x$. Thus
\begin{align*}\cos^{6} x + \sin^{6}x &= \left(\cos^{2} x\right)^{3} + \left(\sin^{2} x\right)^{3}\\ &= \frac{1}{8}\left((1+c)^{3} + (1-c)^{3}\right) \\ &= \frac{1}{8}\left(1 + 3c^{2} + 1 + 3c^{2}\right) \\ &= \frac{1}{4}+\frac{3}{4}c^{2},\end{align*}
as required.
3. Re: 4u trig proof
Originally Posted by InteGrand
One method is as follows. Note that $\cos^{2}x = \frac{1}{2}\left(1+c\right)$ and $\sin^{2}x = \frac{1}{2}\left(1-c\right)$, where $c \equiv \cos 2x$. Thus
\begin{align*}\cos^{6} x + \sin^{6}x &= \left(\cos^{2} x\right)^{3} + \left(\sin^{2} x\right)^{3}\\ &= \frac{1}{8}\left((1+c)^{3} + (1-c)^{3}\right) \\ &= \frac{1}{8}\left(1 + 3c^{2} + 1 + 3c^{2}\right) \\ &= \frac{1}{4}+\frac{3}{4}c^{2},\end{align*}
as required.
Thanks for this, is there a way to go from RHS to LHS too?
4. Re: 4u trig proof
Originally Posted by pikachu975
Thanks for this, is there a way to go from RHS to LHS too?
Yeah, e.g. reverse the above steps.
5. Re: 4u trig proof
Originally Posted by pikachu975
http://prntscr.com/e7naqo
Thanks for any help
I thought to share another method to tackle the question:
$\frac{1}{4}+ \frac{3}{4}\cos^{2}2x = \frac{1}{4}+ \frac{3}{4}(1-\sin^{2}2x)$
$= 1 -3\sin^{2}x \,\cos^{2}x \quad\text{(using } \sin 2x = 2\sin x\cos x\text{)}$
$= 1-3\sin^{2}x \,\cos^{2}x \,(\sin^{2}x + \cos^{2}x) = 1-3\sin^{4}x\,\cos^{2}x -3\sin^{2}x \,\cos^{4}x$
$= \sin^{6}x +\cos^{6}x$
The last line is from the fact that $1 = (\sin^{2}x + \cos^{2}x)^{3} = \sin^{6}x + 3\sin^{4}x\,\cos^{2}x + 3\sin^{2}x\,\cos^{4}x + \cos^{6}x$.
6. Re: 4u trig proof
Using $\sin^2 x+\cos^2 x = 1\Rightarrow \sin^2 x+\cos^2 x+(-1) = 0$.
Now if $a+b+c=0$, then $a^3+b^3+c^3=3abc$.
So $(\sin^2 x)^3+(\cos^2 x)^3+(-1)^3=-3\sin^2 x\cdot \cos^2 x$.
So $\sin^6 x+\cos^6 x= \frac{1}{4}\left[4-3(\sin 2x)^2\right] = \frac{1}{4}\left[1+3\cos^2 (2x)\right]$.
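A quick numerical sanity check of the identity (not from the thread; Python used just for illustration):

```python
import math

# Verify cos^6 x + sin^6 x = 1/4 + (3/4) cos^2(2x) at a few sample points.
for x in (0.0, 0.1, 0.7, 1.3, 2.9):
    lhs = math.cos(x) ** 6 + math.sin(x) ** 6
    rhs = 0.25 + 0.75 * math.cos(2 * x) ** 2
    assert abs(lhs - rhs) < 1e-12
print("identity holds at the sampled points")
```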
7. Re: 4u trig proof
Originally Posted by juantheron
Using $\sin^2 x+\cos^2 x = 1\Rightarrow \sin^2 x+\cos^2 x+(-1) = 0$.
Now if $a+b+c=0$, then $a^3+b^3+c^3=3abc$.
So $(\sin^2 x)^3+(\cos^2 x)^3+(-1)^3=-3\sin^2 x\cdot \cos^2 x$.
So $\sin^6 x+\cos^6 x= \frac{1}{4}\left[4-3(\sin 2x)^2\right] = \frac{1}{4}\left[1+3\cos^2 (2x)\right]$.
Where is the 3abc from? Thanks btw
8. Re: 4u trig proof
Originally Posted by pikachu975
Where is the 3abc from? Thanks btw
That fact follows from the factorisation
$a^{3} + b^{3} + c^{3} -3abc = (a+b+c)\left(a^{2} + b^{2} + c^{2} - ab -bc -ca\right).$
# x and y are positive integers. What is the units digit of the 3^4x...
Intern
Joined: 16 Feb 2016
Posts: 49
Concentration: Other, Other
x and y are positive integers. What is the units digit of the 3^4x... [#permalink]
### Show Tags
11 May 2016, 22:27
Difficulty: 55% (hard)
Question Stats: 60% (01:33) correct, 40% (01:34) wrong, based on 163 sessions
x and y are positive integers.
What is the units digit of the result of $$3^{4x+2y+6}$$?
(1) x=1
(2) y=2
Extract 2 from the power and convert the equation to the 9 to the power of...
Math Expert
Joined: 02 Aug 2009
Posts: 7961
x and y are positive integers. What is the units digit of the 3^4x... [#permalink]
11 May 2016, 22:42
maipenrai wrote:
x and y are positive integers.
What is the units digit of the result of $$3^{4x+2y+6}$$?
(1) x=1
(2) y=2
Extract 2 from the power and convert the equation to the 9 to the power of...
Hi,
$$3^{4x+2y+6} = 3^{4x}+3^{2y}+3^6$$...
Let's see each term:
1) $$3^{4x}$$ - since the power is a multiple of 4, the units digit will be same as 3^4 or 1.
2) $$3^{2y}$$- this will depend on VALUE of y.
If y is EVEN, it will have units digit of 3^4 or 1.
If y is ODD, it will have units digit of 3^2 or 9.
3)$$3^6$$- since 6 = 4*1+2, it will have same units digit as 3^2 or 9
So our answer depends on whether y is ODD or EVEN.
Let's see the statements:
(1) x=1
Insuff
(2) y=2
Suff
B
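The cyclicity argument can be brute-forced quickly (a sketch, not part of the thread; `pow(3, e, 10)` gives the units digit of $3^e$):

```python
# With y = 2, the units digit of 3^(4x + 2y + 6) is the same for every
# positive x; knowing only x = 1 leaves it ambiguous (y odd vs. y even).
units_if_y2 = {pow(3, 4 * x + 2 * 2 + 6, 10) for x in range(1, 30)}
units_if_x1 = {pow(3, 4 * 1 + 2 * y + 6, 10) for y in range(1, 30)}
print(units_if_y2, units_if_x1)   # {9} and {1, 9}
```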
Senior Manager
Joined: 08 Dec 2015
Posts: 285
GMAT 1: 600 Q44 V27
Re: x and y are positive integers. What is the units digit of the 3^4x... [#permalink]
04 Jul 2016, 13:54
chetan2u, I think you made a typo: in the beginning of your explanation you sum the terms, but you have to multiply them to get the expression from the stem.
Senior Manager
Joined: 02 Mar 2012
Posts: 274
Schools: Schulich '16
Re: x and y are positive integers. What is the units digit of the 3^4x... [#permalink]
04 Jul 2016, 22:07
B
Since $3^n$ has 4 distinct units digits (cycle 3, 9, 7, 1), statement 2 gives $3^{10+4x}$; $4x$ is always a multiple of 4, so the units digit is the same for every $x$.
That's not the case with statement 1.
Current Student
Joined: 12 Aug 2015
Posts: 2568
Schools: Boston U '20 (M)
GRE 1: Q169 V154
Re: x and y are positive integers. What is the units digit of the 3^4x... [#permalink]
24 Nov 2016, 04:57
1
Critical Information =>
Cyclicity of 3 is four
3=> 4m+1
9=> 4m+2
7=> 4m+3
1=> 4m
So here, since $4x$ is always divisible by 4,
we need the value of y.
Hence statement 2 is sufficient
Hence B
My little alien buddy. Could have had a career with Disney, if they were not so shy.
A Mastodon instance for maths people. The kind of people who make \(\pi z^2 \times a\) jokes. Use \( and \) for inline LaTeX, and \[ and \] for display mode.
Quasar mass and accretion rates
This page on Wikipedia -- Quasars mentions that the "The largest known [quasar] is estimated to consume matter equivalent to 600 Earths per minute". However, there is no citation for this comment. How can I find out where this information came from? I've commented in the Talk section for the page.
Tricky to say for sure, but I would imagine it comes about from measurements of the luminosity and inference of the black hole mass in such systems.
The most extreme objects radiate at the Eddington luminosity, where gravitational forces on matter falling into the black hole are balanced by radiation pressure from the heated material closer in.
If infalling mass is converted to luminosity at a rate of $$L = \epsilon \dot{M} c^2,$$ where $\dot{M}$ is the mass accretion rate, $L$ is the luminosity and $\epsilon$ is an efficiency factor, which should be of order 0.1; then the mass accretion rate at the Eddington limit is given by $$\dot{M} = \frac{4\pi G M m_p}{\epsilon c \sigma_T} \simeq 1.4\times 10^{15}\frac{M}{M_{\odot}}\ {\rm kg/s},$$ where $M$ is the black hole mass, $m_p$ the mass of a proton and $\sigma_T$ is the Thomson scattering cross-section for free electrons (the major source of opacity in the infalling hot gas).
The biggest supermassive black holes in the universe have $M \simeq 10^{10}M_{\odot}$ and thus the Eddington accretion rate for such objects is about $1.4\times 10^{25}$ kg/s or about 2.3 Earths/second or 140 Earths per minute. The difference between this estimate and the one on the wikipedia page could be what is assumed for the biggest $M$ or that $\epsilon$ is a bit smaller than 0.1 or indeed that the luminosity could exceed the Eddington luminosity (because the accretion isn't spherical).
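The numbers above can be reproduced in a few lines (a sketch, not part of the original answer; SI constants rounded, $\epsilon = 0.1$):

```python
import math

# Constants in SI units
G, m_p, sigma_T, c = 6.674e-11, 1.6726e-27, 6.6524e-29, 2.998e8
M_sun, M_earth = 1.989e30, 5.972e24
eps = 0.1

M = 1e10 * M_sun                    # a 10^10 solar-mass black hole
# Eddington-limited accretion rate from the formula above, in kg/s
mdot = 4 * math.pi * G * M * m_p / (eps * c * sigma_T)
earths_per_min = mdot * 60 / M_earth
print(earths_per_min)               # roughly 140
```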
Perhaps a simpler way to get the answer is to find the most luminous quasar and divide by $\epsilon c^2$. The most luminous quasar ever seen is probably something like 3C 454.3, which reaches $\sim 5\times 10^{40}$ Watts in its highest state. Using $\epsilon = 0.1$ yields about an Earth mass per second for the accretion rate.
So perhaps the number on the wikipedia page is a little exaggerated.
• Great answer! Your link to 3C 454.3 (mdpi.com/2075-4434/5/1/3/pdf) doesn't work. EDIT: Just found a chached copy at webcache.googleusercontent.com/… – Jim421616 Jul 4 '18 at 2:29
• Using your equation for M-dot, and the mass of 3C273 of 886 million solar masses I get 1.24E18 kg/s. Does that sound right? For the Thomson scattering cross-section I used 6.65E-29 m^2 – Jim421616 Jul 4 '18 at 3:33
• @Jim421616 No, you forgot the factor of a million! – Rob Jeffries Jul 4 '18 at 3:47
• Oh yes, I just saw that I used the wrong mass for solar :) – Jim421616 Jul 4 '18 at 3:51
Here is a study from 2012 for the largest recorded quasar which quotes an output of 400 times the mass of the sun per year, which is 253 earth masses per minute (133178400 M ⊕ / 525600 mins) at 2.5 percent the speed of light, located 1 billion light years away.
https://vtnews.vt.edu/articles/2012/11/112912-science-quasar.html
It's the largest recorded quasar, I don't know the figure for the largest theoretical quasar, there are apparently hundreds of people theorizing and debating the theoretical maximum.
• The question asks about the mass accretion rate, not the size of any outflow. – Rob Jeffries Jul 6 '18 at 7:21 |
The R package surveillance implements statistical methods for the retrospective modeling and prospective monitoring of epidemic phenomena in temporal and spatio-temporal contexts. Focus is on (routinely collected) public health surveillance data, but the methods just as well apply to data from environmetrics, econometrics or the social sciences. As many of the monitoring methods rely on statistical process control methodology, the package is also relevant to quality control and reliability engineering.
## Details
The package implements many typical outbreak detection procedures such as Stroup et al. (1989), Farrington et al. (1996), Rossi et al. (1999), Rogerson and Yamada (2001), a Bayesian approach (Höhle, 2007), negative binomial CUSUM methods (Höhle and Mazick, 2009), and a detector based on generalized likelihood ratios (Höhle and Paul, 2008), see wrap.algo. Also CUSUMs for the prospective change-point detection in binomial, beta-binomial and multinomial time series are covered based on generalized linear modeling, see categoricalCUSUM. This includes, e.g., paired comparison Bradley-Terry modeling described in Höhle (2010), or paired binary CUSUM (pairedbinCUSUM) described by Steiner et al. (1999). The package contains several real-world datasets, the ability to simulate outbreak data, visualize the results of the monitoring in temporal, spatial or spatio-temporal fashion. In dealing with time series data, the fundamental data structure of the package is the S4 class sts wrapping observations, monitoring results and date handling for multivariate time series. A recent overview of the available monitoring procedures is given by Salmon et al. (2016).
For the retrospective analysis of epidemic spread, the package provides three endemic-epidemic modeling frameworks with tools for visualization, likelihood inference, and simulation. The function hhh4 offers inference methods for the (multivariate) count time series models of Held et al. (2005), Paul et al. (2008), Paul and Held (2011), Held and Paul (2012), and Meyer and Held (2014). See vignette("hhh4") for a general introduction and vignette("hhh4_spacetime") for a discussion and illustration of spatial hhh4 models. Furthermore, the fully Bayesian approach for univariate time series of counts from Held et al. (2006) is implemented as function algo.twins. Self-exciting point processes are modeled through endemic-epidemic conditional intensity functions. twinSIR (Höhle, 2009) models the susceptible-infectious-recovered (SIR) event history of a fixed population, e.g, epidemics across farms or networks; see vignette("twinSIR") for an illustration. twinstim (Meyer et al., 2012) fits spatio-temporal point process models to point patterns of infective events, e.g., time-stamped geo-referenced surveillance data on infectious disease occurrence; see vignette("twinstim") for an illustration. A recent overview of the implemented space-time modeling frameworks for epidemic phenomena is given by Meyer et al. (2017).
## Author
Michael Höhle, Sebastian Meyer, Michaela Paul
Maintainer: Sebastian Meyer <[email protected]>
## Acknowledgements
Substantial contributions of code by: Leonhard Held, Howard Burkom, Thais Correa, Mathias Hofmann, Christian Lang, Juliane Manitz, Andrea Riebler, Daniel Sabanés Bové, Maëlle Salmon, Dirk Schumacher, Stefan Steiner, Mikko Virtanen, Wei Wei, Valentin Wimmer.
Furthermore, the authors would like to thank the following people for ideas, discussions, testing and feedback: Doris Altmann, Johannes Bracher, Caterina De Bacco, Johannes Dreesman, Johannes Elias, Marc Geilhufe, Jim Hester, Kurt Hornik, Mayeul Kauffmann, Junyi Lu, Lore Merdrignac, Tim Pollington, Marcos Prates, André Victor Ribeiro Amaral, Brian D. Ripley, François Rousseu, Barry Rowlingson, Christopher W. Ryan, Klaus Stark, Yann Le Strat, André Michael Toschke, Wei Wei, George Wood, Achim Zeileis, Bing Zhang.
## References
citation(package="surveillance") gives the two main software references for the modeling (Meyer et al., 2017) and the monitoring (Salmon et al., 2016) functionalities:
• Meyer S, Held L, Höhle M (2017). “Spatio-Temporal Analysis of Epidemic Phenomena Using the R Package surveillance.” Journal of Statistical Software, 77(11), 1--55. doi: 10.18637/jss.v077.i11 .
• Salmon M, Schumacher D, Höhle M (2016). “Monitoring Count Time Series in R: Aberration Detection in Public Health Surveillance.” Journal of Statistical Software, 70(10), 1--35. doi: 10.18637/jss.v070.i10 .
Further references are listed in surveillance:::REFERENCES.
If you use the surveillance package in your own work, please do cite the corresponding publications.
## Additional documentation and illustrations of the methods are
# Class to verify if a rectification is upgradable
Today I encountered something that I have wondered about a few times before. How can I refactor a method that has this format:
private boolean isRectificationUpgradable(Rectification rectification) {
final boolean validType = rectification.getType() == RectificationType.VH
|| rectification.getType() == RectificationType.COL;
if (!validType) {
return false;
}
if (rectification.getStatus() != RectificationStatus.IN_PROGRESS && rectification.getStatus() != RectificationStatus.OPEN) {
return false;
}
if (rectification.getVehicle() == null) {
return false;
}
final List<CarPassCertificate> certificates = rectification.getVehicle().getCertificates();
if (CollectionUtils.isEmpty(certificates)) {
return false;
}
CarPassCertificate validCertificate = getValidCertificate(rectification);
if (validCertificate == null) {
return false;
}
return true;
}
When a rectification is created it has the status "open". An employee indicates that he started to handle it by putting it in the status "in progress".
The upgradable in this context means that sometimes the creator chose a certain type for the rectification when, in fact, a more accurate type could have been picked. In this case someone in the application can "upgrade" this rectification to the more specific type.
In code, this "upgraded" type is not modeled as another class as it is not really an upgrade, it always stays the same Rectification class and some shared additional fields are set to make it a bit more specific.
I think it is pretty readable but as more validations are added it can become very long. Any ideas?
• Are you on Java 8? Sep 15, 2015 at 10:03
• No, on java 7 at the moment. Sep 15, 2015 at 10:05
• How can a rectification be open and in progress and not have a car? Sep 15, 2015 at 11:59
• But since getValidCertificate returns CarPassCertificate, it must itself be fetching the car's certificates. You're duplicating code here, as far as I can see. You're doing a couple of checks which getValidCertificate will do for you. Sep 15, 2015 at 13:17
• @itsbruce, ok, now I understand. What you are saying is correct, I have moved that specific check to the getValidCertificate method, which is a better place. That cleans up the main method. Thank you. Sep 15, 2015 at 13:36
For chained, cascading conditions like you have, there's really nothing wrong with what you are doing (conceptually). A sequence of if-statements identifying invalid conditions, and returning false if invalid, is just fine.
A good idea is to always organize the most common reason to be invalid first, so that you reject values with the least amount of effort overall.
Finally, it makes little sense, other than for debugging purposes, to have temporary variables to hold state. So, for example, the following:
final boolean validType = rectification.getType() == RectificationType.VH
|| rectification.getType() == RectificationType.COL;
if (!validType) {
return false;
}
should just be:
if (rectification.getType() != RectificationType.VH
&& rectification.getType() != RectificationType.COL) {
return false;
}
h.j.k already pointed out that the last boolean statement can often be implemented as a single return, but even that is something that I feel is optional since the pattern of conditions is often more important to keep consistent, than the last condition being "small". Consistent conditions allow for easier maintenance of the code too (adding a condition is a simple add, not a modification of an existing short return).
Putting these suggestions together I would happily "Pass" the following code in a review:
private boolean isRectificationUpgradable(Rectification rectification) {
if (rectification.getType() != RectificationType.VH
&& rectification.getType() != RectificationType.COL) {
return false;
}
if (rectification.getStatus() != RectificationStatus.IN_PROGRESS
&& rectification.getStatus() != RectificationStatus.OPEN) {
return false;
}
if (rectification.getVehicle() == null) {
return false;
}
if (CollectionUtils.isEmpty(rectification.getVehicle().getCertificates())) {
return false;
}
if (getValidCertificate(rectification) == null) {
return false;
}
return true;
}
• Thanks for pointing out that the variables should not be used. I first thought that it would give more "meaning" to what that if is checking, but in this specific case it doesn't really do that and just clutters the method. After adding a few suggestions from other answers I decided that the code is clear enough right now (like you suggested). I'm accepting yours as the best answer. Sep 15, 2015 at 13:26
This looks as if it is part of the upgrade code itself. I think you have too low a level of abstraction in here and you are also mixing different concerns. Also, your workflow just seems fragile.
# Cars and certificates
You check that there is a car, then you check that it has any certificates, then you pass the rectification to getValidCertificate. Either you only need to pass the certificate list or you don't need to do the "has car/certificates" check yourself, since getValidCertificate clearly has to do that itself to be able to return a CarPassCertificate (or null).
(Since starting this answer, we've discussed this issue in comments and I see you've agreed that only getValidCertificate needs to do the check. Nice.)
# Workflow
How can a rectification be both open and in progress and yet not even have a car? Even if it is legitimate to have one open with no car, I can't see how it is valid for an in-progress task. I really think your type hierarchy may not properly model your workflow.
If a rectification may exist for a period without a car (let's think of it as a rectification request) but
1. Rectifications can only be actioned once a car is added
2. Cars are not removed from Rectifications once added
then you should have two different classes to represent them. The first type should not have a car field at all. The second should not be creatable without being passed a non-null car object. Depending on the rest of your code, the second class could be a subclass of the first (with the addition of a car field amongst other things) or share an interface with the first (preferred option of those two) or the request could be an entirely separate class, replaced with a car-containing rectification by a factory.
Either way, isRectificationUpgradable only need specify the second type as input. And immediately a whole category of errors is eliminated. There is no need to check for the existence of a car when it is guaranteed to be present.
If you do it this way, you never have to check for the presence of a real car. Any method that depends on the existence of a car simply has to specify the car-owning type.
I would not be surprised if other stages in the lifecycle of rectifications can be treated this way. If you create a common rectification interface but
• only the OpenRectification class/interface (which has no employee field) has a toAssigned method which requires an employee object and returns an assignedRectification object
• only the assignedRectification type has changeAssignedEmployee and toInProgress methods
• only the type returned by toInProgress has methods which can be used to (for example) log work done
then many more categories of error evaporate.
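A minimal Java sketch of this idea (all class and method names here are illustrative guesses, not taken from the original code base): the open request deliberately has no vehicle accessor, and the actionable type cannot be constructed without a non-null vehicle.

```java
final class Vehicle {
}

// A rectification request that does not yet have a vehicle:
// there is deliberately no getVehicle() here.
final class OpenRectification {
    AssignedRectification assignVehicle(Vehicle vehicle) {
        if (vehicle == null) {
            throw new IllegalArgumentException("vehicle required");
        }
        return new AssignedRectification(vehicle);
    }
}

// Cannot exist without a vehicle, so callers such as
// isRectificationUpgradable never need a null check.
final class AssignedRectification {
    private final Vehicle vehicle;

    AssignedRectification(Vehicle vehicle) {
        this.vehicle = vehicle;
    }

    Vehicle getVehicle() {
        return vehicle;
    }
}
```

With this split, isRectificationUpgradable would accept an AssignedRectification, and the getVehicle() == null branch disappears entirely.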
• Thank you so much for this answer @itsbruce. I really like how you went beyond what I was asking and question the whole workflow itself. I never thought about it this way before and it seems I am not leveraging the power of object oriented design enough. While your points are absolutely right, it is too late in this project to make such fundamental changes but you got the wheels in my head cranking and I'll be sure to think more in this way in the next one. Sep 16, 2015 at 7:21
1. Stick to one comparison style
You have two similar comparisons here (I'm assuming they are enum - or gasp, primitive - values for == to work correctly):
!(rectification.getType() == RectificationType.VH
|| rectification.getType() == RectificationType.COL)
rectification.getStatus() != RectificationStatus.IN_PROGRESS
&& rectification.getStatus() != RectificationStatus.OPEN
You should choose whether you want to 'describe' such comparisons as 'not either the following', or 'not this and not that'. When you stick to one description, you don't have to context-switch between the various logical operators... just one common understanding will do. :) For illustration, here is the same comparison with the latter:
if (rectification.getType() != RectificationType.VH
&& rectification.getType() != RectificationType.COL) {
return false;
}
if (rectification.getStatus() != RectificationStatus.IN_PROGRESS
&& rectification.getStatus() != RectificationStatus.OPEN) {
return false;
}
If you find doing multiple logical operators is getting out of hand, you can also use a Collection to help you out (once again, assuming enum values here, else something like a Set<Integer>):
// skipping 'Rectification*.' on the values for brevity
Set<RectificationType> validType = EnumSet.of(VH, COL);
Set<RectificationStatus> validStatus = EnumSet.of(IN_PROGRESS, OPEN);
if (!validType.contains(rectification.getType()) ||
        !validStatus.contains(rectification.getStatus())) {
    return false;
}
2. Group comparisons together
Next, you check whether rectification.getVehicle() is non-null, and if so whether its getCertificates() returns empty. Therefore, you may want to consider grouping this into a single if-statement:
if (rectification.getVehicle() == null ||
CollectionUtils.isEmpty(rectification.getVehicle().getCertificates())) {
return false;
}
3. Consolidate final if-else
Usually, if (pun unintended) there is a final if statement with a return statement, followed by a final return, one can easily consolidate that as one return statement using the ternary operator. In your case, this results in:
return getValidCertificate(rectification) != null;
• Regarding point 3, this looks like code into which successive validation rules have been added and which may receive more. If further validation rules are required but are only worth consideration where the car is valid, then the OP is probably better off leaving that final return alone. Although if that is true, this code really needs to be better organised. Sep 15, 2015 at 11:27
• @h.j.k, thanks for pointing out the differences between the boolean operators, I chose to change them to a common understanding like you suggested and introduced a method on the enum. (I'll update my question in a few minutes). I also grouped the comparisons together. As for the final if else, I left that intact as it looks more readable to me that way. Sep 15, 2015 at 13:22
• @geoffreydv glad to help. :) Sep 15, 2015 at 14:03
I suggest you delegate the responsibility for each rule to evaluate itself. Each rule is implemented in a separate class.
public interface IRule<T> {
boolean test(T value);
}
public final class RectificationTypeRule implements IRule<Rectification> {
@Override
public final boolean test(Rectification r) {
//TODO: test on r.getType() as you need
}
}
public final class RectificationStatusRule implements IRule<Rectification> {
@Override
public final boolean test(Rectification r) {
//TODO: test on r.getStatus() as you need
}
}
...
Your method could then use a collection of rules (these rules can be provided by a factory, for example) and test against each one to see if a Rectification is valid (or upgradable in your case).
private boolean isRectificationUpgradable(Rectification rectification) {
    for (IRule<Rectification> rule : rules) {
        if (!rule.test(rectification)) {
            return false;
        }
    }
    return true;
}
You can add as many rules as you want in the future and the readability of the method won't be impacted.
Plus it allows you to switch from one rule set to another at runtime (making your program easier to evolve).
• The user is not on Java 8, so the Predicate is not available. Still this is a reasonable review. I would suggest though, that you use the functional-style of predicate creation, though. Sep 15, 2015 at 12:56
• @rolfl I've added a custom interface that mimic the Java8 Predicate. Could you detail what you mean by "functional-style of predicate creation" please ? Sep 15, 2015 at 13:02
• I think this is a great answer but for my specific case I think it is too complicated. It would take someone more than just a quick glance at the method to understand what is going on. When there are a bunch more rules I would choose this method, thanks! Sep 15, 2015 at 13:16
• @geoffreydv does isRectificationUpgradable form part of validation in the sense of object validation (that is to say, an upgrade-type object will not be created if this check fails) or is it validation in the sense of enforcing business rules? If the former, the solution here probably is overkill. If the latter, it's a very good solution. Sep 15, 2015 at 14:43
# Kernel (algebra)
In the various branches of mathematics that fall under the heading of abstract algebra, the kernel of a homomorphism measures the degree to which the homomorphism fails to be injective. An important special case is the kernel of a matrix, also called the "null space".
The definition of kernel takes various forms in various contexts. But in all of them, the kernel of a homomorphism is trivial (in a sense relevant to that context) if and only if the homomorphism is injective. The fundamental theorem on homomorphisms (or first isomorphism theorem) is a theorem, again taking various forms, that applies to the quotient algebra defined by the kernel.
In this article, we first survey kernels for some important types of algebraic structures; then we give general definitions from universal algebra for generic algebraic structures.
## Survey of examples
### Linear operators
Let V and W be vector spaces and let T be a linear transformation from V to W. If $0_W$ is the zero vector of W, then the kernel of T is the preimage of the singleton set $\{0_W\}$; that is, the subset of V consisting of all those elements of V that are mapped by T to the element $0_W$. The kernel is usually denoted "ker T", or some variation thereof:

$$\ker T = \{\mathbf{v} \in V : T\mathbf{v} = \mathbf{0}_W\}.$$

Since a linear transformation preserves zero vectors, the zero vector $0_V$ of V must belong to the kernel. The transformation T is injective if and only if its kernel is only the singleton set $\{0_V\}$.
It turns out that ker T is always a linear subspace of V. Thus, it makes sense to speak of the quotient space V/(ker T). The first isomorphism theorem for vector spaces states that this quotient space is naturally isomorphic to the image of T (which is a subspace of W). As a consequence, the dimension of V equals the dimension of the kernel plus the dimension of the image.
If V and W are finite-dimensional and bases have been chosen, then T can be described by a matrix M, and the kernel can be computed by solving the homogeneous system of linear equations $M\mathbf{v} = \mathbf{0}$. In this representation, the kernel corresponds to the null space of M. The dimension of the null space, called the nullity of M, is given by the number of columns of M minus the rank of M, as a consequence of the rank-nullity theorem.
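As a concrete illustration (my own example, not from the original article), consider the map represented by a 2 × 3 matrix:

```latex
T \colon \mathbb{R}^3 \to \mathbb{R}^2, \qquad
T(x, y, z) = (x + y,\; y + z), \qquad
M = \begin{pmatrix} 1 & 1 & 0 \\ 0 & 1 & 1 \end{pmatrix}.
```

Solving $M\mathbf{v} = \mathbf{0}$ gives $x = -y$ and $z = -y$, so $\ker T = \operatorname{span}\{(1, -1, 1)\}$. The rank of M is 2 and its nullity is 1, and indeed 2 + 1 = 3, the number of columns of M, as the rank-nullity theorem requires.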
Solving homogeneous differential equations often amounts to computing the kernel of certain differential operators. For instance, in order to find all twice-differentiable functions f from the real line to itself such that

$$x f''(x) + 3 f'(x) = f(x),$$

let V be the space of all twice-differentiable functions, let W be the space of all functions, and define a linear operator T from V to W by

$$(Tf)(x) = x f''(x) + 3 f'(x) - f(x)$$

for f in V and x an arbitrary real number. Then all solutions to the differential equation are in ker T.
One can define kernels for homomorphisms between modules over a ring in an analogous manner. This includes kernels for homomorphisms between abelian groups as a special case. This example captures the essence of kernels in general abelian categories; see Kernel (category theory).
### Group homomorphisms
Let G and H be groups and let f be a group homomorphism from G to H. If $e_H$ is the identity element of H, then the kernel of f is the preimage of the singleton set $\{e_H\}$; that is, the subset of G consisting of all those elements of G that are mapped by f to the element $e_H$. The kernel is usually denoted "ker f" (or a variation). In symbols:

$$\ker f = \{g \in G : f(g) = e_H\}.$$
Since a group homomorphism preserves identity elements, the identity element $e_G$ of G must belong to the kernel. The homomorphism f is injective if and only if its kernel is only the singleton set $\{e_G\}$.
It turns out that ker f is not only a subgroup of G but in fact a normal subgroup. Thus, it makes sense to speak of the quotient group G/(ker f). The first isomorphism theorem for groups states that this quotient group is naturally isomorphic to the image of f (which is a subgroup of H).
In the special case of abelian groups, this works in exactly the same way as in the previous section.
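A standard concrete example (mine, not from the original article) is reduction modulo n:

```latex
f \colon (\mathbb{Z}, +) \to (\mathbb{Z}/n\mathbb{Z}, +), \qquad
f(k) = k \bmod n, \qquad
\ker f = n\mathbb{Z}.
```

Here $n\mathbb{Z}$ is a normal subgroup (automatically, since $\mathbb{Z}$ is abelian), and the first isomorphism theorem gives $\mathbb{Z}/n\mathbb{Z} \cong \operatorname{im} f$.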
### Ring homomorphisms
Let R and S be rings (assumed unital) and let f be a ring homomorphism from R to S. If $0_S$ is the zero element of S, then the kernel of f is the preimage of the singleton set $\{0_S\}$; that is, the subset of R consisting of all those elements of R that are mapped by f to the element $0_S$. The kernel is usually denoted "ker f" (or a variation). In symbols:

$$\ker f = \{r \in R : f(r) = 0_S\}.$$
Since a ring homomorphism preserves zero elements, the zero element $0_R$ of R must belong to the kernel. The homomorphism f is injective if and only if its kernel is only the singleton set $\{0_R\}$.
It turns out that, although ker f is generally not a subring of R since it may not contain the multiplicative identity, it is nevertheless a two-sided ideal of R. Thus, it makes sense to speak of the quotient ring R/(ker f). The first isomorphism theorem for rings states that this quotient ring is naturally isomorphic to the image of f (which is a subring of S).
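A concrete example (my own, not from the original article) is the evaluation homomorphism on a polynomial ring, which also shows why the kernel is an ideal rather than a subring:

```latex
\varepsilon \colon R[x] \to R, \qquad \varepsilon(p) = p(0), \qquad
\ker \varepsilon = (x) = \{\, p \in R[x] : p(0) = 0 \,\}.
```

The ideal $(x)$ does not contain the multiplicative identity 1, so it is not a subring, yet $R[x]/(x) \cong R$ by the first isomorphism theorem.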
To some extent, this can be thought of as a special case of the situation for modules, since these are all bimodules over a ring R:
* R itself;
* any two-sided ideal of R (such as ker f);
* any quotient ring of R (such as R/(ker f)); and
* the codomain of any ring homomorphism whose domain is R (such as S, the codomain of f).

However, the isomorphism theorem gives a stronger result, because ring isomorphisms preserve multiplication while module isomorphisms (even between rings) in general do not.
This example captures the essence of kernels in general Mal'cev algebras.
### Monoid homomorphisms
Let M and N be monoids and let f be a monoid homomorphism from M to N. Then the kernel of f is the subset of the direct product M × M consisting of all those ordered pairs of elements of M whose components are both mapped by f to the same element in N. The kernel is usually denoted "ker f" (or a variation). In symbols:

$$\ker f = \{(m, m') \in M \times M : f(m) = f(m')\}.$$
Since f is a function, the elements of the form (m, m) must belong to the kernel. The homomorphism f is injective if and only if its kernel is only the diagonal set {(m, m) : m in M}.
It turns out that ker f is an equivalence relation on M, and in fact a congruence relation. Thus, it makes sense to speak of the quotient monoid M/(ker f). The first isomorphism theorem for monoids states that this quotient monoid is naturally isomorphic to the image of f (which is a submonoid of N).
This is very different in flavour from the above examples. In particular, the preimage of the identity element of N is not enough to determine the kernel of f. This is because monoids are not Mal'cev algebras.
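A small example of this failure (a standard construction, not from the original article): map the additive monoid of natural numbers onto the two-element monoid $(\{0,1\}, \max)$, whose identity element is 0.

```latex
f \colon (\mathbb{N}, +) \to (\{0, 1\}, \max), \qquad
f(0) = 0, \qquad f(n) = 1 \text{ for } n \ge 1.
```

The preimage of the identity element is just $\{0\}$, yet f is far from injective: the kernel congruence relates every pair of positive integers to one another.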
## Universal algebra
All the above cases may be unified and generalized in universal algebra.
### General case
Let A and B be algebraic structures of a given type and let f be a homomorphism of that type from A to B. Then the kernel of f is the subset of the direct product A × A consisting of all those ordered pairs of elements of A whose components are both mapped by f to the same element in B. The kernel is usually denoted "ker f" (or a variation). In symbols:

$$\ker f = \{(a, a') \in A \times A : f(a) = f(a')\}.$$
Since f is a function, the elements of the form (a, a) must belong to the kernel. The homomorphism f is injective if and only if its kernel is only the diagonal set {(a, a) : a in A}.
It turns out that ker f is an equivalence relation on A, and in fact a congruence relation. Thus, it makes sense to speak of the quotient algebra A/(ker f). The first isomorphism theorem in general universal algebra states that this quotient algebra is naturally isomorphic to the image of f (which is a subalgebra of B).
Note that the definition of kernel here (as in the monoid example) doesn't depend on the algebraic structure; it is a purely set-theoretic concept. For more on this general concept, outside of abstract algebra, see kernel of a function.
### Mal'cev algebras
In the case of Mal'cev algebras, this construction can be simplified. Every Mal'cev algebra has a special neutral element (the zero vector in the case of vector spaces, the identity element in the case of groups, and the zero element in the case of rings or modules). The characteristic feature of a Mal'cev algebra is that we can recover the entire equivalence relation ker f from the equivalence class of the neutral element.
To be specific, let A and B be Mal'cev algebraic structures of a given type and let f be a homomorphism of that type from A to B. If $e_B$ is the neutral element of B, then the kernel of f is the preimage of the singleton set $\{e_B\}$; that is, the subset of A consisting of all those elements of A that are mapped by f to the element $e_B$. The kernel is usually denoted "ker f" (or a variation). In symbols:

$$\ker f = \{a \in A : f(a) = e_B\}.$$
Since a Mal'cev algebra homomorphism preserves neutral elements, the neutral element $e_A$ of A must belong to the kernel. The homomorphism f is injective if and only if its kernel is only the singleton set $\{e_A\}$.
The notion of ideal generalises to any Mal'cev algebra (as linear subspace in the case of vector spaces, normal subgroup in the case of groups, two-sided ideal in the case of rings, and submodule in the case of modules). It turns out that although ker f may not be a subalgebra of A, it is nevertheless an ideal. Then it makes sense to speak of the quotient algebra A/(ker f). The first isomorphism theorem for Mal'cev algebras states that this quotient algebra is naturally isomorphic to the image of f (which is a subalgebra of B).
The connection between this and the congruence relation for more general types of algebras is as follows. First, the kernel-as-an-ideal is the equivalence class of the neutral element $e_A$ under the kernel-as-a-congruence. For the converse direction, we need the notion of quotient in the Mal'cev algebra (which is division on either side for groups and subtraction for vector spaces, modules, and rings). Using this, elements a and a' of A are equivalent under the kernel-as-a-congruence if and only if their quotient a/a' is an element of the kernel-as-an-ideal.
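Spelling this out in the group case (my own unpacking of the statement above), where the quotient of a and a' is $a a'^{-1}$:

```latex
a \equiv a' \pmod{\ker f}
\;\Longleftrightarrow\; f(a) = f(a')
\;\Longleftrightarrow\; f(a a'^{-1}) = e_B
\;\Longleftrightarrow\; a a'^{-1} \in \ker f,
```

so the congruence classes are exactly the cosets of the kernel-as-an-ideal (the normal subgroup ker f).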
Wikimedia Foundation. 2010.
Problem
The elements of a simplified clam-shell bucket for a dredge are shown. With the block at O considered fixed and with the constant velocity of the control cable at C equal to 0.5 m/s, determine the angular acceleration of the right-hand bucket jaw as the bucket jaws are closing.
#### Step-by-Step Solution
Solution 1
In $$\triangle \mathrm{OBC}$$, apply sine law and calculate the angle made by link $$O B$$.
\begin{aligned} \frac{\overline{B C}}{\sin \beta} &=\frac{\overline{O B}}{\sin 67.5^{\circ}} \\ \sin \beta &=\frac{0.5 \times \sin 67.5^{\circ}}{0.6} \\ \beta &=\sin ^{-1}\left(\frac{0.5 \times \sin 67.5^{\circ}}{0.6}\right) \\ &=50.34^{\circ} \end{aligned}
Determine the angle $$\angle O B C$$ using relation,
\begin{aligned} \angle O B C &=180^{\circ}-\left(67.5^{\circ}+50.34^{\circ}\right) \\ &=62.16^{\circ} \end{aligned}
Apply sine law and calculate the magnitude $$\overline{O C}$$.
\begin{aligned} \frac{\overline{O C}}{\sin 62.16^{\circ}} &=\frac{O B}{\sin 67.5^{\circ}} \\ \overline{O C} &=\frac{0.6 \times \sin 62.16^{\circ}}{\sin 67.5^{\circ}} \\ &=0.5742 \mathrm{~m} \end{aligned}
Calculate the magnitude for $$\overline{C C_{1}}$$ and $$\overline{O C_{1}}$$ from $$\Delta O C C_{1}$$ as follows:
\begin{aligned} \overline{C C_{1}} &=\overline{O C} \tan \beta \\ &=0.5742 \times \tan 50.34^{\circ} \\ &=0.6926 \mathrm{~m} \end{aligned}
Determine the magnitude of $$\left(\overline{O C_{1}}\right)$$.
\begin{aligned} \overline{O C_{1}} &=\frac{\overline{O C}}{\cos \beta} \\ &=\frac{0.5742}{\cos 50.34^{\circ}} \\ &=0.8996 \mathrm{~m} \end{aligned}
From geometry calculate the length of $$\overline{B C_{1}}$$ as follows:
\begin{aligned} \overline{B C_{1}} &=\overline{O C_{1}}-\overline{O B} \\ &=(0.8996-0.6) \\ &=0.2996 \mathrm{~m} \end{aligned}
Calculate the angular velocity of bucket $$\left(\omega_{B C}\right)$$ using relation,
$$\omega_{B C}=\frac{v_{C}}{\overline{C C_{1}}}$$

Substitute $$0.5 \mathrm{~m} / \mathrm{s}$$ for $$v_{C}$$ and $$0.6926 \mathrm{~m}$$ for $$\overline{C C_{1}}$$.
\begin{aligned} \omega_{B C} &=\frac{0.5}{0.6926} \\ &=0.7219 \mathrm{rad} / \mathrm{s} \end{aligned}
Determine the linear velocity at point $$B$$ as follows:

\begin{aligned} v_{B} &=\omega_{B C} \times \overline{B C_{1}} \\ &=0.7219 \times 0.2996 \\ &=0.2162 \mathrm{~m} / \mathrm{s} \end{aligned}
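As a sanity check, the velocity chain above can be reproduced numerically. This is my own verification sketch (the class name and tolerances are mine, not part of the textbook solution), assuming the given geometry OB = 0.6 m, BC = 0.5 m, a 67.5° angle at C, and a cable speed of 0.5 m/s.

```java
public class ClamShellVelocityCheck {
    public static void main(String[] args) {
        double OB = 0.6, BC = 0.5, vC = 0.5;      // link lengths (m), cable speed (m/s)
        double alpha = Math.toRadians(67.5);      // angle at C in triangle OBC

        double beta = Math.asin(BC * Math.sin(alpha) / OB);  // sine rule at O
        double angleOBC = Math.PI - alpha - beta;            // angle sum in the triangle
        double OC  = OB * Math.sin(angleOBC) / Math.sin(alpha);
        double CC1 = OC * Math.tan(beta);        // distance from C to instant centre C1
        double OC1 = OC / Math.cos(beta);
        double BC1 = OC1 - OB;

        double omega = vC / CC1;                 // angular velocity of link BC (rad/s)
        double vB = omega * BC1;                 // speed of joint B (m/s)

        System.out.printf("beta = %.2f deg, OC = %.4f m, omega = %.4f rad/s, vB = %.4f m/s%n",
                Math.toDegrees(beta), OC, omega, vB);
    }
}
```

Running it reproduces beta ≈ 50.3°, OC ≈ 0.574 m, omega ≈ 0.722 rad/s and vB ≈ 0.217 m/s, matching the hand computation up to intermediate rounding.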
Determine the normal component of acceleration $$\left(a_{B / C}\right)_{n}$$ at $$B$$ relative to $$C$$ using relation,
$$\left(a_{B / C}\right)_{n}=\left(\omega_{B C}\right)^{2} \times \overline{B C}$$
Substitute $$0.7219 \mathrm{~rad} / \mathrm{s}$$ for $$\omega_{B C}$$ and $$0.5 \mathrm{~m}$$ for $$\overline{B C}$$.

\begin{aligned} \left(a_{B / C}\right)_{n} &=(0.7219)^{2} \times 0.5 \\ &=0.2605 \mathrm{~m} / \mathrm{s}^{2} \end{aligned}
Determine the normal component of acceleration $$\left(a_{B / O}\right)_{n}$$ at $$B$$ relative to $$O$$ using relation,
\begin{aligned} \left(a_{B / O}\right)_{n} &=\frac{v_{B}^{2}}{O B} \\ &=\frac{(0.2162)^{2}}{0.6} \\ &=0.078 \mathrm{~m} / \mathrm{s}^{2} \end{aligned}
Determine the acceleration at point $$B$$ considering link $$C B$$ using relation as follows:
$$a_{B}=a_{C}+\left(a_{B / C}\right)_{n}+\left(a_{B / C}\right)_{t}$$

Substitute 0 for $$a_{C}$$ in the relation.

$$a_{B}=\left(a_{B / C}\right)_{n}+\left(a_{B / C}\right)_{t}$$
Determine the acceleration at point $$B$$ considering link $$O B$$ using relation as follows:
$$\overline{a_{B}}=\left(\overline{a_{B / O}}\right)_{n}+\left(\overline{a_{B / O}}\right)_{t}$$
Figure: vector diagram of the acceleration components.
Determine the acceleration at point $$B$$ for link $$B C$$ along $$x$$ direction as follows:
$$\left(a_{B}\right)_{x}=\left(a_{B / C}\right)_{n} \sin 67.5^{\circ}+\left(a_{B / C}\right)_{t} \sin 22.5^{\circ} \ldots \ldots(1)$$
Determine the acceleration at point $$B$$ for link $$O B$$ along $$x$$ direction as follows:
$$\left(a_{B}\right)_{x}=\left(a_{B / O}\right)_{n} \sin 50.34^{\circ}+\left(a_{B / O}\right)_{t} \sin 39.66^{\circ} \ldots \ldots(2)$$
From equations (1) and (2) determine the relation for tangential acceleration at link $$C B$$ and $$O B$$.
$$\left(a_{B / C}\right)_{n} \sin 67.5^{\circ}+\left(a_{B / C}\right)_{t} \sin 22.5^{\circ}=\left(a_{B / O}\right)_{n} \sin 50.34^{\circ}+\left(a_{B / O}\right)_{t} \sin 39.66^{\circ}$$
Substitute, $$0.078 \mathrm{~m} / \mathrm{s}^{2}$$ for $$\left(a_{B / O}\right)_{n}$$ and $$0.2605 \mathrm{~m} / \mathrm{s}^{2}$$ for $$\left(a_{B / C}\right)_{n}$$ in relation.
\begin{aligned} &(0.2605 \times 0.9238)+\left(\left(a_{B / C}\right)_{t} \times 0.3826\right)=(0.078 \times 0.7698)+\left(a_{B / O}\right)_{t} \times 0.6382 \\ &\left(a_{B / O}\right)_{t}=0.5994\left(a_{B / C}\right)_{t}+0.2829 \ldots \ldots(3) \end{aligned}
Determine the acceleration at point $$B$$ for link $$B C$$ along $$y$$ direction as follows:
$$\left(a_{B}\right)_{y}=\left(a_{B / C}\right)_{n} \cos 67.5^{\circ}-\left(a_{B / C}\right)_{t} \cos 22.5^{\circ} \ldots \ldots(4)$$
Determine the acceleration at point $$B$$ for link $$O B$$ along $$y$$ direction as follows:
$$\left(a_{B}\right)_{y}=\left(a_{B / O}\right)_{t} \cos 39.66^{\circ}-\left(a_{B / O}\right)_{n} \cos 50.34^{\circ} \ldots \ldots(5)$$
From equations (4) and (5) determine the relation for tangential acceleration at link $$C B$$ and $$O B$$.
Substitute, $$0.078 \mathrm{~m} / \mathrm{s}^{2}$$ for $$\left(a_{B / O}\right)_{n}$$ and $$0.2605 \mathrm{~m} / \mathrm{s}^{2}$$ for $$\left(a_{B / C}\right)_{n}$$ in relation,
\begin{aligned} &(0.2605 \times 0.3826)-\left(\left(a_{B / C}\right)_{t} \times 0.9238\right)=\left(\left(a_{B / O}\right)_{t} \times 0.7698\right)-(0.078 \times 0.6382) \\ &\left(a_{B / O}\right)_{t}=0.1940-1.2\left(a_{B / C}\right)_{t} \ldots \ldots(6) \end{aligned}
From equations $$(3)$$ and $$(6)$$, calculate the tangential acceleration of link $$B C$$.
\begin{aligned} &0.5994\left(a_{B / C}\right)_{t}+0.2829=0.1940-1.2\left(a_{B / C}\right)_{t} \\ &\left(a_{B / C}\right)_{t}=-0.0494 \mathrm{~m} / \mathrm{s}^{2} \end{aligned}
Calculate the angular acceleration for link $$C B$$,
\begin{aligned} \alpha_{B C} &=\frac{\left(a_{B / C}\right)_{t}}{\overline{B C}} \\ &=\frac{-0.0494}{0.5} \\ &=-0.0988 \mathrm{~rad} / \mathrm{s}^{2} \end{aligned}
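As a numerical cross-check, the whole chain above can be scripted (a sketch in JavaScript; the lengths, the velocity $$v_{C}$$, and the angles are the measured values quoted in the solution):

```javascript
// Numerical check of the velocity/acceleration chain above.
const deg = d => d * Math.PI / 180;

// Measured inputs from the construction: vC (m/s), CC1, BC1, OB, BC (m)
const vC = 0.5, CC1 = 0.6926, BC1 = 0.2996, OB = 0.6, BC = 0.5;

const wBC = vC / CC1;          // angular velocity of link BC
const vB = wBC * BC1;          // linear velocity at B
const aBCn = wBC * wBC * BC;   // normal accel. of B relative to C
const aBOn = vB * vB / OB;     // normal accel. of B relative to O

// Equate the x-components (1)=(2) and y-components (4)=(5) and solve the
// resulting 2x2 linear system for t1 = (aB/C)t and t2 = (aB/O)t.
const a11 = Math.sin(deg(22.5)),  a12 = -Math.sin(deg(39.66));
const a21 = -Math.cos(deg(22.5)), a22 = -Math.cos(deg(39.66));
const b1 = aBOn * Math.sin(deg(50.34)) - aBCn * Math.sin(deg(67.5));
const b2 = -aBOn * Math.cos(deg(50.34)) - aBCn * Math.cos(deg(67.5));
const det = a11 * a22 - a12 * a21;
const t1 = (b1 * a22 - a12 * b2) / det;  // tangential accel. (aB/C)t
const alphaBC = t1 / BC;                 // angular acceleration of link CB

console.log(wBC.toFixed(4), vB.toFixed(4), aBCn.toFixed(4),
            aBOn.toFixed(4), t1.toFixed(4), alphaBC.toFixed(4));
```

Running it reproduces $$\omega_{B C} \approx 0.7219$$, $$\left(a_{B / C}\right)_{t} \approx -0.0494 \mathrm{~m} / \mathrm{s}^{2}$$ and $$\alpha_{B C} \approx -0.099 \mathrm{~rad} / \mathrm{s}^{2}$$, matching the hand calculation to rounding.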
## On Manin’s conjecture for singular del Pezzo surfaces of degree 4. I. (English) Zbl 1132.14019
This paper concerns Manin’s conjecture for a certain singular del Pezzo surface $$X$$ of degree 4, given by the equations $x_0x_1- x^2_2= x_0x_4- x_1x_2+ x^2_3= 0.$ This has just one singular point, which is of type $$D_5$$. Let $$N(B)$$ be the standard counting function for rational points of height at most $$B$$, for the open subset of $$X$$ for which one excludes points on the line $$x_0= x_2= x_3= 0$$.
It is shown that Manin’s conjecture holds for this surface, in the strong form $N(B)= BP(\log B)+ O(B^\theta),$ for any $$\theta> 11/12$$. Here $$P$$ is a certain polynomial of degree 5, whose leading term is explicitly calculated, and shown to agree with the prediction by Peyre. Moreover, it is shown that the height zeta-function has a meromorphic continuation to $$\sigma> 5/6$$, and is holomorphic for $$\sigma> 9/10$$ apart from a sixth-order pole at $$s= 1$$.
For the proof, the authors establish a bijection between the points under consideration and the integral points on the universal torsor associated to $$X$$. These latter points are then handled by analytic techniques, including an estimate for the average of the fractional part of $$(a- bx^2)/q$$, as the integer $$x$$ varies.
### MSC:

- 14G05 Rational points
- 14J26 Rational and ruled surfaces
- 11D45 Counting solutions of Diophantine equations
# Exercise 30
## Egg carton
Assume your cost function $$J$$ is the following function of two parameters $$(x,y)$$:
$$J(x,y) = -20 e^{A} - e^{B} + 20 + e$$
where
$$A = -0.2 \sqrt{0.5 (x^{2} + y^{2})}$$
and
$$B = 0.5 \left[ \mathrm{cos}(2 \pi x) + \mathrm{cos}(2 \pi y) \right]$$
### Map the cost landscape
Compute the cost function $$J$$ for values of $$(x,y)$$ ranging from -10.0 to 10.0, and plot the cost landscape (plot $$J$$ as a function of $$(x,y)$$).
Hint: It should look something like this.
### Optimize for (x,y)
Use whatever optimization method you wish, to find the values of $$(x,y)$$ that minimize the cost function $$J$$. Justify that you have found the global minimum and not a local minimum.
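The exercise leaves the language open; as one possible sketch (here in JavaScript, with the grid step chosen arbitrarily), a brute-force grid search both maps the landscape and supports the global-minimum argument: the surface is dotted with local "egg carton" dimples, but only the basin at the origin reaches $$J = 0$$.

```javascript
// The cost function from the exercise statement.
const J = (x, y) => {
  const A = -0.2 * Math.sqrt(0.5 * (x * x + y * y));
  const B = 0.5 * (Math.cos(2 * Math.PI * x) + Math.cos(2 * Math.PI * y));
  return -20 * Math.exp(A) - Math.exp(B) + 20 + Math.E;
};

// Brute-force grid search over [-10, 10]^2. A step of 0.1 is fine enough
// to separate the global basin at the origin from the ring of local minima.
let best = { x: 0, y: 0, cost: Infinity };
for (let x = -10; x <= 10; x += 0.1) {
  for (let y = -10; y <= 10; y += 0.1) {
    const c = J(x, y);
    if (c < best.cost) best = { x, y, cost: c };
  }
}
console.log(best); // ≈ { x: 0, y: 0, cost: 0 }
```

Plotting the grid values as a surface reproduces the egg-carton landscape, and the minimum at $$(0,0)$$ with cost $$0$$ beats every local dimple (the nearest ones, e.g. around $$(\pm 1, 0)$$, sit near $$J \approx 2.6$$), which justifies that it is the global minimum rather than a local one.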
Paul Gribble | fall 2014 |
# Scoring a Behavioral Survey
I've written a script to calculate the user's scores as a part of a larger survey application written in React/Redux. The questions are all based on the Likert Scale, i.e. "How much do you agree with this statement?" from 1 - 5, with 5 being the most. However, the answers are presented and stored as words.
The questions are sorted into various categories, and within each category the question values are added up and averaged.
So the script first sorts the questions into the categories, then maps the answers to the number values associated with them so they can be averaged. Each average score is converted to a verbal value of High, Medium, or Low, which is then used to get a message that corresponds with that score.
The only caveat is that for the final category, it's scored a bit differently.
The reason I'm posting this here is because I feel my code is quite clunky and could be a lot cleaner. I especially don't like the way that I have hard-coded in the names of the categories in the various functions. That seems very hard to maintain or update!
Any help on making this cleaner and more flexible would be greatly appreciated!
I've included at the top sample data so you can get it to work without any import statements or anything like that.
const results = {
"results": [
{
"id": 1,
"category": "TM",
"messages": [
{
"score": "High",
"message": "You are a team player! You enjoy collaborating with others to create better outcomes together."
},
{
"score": "Medium",
"message": "You enjoy working with others, but also appreciate working solo to accomplish your goals."
},
{
"score": "Low",
"message": "You work best on your own to accomplish your goals and find that others slow you down"
}
]
},
{
"id": 2,
"category": "CM",
"messages": [
{
"score": "High",
"message": "You approach conflict head on and try to bring everyone onto the same page. When someone needs a mediator, they call you!"
},
{
"score": "Medium",
"message": "You work to resolve conflict if pushed, but avoid it if you can. "
},
{
"score": "Low",
"message": "You prefer to keep to yourself and avoid conflict."
}
]
},
{
"id": 3,
"category": "CO",
"messages": [
{
"score": "High",
"message": "You enjoy helping others to learn and develop."
},
{
"score": "Medium",
"message": "You support others when needed, but don’t offer assistance without prompting."
},
{
"score": "Low",
"message": "You aren’t really the mentoring type. You prefer to concentrate on yourself and your own accomplishments."
}
]
},
{
"id": 4,
"category": "SL",
"messages": [
{
"score": "High",
"message": "You are in tune with others and understand how they feel."
},
{
"score": "Medium",
"message": "At times you understand others’ emotions, but sometimes you aren’t sure how others feel."
},
{
"score": "Low",
"message": "Sometimes you struggle to understand how others’ feel."
}
]
},
{
"id": 5,
"category": "AO",
"messages": [
{
"score": "High",
"message": "You are always trying to be the best you can be!"
},
{
"score": "Medium",
"message": "You set goals when needed, but don’t feel the need to always try to improve."
},
{
"score": "Low",
"message": "You feel comfortable with yourself and don’t feel the need to constantly strive for more."
}
]
},
{
"id": 6,
"messages": [
{
"score": "High",
"message": "You enjoy change and can go with the flow with ease!"
},
{
"score": "Medium",
"message": "You can adapt when changes are needed, but also enjoy when things stay the same and you can get into a routine."
},
{
"score": "Low",
"message": "You are not a fan of change, and prefer to stick with a steady routine."
}
]
},
{
"id": 7,
"category": "GR",
"messages": [
{
"score": "High",
"message": "You push through to accomplish your goals, no matter what life throws at you! You are well-suited to work on long-term goals."
},
{
"score": "Medium",
"message": "You see things through if you can, but sometimes a difficult situation leads you to change your goals."
},
{
"score": "Low",
"message": "Sometimes you prefer short-term goals and can be easily distracted."
}
]
},
{
"id": 8,
"category": "PR",
"messages": [
{
"score": "High",
"message": ""
},
{
"score": "Medium",
"message": ""
},
{
"score": "Low",
"message": ""
}
]
},
{
"id": 9,
"category": "SA",
"messages": [
{
"score": "High",
"message": "You are in touch with your thoughts and emotions."
},
{
"score": "Medium",
"message": "You usually have a good handle on how you are feeling and what’s going through your mind."
},
{
"score": "Low",
"message": "You find that you aren’t really in touch with how you feel at times."
}
]
},
{
"id": 10,
"category": "SC",
"messages": [
{
"score": "High",
"message": "You are as cool as a cucumber! You are able to control yourself even in difficult times."
},
{
"score": "Medium",
},
{
"score": "Low",
"message": "You find that at times you are impulsive and react before you consider the consequences."
}
]
},
{
"id": 11,
"category": "PA",
"messages": [
{
"score": "High",
"message": "You recognize that a positive attitude brings success and happiness! You see the good first and are always looking toward the future."
},
{
"score": "Medium",
"message": "You are cautiously optimistic about the future."
},
{
"score": "Low",
"message": "You remember the past as some of the best times of your life, and aren’t sure if things will ever be that good again."
}
]
},
{
"id": 12,
"category": "IM",
"messages": [
{
"score": "High",
"message": "You are a visionary and encourage others to be the best they can be."
},
{
"score": "Medium",
"message": "You have the ability to motivate others through your vision, but at times prefer to just focus on what needs to be done."
},
{
"score": "Low",
"message": "You are task-oriented and focus on what needs to be done, instead of using vision to inspire others to follow."
}
]
},
{
"id": 12,
"category": "LP",
"messages": [
{
"message": "You see the opportunity to motivate and engage others through establishing a shared vision for the future. You trust others and understand that people are motivated intrinsically and your job is to support and encourage them, and they will seek to accomplish shared goals. "
},
{
"message": "You believe people need to be closely monitored and guided to succeed. You believe in the use of incentives to motivate others. You may be better suited to individual contributor than a leadership role, as you like to work independently."
}
]
}
]
}
const values = { // the sample stored answers
TM1: "Consistently",
TM2: "Consistently",
TM3: "Consistently",
TM4: "Sometimes",
CM1: "Never",
CM2: "Rarely",
CM3: "Consistently",
CM4: "Consistently",
CO1: "Rarely",
CO2: "Never",
CO3: "Never",
CO4: "Never",
SL1: "Often",
SL2: "Never",
SL3: "Consistently",
SL4: "Sometimes",
AO1: "Consistently",
AO2: "Sometimes",
AO3: "Often",
AO4: "Sometimes",
GR1: "Never",
GR2: "Consistently",
GR3: "Often",
GR4: "Sometimes",
PR1: "Never",
PR2: "Consistently",
PR3: "Often",
SA1: "Never",
SA2: "Consistently",
SA3: "Often",
SA4: "Sometimes",
SC1: "Never",
SC2: "Consistently",
SC3: "Often",
SC4: "Sometimes",
PA1: "Never",
PA2: "Consistently",
PA3: "Often",
PA4: "Sometimes",
IM1: "Never",
IM2: "Consistently",
IM3: "Often",
IM4: "Sometimes",
LP1: "Consistently",
LP2: "Consistently",
LP3: "Often",
LP4: "Consistently",
};
const valueNames = Object.keys(values); // (reconstructed) e.g. ["TM1", "TM2", ...]

const getAnswers = category => valueNames.filter(value => value.includes(category));

const answerValues = {
  Consistently: 5,
  Often: 4,
  Sometimes: 3,
  Rarely: 2,
  Never: 1
}

// (reconstructed) map each answer key to its numeric value
const getNumberValues = answers => answers.map(name => answerValues[values[name]]);
/* TODO: Reverse value of answer if question.reversed === true */
const getAverage = array => {
let num = 0, length = array.length;
if (!length) return 0;
for (let i = 0; i < length; i++) {
num += parseInt(array[i], 10);
}
return num/length;
}
const calculateScore = (array, getAverage) => {
let score;
let average = getAverage(array);
if (average >= 4) {
score = 'High';
} else if (average <= 2) {
score = 'Low';
} else {
score = 'Medium';
}
return score;
}
const calculateLeadershipScore = (array, getAverage) => {
  let score;
  let average = getAverage(array);
  // (reconstructed) leadership has only two outcomes, so no Medium band
  if (average >= 4) {
    score = 'High';
  } else {
    score = 'Low';
  }
  return score;
}
const getMessage = (results, score, category) => {
let message;
results.results.forEach(object => {
if (object.category === category) {
object.messages.forEach(element => {
if (element.score === score) {
message = element.message
}
})
}
});
return message;
}
const scores = [];
const getScores = numberAnswers => {
  // (reconstructed) every category uses calculateScore, except the final
  // leadership category, which is scored differently
  for (let i = 0; i < numberAnswers.length - 1; i++) {
    scores.push(calculateScore(numberAnswers[i], getAverage));
  }
  scores.push(calculateLeadershipScore(numberAnswers[numberAnswers.length - 1], getAverage));
}
const messageTM = getMessage(results, scores[0], "TM");
const messageCM = getMessage(results, scores[1], "CM");
const messageCO = getMessage(results, scores[2], "CO");
const messageSL = getMessage(results, scores[3], "SL");
const messageAO = getMessage(results, scores[4], "AO");
const messageGR = getMessage(results, scores[6], "GR");
/* const messagePR = getMessage(results, scores[7], "PR"); -- This answer set is not being evaluated at this time */
const messageSA = getMessage(results, scores[8], "SA");
const messageSC = getMessage(results, scores[9], "SC");
const messagePA = getMessage(results, scores[10], "PA");
const messageIM = getMessage(results, scores[11], "IM");
const messageLP = getMessage(results, scores[12], "LP");
const messages = {
messageTM,
messageCM,
messageCO,
messageSL,
messageAO,
messageGR,
messageSA,
messageSC,
messagePA,
messageIM,
messageLP
}
console.log(messages);
return messages;
}
# DRY out the code.
Wow, that must have taken some time to type in. There is so much redundant data and unneeded processing.
One of programming's golden rules is Don't Repeat Yourself
Your code is 13K+ in size; over half of that is redundant, or completely unused data.
And the process of scoring is just a spaghetti of calls, loops, temp arrays, unneeded lines, and variables.
If you have groups of values, such as:
SA1 : "blah",
SA2 : "blah",
SA3 : "blah",
Store them in an array:
SA : ["blah","blah","blah"],
If you are populating an array:
var aA = getVal("A");
var aB = getVal("B");
var aC = getVal("C");
var foo = [
aA,
aB,
aC,
];
Put them in the array directly; don't use temp variables.
var foo = [
getVal("A"),
getVal("B"),
getVal("C")
]
Or use a loop:
var foo = "ABC".split("").map(getVal);
## Simplify
Your code gets a set of answers, a few for each category, scores them and calculates a mean; it then uses that mean to get the message. That is a very simple process and should be done one at a time for each category, rather than getting the score for all, then the mean, then finding the message for all.
This simplifies the process, as you are not storing intermediate results and you don't need to pass data around that only has a temporary life.
## A rewrite
Warning: don't copy and paste this code, as I did not make sure that I correctly duplicated the data.
The answers have the score, mean, and message added to them. All data is referenced via the category; you don't need to search.
After the function has run you can access the details:
testAnswers.TM.score; // total score
The code:
const results = {
PR: { messages: ["", "", "" ] },
TM: { messages: [
"You are a team player! You enjoy collaborating with others to create better outcomes together.",
"You enjoy working with others, but also appreciate working solo to accomplish your goals.",
"You work best on your own to accomplish your goals and find that others slow you down",
] },
CM: { messages: [
"You approach conflict head on and try to bring everyone onto the same page. When someone needs a mediator, they call you!",
"You work to resolve conflict if pushed, but avoid it if you can. ",
"You prefer to keep to yourself and avoid conflict.",
] },
CO: { messages: [
"You enjoy helping others to learn and develop.",
"You support others when needed, but don’t offer assistance without prompting.",
"You aren’t really the mentoring type. You prefer to concentrate on yourself and your own accomplishments.",
] },
SL: { messages: [
"You are in tune with others and understand how they feel.",
"At times you understand others’ emotions, but sometimes you aren’t sure how others feel.",
"Sometimes you struggle to understand how others’ feel.",
] },
AO: { messages: [
"You are always trying to be the best you can be!",
"You set goals when needed, but don’t feel the need to always try to improve.",
"You feel comfortable with yourself and don’t feel the need to constantly strive for more.",
] },
XX: { messages: [ // (the category key for this block was lost; "XX" is a stand-in)
"You enjoy change and can go with the flow with ease!",
"You can adapt when changes are needed, but also enjoy when things stay the same and you can get into a routine.",
"You are not a fan of change, and prefer to stick with a steady routine.",
] },
GR: { messages: [
"You push through to accomplish your goals, no matter what life throws at you! You are well-suited to work on long-term goals.",
"You see things through if you can, but sometimes a difficult situation leads you to change your goals.",
"Sometimes you prefer short-term goals and can be easily distracted.",
] },
SA: { messages: [
"You are in touch with your thoughts and emotions.",
"You usually have a good handle on how you are feeling and what’s going through your mind.",
"You find that you aren’t really in touch with how you feel at times.",
] },
SC: { messages: [
"You are as cool as a cucumber! You are able to control yourself even in difficult times.",
"", // (the Medium message was missing from the question data)
"You find that at times you are impulsive and react before you consider the consequences.",
] },
PA: { messages: [
"You recognize that a positive attitude brings success and happiness! You see the good first and are always looking toward the future.",
"You are cautiously optimistic about the future.",
"You remember the past as some of the best times of your life, and aren’t sure if things will ever be that good again.",
] },
IM: { messages: [
"You are a visionary and encourage others to be the best they can be.",
"You have the ability to motivate others through your vision, but at times prefer to just focus on what needs to be done.",
"You are task-oriented and focus on what needs to be done, instead of using vision to inspire others to follow.",
] },
LP: { messages: [
"Transformation Leader; You see the opportunity to motivate and engage others through establishing a shared vision for the future. You trust others and understand that people are motivated intrinsically and your job is to support and encourage them, and they will seek to accomplish shared goals. ",
"Transactional Leadership; You believe people need to be closely monitored and guided to succeed. You believe in the use of incentives to motivate others. You may be better suited to individual contributor than a leadership role, as you like to work independently.",
] }
}
const resultScoring = {
    LP(answer, cat) {
        // (reconstructed) leadership is scored differently: High or Low only
        if (answer.mean >= 4) {
            answer.message = results[cat].messages[0];
        } else {
            answer.message = results[cat].messages[1];
        }
    },
    default(answer, cat) {
        var messageIndex = 1
        if (answer.mean >= 4) { messageIndex = 0 }
        else if (answer.mean <= 2) { messageIndex = 2 }
        answer.message = results[cat].messages[messageIndex];
    }
};
const answerValues = { Consistently: 5, Often: 4, Sometimes: 3, Rarely: 2, Never: 1 };
function scoreAnswers(answers) { // (reconstructed wrapper around the garbled loop)
    const messageArray = [];
    for (const cat of Object.keys(answers)) {
        const answer = answers[cat];
        let score = 0;
        for (const response of answer.answers) { score += answerValues[response] }
        answer.score = score;
        answer.mean = score / answer.answers.length;
        if (resultScoring[cat] === undefined) { resultScoring.default(answer, cat) }
        else { resultScoring[cat](answer, cat) }
        messageArray.push(answer.message);
    }
    return messageArray;
}
const testAnswers = {
    TM: {answers : ["Consistently", "Consistently", "Consistently", "Sometimes"]},
    CO: {answers : ["Rarely", "Never", "Never", "Never"]},
    CM: {answers : ["Never", "Rarely", "Consistently", "Consistently"]},
    SL: {answers : ["Often", "Never", "Consistently", "Sometimes"]},
    AO: {answers : ["Consistently", "Sometimes", "Often", "Sometimes"]},
    GR: {answers : ["Never", "Consistently", "Often", "Sometimes"]},
    PA: {answers : ["Never", "Consistently", "Often", "Sometimes"]},
    LP: {answers : ["Consistently", "Consistently", "Often", "Consistently"]},
    IM: {answers : ["Never", "Consistently", "Often", "Sometimes"]},
    SA: {answers : ["Never", "Consistently", "Often", "Sometimes"]},
    SC: {answers : ["Never", "Consistently", "Often", "Sometimes"]},
    PR: {answers : ["Never", "Consistently", "Often"]},
};
const messages = scoreAnswers(testAnswers); // (reconstructed) run the scoring
## One last thing
The returning messages???
const messages = {
messageTM,
messageCM,
...
}
These are strings. Don't you mean to return an array?? |
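For readers unfamiliar with the shorthand the reviewer is pointing at: the two return shapes differ like this (a standalone snippet with made-up message values, not code from the post):

```javascript
// Hypothetical message values, just to show the shapes.
const messageTM = "team player";
const messageCM = "mediator";

// { messageTM, messageCM } is object shorthand: it builds an object keyed
// by the variable names, not a list of strings.
const asObject = { messageTM, messageCM };
console.log(asObject.messageTM); // → "team player"

// If the consumer only iterates over the texts, an array is the simpler shape:
const asArray = [messageTM, messageCM];
console.log(asArray[0]); // → "team player"
```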
# How to get projectile direction vector in a 2d grid?
I am having trouble figuring out how to trace the path of my projectiles in an Asteroids clone. Currently the ship is locked to the center of the screen and can be rotated a full 360 degrees. I know the angle the ship is pointing at any given time, but I am not sure how to calculate the vector that the projectile should use for its trajectory. I know that I can find the correct vector by subtracting the end point from the start point.. but the issue is that I do not know the end point. Theoretically.. the end point is whenever the projectile touches the edge of the screen or some object. This is the current game code: https://jsfiddle.net/m6sxrk8w/ just using a hardcoded movement vector of {x:-1, y:-1}. Can anyone give some advice on how to dynamically determine the true vector given the current direction of the ship? There must be some simple math I am missing here...
To find the unit vector from the angle you want to use cosine(angle) for the x-component and sine(angle) for the y-component. The angle must be in radians. You can convert from degrees to radians by multiplying degrees by PI / 180.

radians = degrees * (Math.PI / 180);
• So for example, if the ship is pointing at a .9 radian angle from the y-axis... the direction movement vector would be {x=cos(.9), y=sin(.9)} ? – JParrilla Jun 6 '19 at 18:32
• Yes, and it's a unit vector, meaning it has a length of 1. So you can scale it for distance / velocity or whatever you like. position.x += cos(radian) * distance; – SquidJelly Jun 6 '19 at 20:02
• They both should be correct. – SquidJelly Jun 6 '19 at 20:11
• So the formula you suggested works.. but for some reason it is always shifted by 90 deg. I am setting the x,y vector like so: projectiles[i].x += Math.cos(toRads(ship.direction)); projectiles[i].y += Math.sin(toRads(ship.direction)); I guess I have to somehow shift this result by 90 deg to actually get the correct direction? For example when my ship is pointing at 0, the shot goes in the 90 direction, when I am pointing at 90, the shot goes straight down to 180 – JParrilla Jun 6 '19 at 21:53
• It sounds like the front of your ship in your image isn't facing angle 0. So you can rotate the image to face right, or yes simply add/subtract from the angle before you convert it to radians. – SquidJelly Jun 7 '19 at 1:13
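Putting the suggestions in this thread together, a minimal sketch (the -90° sprite offset and the `toRads` helper are assumptions based on the comments above, not code from the fiddle):

```javascript
// Convert degrees to radians.
const toRads = degrees => degrees * (Math.PI / 180);

// Turn the ship's heading into a unit direction vector. If the sprite art
// points "up" at angle 0 (the 90° shift described above), subtract 90° so
// the shot travels the way the ship is facing.
function directionVector(angleDeg, spriteOffsetDeg = -90) {
  const a = toRads(angleDeg + spriteOffsetDeg);
  return { x: Math.cos(a), y: Math.sin(a) }; // unit vector: x^2 + y^2 === 1
}

// Each frame, scale the unit vector by the projectile speed:
function stepProjectile(p, speed) {
  p.x += p.dir.x * speed;
  p.y += p.dir.y * speed;
}

const dir = directionVector(90); // ship heading 90°
console.log(dir); // → { x: 1, y: 0 } with the -90° sprite offset
```

Because the direction is a unit vector, the projectile moves exactly `speed` pixels per frame regardless of heading.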
This is a maths problem that I had myself a few years ago.
You have two 2d vectors in pixel coordinates: `shootOrigin = (centerX, centerY)` and `mousePosition = (mouseX, mouseY)`, and the direction is `shootDirection = Vec2.Normalize(mousePosition - shootOrigin)`.
The normalization of the 2d vector is very important here to get the bullets to travel at constant speed. When shooting a projectile you just say `bulletPos = bulletPos + shootDirection` every update frame, and then check whether `Vector2.Distance(bulletPos, enemyPos)` from enemy to bullet is smaller than a certain value.
I totally recommend reading the first chapter of the book "Programming Game AI by Example", because it's about the maths behind NPC agents and basically how to use vectors for game development.
EDIT: I just saw it's about rotating the ship with keyboard input, so you just modify the angle of the ship instead of using the mouse as the shooting direction, and therefore need to calculate the direction vector (x, y) from the radian angle as suggested. The principle stays the same: calculate the normalized direction vector and apply it every frame to the bullet's position. If you spawn a bullet it is even better to create a bullet object that knows its velocity and direction and has an updateMove method implemented; otherwise your bullets will be rotated while flying because the direction might change through user input (arrow keys).
# Eight to Late
Sensemaking and Analytics for Organizations
## An introduction to the critical chain method
### Introduction
All project managers have to deal with uncertainty as a part of their daily work. Project schedules, so carefully constructed, are riddled with assumptions and uncertainties – particularly in task durations. Most project management treatises (the PMBOK included) recognise this, and so exhort project managers to include uncertainties in their activity duration estimates. However, the same books have little to say on how these uncertainties should be integrated into the project schedule in a meaningful way. Sure, well-established techniques such as PERT incorporate probabilities into a schedule via an averaged or expected duration, but the final schedule is deterministic – i.e. each task is assigned a definite completion date, based on the expected duration. Any float that appears in the schedule is purely a consequence of an activity not being on the critical path. The float, such as it is, is not an allowance for uncertainty.
Since PERT was invented in the 1950s, there have been several other attempts to incorporate uncertainty into project scheduling. Some of these include Monte Carlo simulation and, more recently, Bayesian Networks. Although these techniques have a more sound basis, they don’t really address the question of how uncertainty is to be managed in a project schedule, where individual tasks are strung together one after another. What’s needed is a simple technique to protect a project schedule from Murphy, Parkinson or any other variations that invariably occur during the execution of individual tasks. In the 1990s, Eliyahu Goldratt proposed just such a technique in his business novel, Critical Chain. This post presents a short, yet comprehensive introduction to Goldratt’s critical chain method.
[An Aside: Before proceeding any further I should mention that Goldratt formulated the critical chain method within the framework of his Theory of Constraints (TOC). I won’t discuss TOC in this article, mainly because of space limitations. Moreover, an understanding of TOC isn’t really needed to understand the critical chain method. For those interested in learning about TOC, the best starting point is Goldratt’s business novel, The Goal.]
I begin with a discussion of some general characteristics of activity or task estimates, highlighting the reason why task estimators tend to pad up their estimates. This is followed by a discussion on why the buffers (or safety) that estimators build into individual activities don’t help – i.e. why projects come in late despite the fact that most people add considerable safety factors on to their activity estimates. This then naturally leads on to the heart of the matter: how buffers should be added in order to protect schedules effectively.
### Characteristics of activity duration estimates
(Note: Portions of this section have been published previously in my post on the inherent uncertainty of project task estimates)
Consider an activity that you do regularly – such as getting ready in the morning. You have a pretty good idea how long the activity takes on average. Say it takes you an hour on average to get ready – from when you get out of bed to when you walk out of your front door. Clearly, on a particular day you could be super-quick and finish in 45 minutes, or even 40 minutes. However, there’s a lower limit to the early finish – you can’t get ready in 0 minutes! On the other hand, there’s really no upper limit. On a bad day you could take a few hours. Or if you slip in the shower and hurt your back, you may not make it at all.
If we were to plot the probability of activity completion for this example as a function of time, it might look something like I’ve depicted in Figure 1. The distribution starts at a non-zero cutoff (corresponding to the minimum time for the activity); increases to a maximum (corresponding to the most probable time); and then falls off rapidly at first, then with a long, slowly decaying, tail. The mean (or average) of the distribution is located to the right of the maximum because of the long tail. In the example, $t_{0}$ (30 mins) is the minimum time for completion, so the probability of finishing within 30 mins is 0%. There’s a 50% probability of completion within an hour, 80% probability of completion within 2 hours and a 90% probability of completion in 3 hours. The large values for $t_{80}$ and $t_{90}$ compared to $t_{50}$ are a consequence of the long tail. OK, this particular example may be an exaggeration – but you get my point: if you want to be really, really sure of completing any activity, you have to add a lot of safety because there’s a chance that you may “slip in the shower”, so to speak.
It turns out that many phenomena can be modeled by this kind of long-tailed distribution. Some of the better known long-tailed distributions include lognormal and power law distributions. A quick (but admittedly informal) review of the project management literature revealed that lognormal distributions are more commonly used than power laws to model activity duration uncertainties. This may be because lognormal distributions have a finite mean and variance whereas power law distributions can have infinite values for both (see this presentation by Michael Mitzenmacher, for example). [An Aside: If you’re curious as to why infinities are possible in the latter, it is because power laws decay more slowly than lognormal distributions – i.e. they have “fatter” tails, and hence enclose larger (even infinite) areas.] In any case, regardless of the exact form of the distribution for activity estimates, what’s important and non-controversial is the short cutoff, the peak and the long, decaying tail.
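To see these percentiles concretely, one can sample a lognormal distribution (a sketch in JavaScript; the parameters are invented purely to mimic the getting-ready example, with a median of about 60 minutes):

```javascript
// Illustrative only: sample a lognormal distribution whose median is 60
// minutes (mu = ln 60), using Box-Muller for the underlying normal draws.
function randNormal() {
  const u = 1 - Math.random(), v = Math.random(); // u in (0, 1] avoids log(0)
  return Math.sqrt(-2 * Math.log(u)) * Math.cos(2 * Math.PI * v);
}

const mu = Math.log(60), sigma = 0.6; // sigma picked to give a visible tail
const samples = Array.from({ length: 200000 },
  () => Math.exp(mu + sigma * randNormal()));
samples.sort((a, b) => a - b);

const pct = p => samples[Math.floor(p * samples.length)];
const mean = samples.reduce((s, x) => s + x, 0) / samples.length;

// The long tail drags the mean to the right of the median, and pushes
// t80 and t90 well beyond t50.
console.log({ t50: pct(0.5), t80: pct(0.8), t90: pct(0.9), mean });
```

With these parameters, $t_{50}$ comes out near 60 minutes while $t_{90}$ lands around double that – exactly the asymmetry sketched in Figure 1, and the reason an estimator quoting $t_{90}$ pads so heavily.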
Most activity estimators are intuitively aware of the consequences of the long tail. They therefore add a fair amount of “air” or safety in their estimates. Goldratt suggests that typical activity estimates tend to correspond to $t_{80}$ or $t_{90}$. Despite this, real life projects still have difficulty in maintaining schedules. Why this is so is partially answered in the next section.
### Delays accumulate; gains don’t
A schedule is essentially made up of several activities (of varying complexity and duration) connected sequentially or in parallel. What are the implications of uncertain activity durations on a project schedule? Well, let’s look at the case of sequential and parallel steps separately:
Sequential steps: If an activity finishes early, the successor activity rarely starts right away. More often, the successor activity starts only when it was originally scheduled to. Usually this happens because the resource responsible for the successor activity is not free – or hasn’t been told about the early finish of the predecessor activity. On the other hand, if an activity finishes late, the start of the successor activity is delayed by at least the same amount as the delay. The upshot of all this is that – delays accumulate but early finishes are rarely taken advantage of. So, in a long chain of sequential activities, you can be pretty sure that there will be delays.
Parallel steps: In this case, the longest duration activity dictates the finish time. For example, suppose we have three parallel activities that take 5 days each. If one of them ends up taking 10 days, the net effect is that the three activities, taken together, will complete only after 10 days. In contrast, an early finish will not have an effect unless all activities finish early (and by the same amount!). Again we see that delays accumulate; early finishes don’t.
The above discussion assumed that activities are independent. In a real project activities can be highly dependent. In general this tends to make things worse – a delay in an activity is usually magnified by a dependent successor activity.
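This asymmetry can be demonstrated with a small simulation (a sketch with made-up numbers, not data from any real project). Each successor task starts at the later of its scheduled start and its predecessor's actual finish, so early finishes are wasted while delays pass through in full:

```python
import random

def chain_finish(scheduled, actuals):
    """Walk a sequential chain where each task starts at the LATER of its
    scheduled start and its predecessor's actual finish: early finishes
    are not exploited, delays are passed on in full."""
    scheduled_start = 0.0
    finish = 0.0
    for sched, actual in zip(scheduled, actuals):
        start = max(scheduled_start, finish)  # an early finish is wasted here
        finish = start + actual
        scheduled_start += sched              # the original plan marches on
    return finish

rng = random.Random(42)
scheduled = [10.0] * 5                        # five tasks, 10 days each
trials = []
for _ in range(20_000):
    # symmetric noise: each task is equally likely to be early or late
    actuals = [10.0 + rng.uniform(-3, 3) for _ in scheduled]
    trials.append(chain_finish(scheduled, actuals))
mean_finish = sum(trials) / len(trials)
# Even with symmetric task noise, the chain finishes late on average.
```

Even though each task is as likely to finish early as late, the chain's mean finish lands beyond the 50-day plan; with more tasks, or with skewed long-tailed durations, the effect only grows.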
This partially explains why projects come in late. However it’s not the whole story. According to Goldratt, there are a few other factors that lead to dissipation of safety. I discuss these next.
### Other time wasters
In the previous section we saw that dependencies between activities can eat into safety significantly because delays accumulate while gains don’t. There are a couple of other ways safety is wasted. These are:
Multitasking: It is recognised that multitasking – i.e. working on more than one task concurrently – introduces major delays in completing tasks. See these articles by Johanna Rothman and Joel Spolsky for a discussion of why this is so. I’ve discussed techniques to manage multitasking in an earlier post.
Student syndrome: This should be familiar to anyone who’s been a student. When saddled with an assignment, the common tendency is to procrastinate until the last moment. This happens on projects as well. “Ah, there’s so much time. I’ll start later…” Until, of course, there isn’t very much time at all.
Parkinson’s Law states that “work expands to fill the allocated time.” This is most often a consequence of there being no incentive to finish a task early. In fact, there’s a strong disincentive from finishing early because the early finisher may be a) accused of overestimating the task or b) rewarded by being allocated more work. Consequently people tend to adjust their pace of work to just make the scheduled delivery date, thereby making the schedule a self-fulfilling prophecy.
Any effective project management system must address and resolve the above issues. The critical chain method does just that. Now with the groundwork in place, we can move on to a discussion of the technique. We’ll do this in two steps. First, we discuss the special case in which there is no resource contention – i.e. multitasking does not occur. The second, more general, case discusses the situation in which there is resource contention.
### The critical chain – special case
In this section we look at the case where there’s no resource contention in the project schedule. In this (ideal) situation, where every resource is available when required, each task performer is ready to start work on a specific task just as soon as all its predecessor tasks are complete. Sure, we’ll also need to put in place a process to notify successor task performers about when they need to be ready to start work, but I’ll discuss this notification process a little later in this section. Let’s tackle Parkinson and the procrastinators first.
Preventing the student syndrome and Parkinson’s Law
To cure habitual procrastinators and followers of Parkinson, Goldratt suggests that project task duration estimates be based on a 50% probability of completion. This corresponds to an estimate that is equal to $t_{50}$ for an activity (you may want to have another look at Figure 1 to remind yourself of what this means). Remember, as discussed earlier, estimates tend to be based on $t_{80}$ or $t_{90}$, both of which are significantly larger than $t_{50}$ because of the nature of the distribution. The reduction in time should encourage task performers to start the task on schedule, thereby avoiding the student syndrome. Further, it should also discourage people from deliberately slowing their work pace, thereby preventing Parkinson from taking hold.
As discussed earlier, a $t_{50}$ estimate implies there’s a 50% chance that the task will not complete on time. So, to reassure task estimators / performers, Goldratt recommends implementing the following actions:
1. Removal of individual activity completion dates from the schedule altogether. The only important date is the project completion date.
2. No penalties for going over the $t_{50}$ estimate. Management must accept that the estimate is based on $t_{50}$, so the activity is expected to overrun the estimate 50% of the time.
The above points should be explained clearly to project team members before attempting to elicit $t_{50}$ estimates from them.
So, how does one get reliable $t_{50}$ estimates? Here are some approaches:
1. Assume that the initial estimates obtained from team members are $t_{80}$ or $t_{90}$, so simply halve these to get a rough $t_{50}$. This is the approach Goldratt recommends. However, I’m not a fan of this method because it is sure to antagonise folks.
2. Another option is to ask the estimator how long a task is going to take. They’ll come back to you with a number. This is likely to be their $t_{80}$ or $t_{90}$. Then ask them for their $t_{50}$, explaining what it means (i.e. estimate which you have a 50% chance of going over). They should come back to you with a smaller number. It may not be half the original estimate or less, but it should be significantly smaller.
3. Yet another option is to calibrate estimators’ abilities to predict task durations based on their history (i.e. based on how good earlier estimates were). In the absence of prior data one can quantify an estimator’s reliability in making judgements by asking him or her to answer a series of trivia questions, giving an estimated probability of being correct along with each answer. An individual is said to be calibrated if the fraction of questions correctly answered coincides (or is close to) their stated probability estimates. In theory, a calibrated individual’s duration estimate should be pretty good. However, it is questionable as to whether calibration as determined through trivia questions carries over to real-world estimates. See this site for more on evaluating calibration.
4. Finally, project managers can use Monte Carlo simulations to estimate task durations. The hard part here is coming up with a probability distribution for the task duration. One commonly used approach is to ask task estimators to come up with best case, worst case and most likely estimates, and then fit these to a probability distribution. There are at least two problems with this approach: a) the only sensible fit to a three point estimate is a triangular distribution, but this isn’t particularly good because it ignores the long tail and b) the estimates still need to be quality assured through independent checks (historical comparison, for example) or via calibration as discussed above – else the distribution is worthless. See this paper for more on the use of Monte Carlo simulations in project management (Note added on 23 Nov 2009: See my post on Monte Carlo simulations of project task durations for a quick introduction to the technique)
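As a concrete illustration of point 4, here is a minimal Monte Carlo sketch that fits a triangular distribution to hypothetical best-case, most-likely and worst-case estimates (the numbers are invented). As noted above, the triangular fit is convenient but ignores the long tail:

```python
import random

def simulate_task(best, likely, worst, n=100_000, seed=7):
    """Monte Carlo estimate of task-duration percentiles from a
    three-point estimate fitted to a triangular distribution."""
    rng = random.Random(seed)
    samples = sorted(rng.triangular(best, worst, likely) for _ in range(n))
    return {p: samples[int(p / 100 * n)] for p in (50, 80, 90)}

# Hypothetical estimates: 2 days best case, 4 most likely, 10 worst case.
pcts = simulate_task(2.0, 4.0, 10.0)
```

For these invented inputs the median comes out near 5.1 days, well above the "most likely" 4 days, because the worst case drags the distribution to the right.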
Folks who’ve read my articles on cognitive biases in project management (see this post and this one) may be wondering how these fit in to the above argument. According to Goldratt, most people tend to offer their $t_{80}$ or $t_{90}$ numbers, rather than their $t_{50}$ ones. The reason this happens is that folks tend to remember the instances when things went wrong, so they pad up their estimates to avoid getting burned again – a case of the availability bias in action.
Getting team members to come up with reliable $t_{50}$ numbers depends very much on how safe they feel doing so. It is important that management understands that there is a 50% chance of not meeting $t_{50}$ deadlines for individual tasks; the only important deadline is the completion of the project. This is why Goldratt and other advocates of the critical chain method emphasise that a change in organisational culture is required in order for the technique to work in practice. Details of how one might implement this change are out of scope for an introductory article, but readers should be aware that the biggest challenges are not technical ones.
The resource buffer
Readers may have noticed a problem arising from the foregoing discussion of $t_{50}$ estimates: if there is no completion date for a task, how does a successor task performer know when he or she needs to be ready to start work? This problem is handled via a notification process that works as follows: the predecessor task performer notifies successor task performers about expected completion dates at regular, predetermined intervals. Further, a final confirmation should be given a day or two before task completion so all successor task performers are ready to start work exactly when needed. Goldratt calls this notification process the resource buffer. It is a simple yet effective method to ensure that a task starts exactly when it should. Early finishes are no longer wasted!
The project buffer
The project buffer is the safety removed from the individual tasks, pooled and placed at the end of the critical path to protect the overall completion date (see Figure 2). What size should the buffer be? As a rule of thumb, Goldratt proposed that the buffer should be 50% of the safety that was removed from the tasks. Essentially this makes the critical path 75% as long as it would have been with the original ($t_{80}$ or $t_{90}$) estimates. Other methods of buffer estimation are discussed in this book on critical chain project management.
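The 50% rule is simple arithmetic, sketched below with made-up estimates:

```python
def buffered_schedule(t90_estimates):
    """Apply the rule of thumb: halve each t90 estimate to get t50,
    then pool 50% of the removed safety into a project buffer."""
    t50 = [t / 2 for t in t90_estimates]
    safety_removed = sum(t90_estimates) - sum(t50)
    project_buffer = 0.5 * safety_removed
    return sum(t50) + project_buffer

# Hypothetical critical path: three tasks with t90 estimates in days.
original = [10.0, 20.0, 30.0]            # sums to 60 days
total = buffered_schedule(original)      # 30 days of t50 work + 15 day buffer = 45
# 45 / 60 = 0.75: the buffered path is 75% of the original length.
```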
The feeding buffer
As shown in Figure 2, the project buffer protects the critical path. However, delays can occur in non-critical paths as well (A1-A2 and B1-B2 in the figure). If long enough, these delays can affect subsequent critical path activities. To prevent this from happening, Goldratt suggests adding buffers at points where non-critical paths join the critical path. He terms these feeding buffers.
Figure 3 depicts the same project network diagram as before with feeding buffers added in. Feeding buffers are sized the same way as project buffers are – i.e. based on a fraction of the safety removed from the activities on the relevant (non-critical) path.
The critical chain – a first definition
This completes the discussion of the case where there’s no resource contention. In this special case, the critical chain of the project is identical to the critical path. The activity durations for all tasks are based on $t_{50}$ estimates, with the project buffer protecting the project from delays. In addition, the feeding buffers protect critical chain activities from delays in non-critical chain activities.
### The critical chain – general case
Now for the more general case where there is contention for resources. Resource contention implies that task performers are scheduled to work on multiple tasks simultaneously, at one or more points along the project timeline. Although it is well recognised that multitasking is to be avoided, most algorithms for finding the critical path do not take resource contention into account. The first step, therefore, is to resource level the schedule – i.e. ensure that tasks that are to be performed by the same resource(s) are scheduled sequentially rather than simultaneously. Typically this changes the critical path from what it would otherwise be. This resource leveled critical path is the critical chain.
The above can be illustrated by modifying the example network shown in Figure 3. Assume tasks C1, B2 and A2 (marked X) are performed by the same resources. The resource leveled critical path thus changes from that shown in Figures 2 and 3 to that shown in Figure 4 (in red). As per the definition above, this is the critical chain. Notice that the feeding buffers change location, as (by definition) these have to be moved to points where non-critical paths merge with the critical path. The location of the project buffer remains unchanged.
### Endnote
This completes my introduction to the critical chain method. Before closing, I should mention that there has been some academic controversy regarding the critical chain method. In practice, though, the method seems to work well as evidenced by the number of companies offering consulting and software related to critical chain project scheduling.
I can do no better than to end with a list of online references which I’ve found immensely useful in learning about the method. Here they are, in no particular order:
Critical Chain: a hands-on project application by Ernst Meijer.
The best place to start, however, is where it all began: Goldratt’s novel, Critical Chain.
(Note: This essay is a revised version of my article on the critical chain, first published in 2007)
Written by K
August 20, 2009 at 10:50 pm
### 13 Responses
1. Great blog, just stumbled upon it and just kept reading. Although knowing The Goal, I never realised it could be so simple.
thanxs very much!
Martijn Logtenberg
September 3, 2009 at 2:33 am
2. Martijn,
Thanks for the kind words!
Regards,
Kailash.
K
September 3, 2009 at 6:43 am
3. This article has really enlightened me about the rudiments of the critical chain method, recommended for the basic beginner.
Thanks
Loye
March 17, 2010 at 11:33 pm
• Loye,
Regards,
Kailash.
K
March 19, 2010 at 6:00 am
4. A very coherent approach!
Thanks for the insights! 😉
Marius
May 30, 2010 at 10:48 pm
5. well written .
Ritesh
March 6, 2011 at 10:19 pm
6. This article on critical chain is really good and eye opener even for experienced scheduler.
Hari
April 17, 2011 at 8:41 pm
• Hari,
Thanks! I’m glad you found it useful.
Regards,
K.
K
April 17, 2011 at 8:45 pm
7. The critical chain method is often a confusing topic but you did a great job explaining it. Thank you for sharing.
Sid
November 4, 2011 at 5:20 am
• Hi Sid,
Thank you for the feedback!
Regards,
Kailash.
K
November 4, 2011 at 5:42 am
8. Is it convenient to explain the CCM by Gantt Chart? If so, I would request (i) normal resource loading and leveling and (ii) converting the same resource loading and leveling under critical Chain Method.
hafeez Malik
January 24, 2012 at 3:05 am
9. I’m learning this method these days, read several articles. Yours is the best I think. thanks.
Catherine
January 6, 2016 at 1:08 pm
• Thanks for reading and taking the time to comment!
K
January 7, 2016 at 7:02 pm |
# Inner product of position and momentum eigenkets
Let's define $\hat{q},\ \hat{p}$ as the position and momentum quantum operators, $\hat{a}$ the annihilation operator and $\hat{a}_1,\ \hat{a}_2$ its real and imaginary parts, such that $$\hat{a} = \hat{a}_1 + j \hat{a}_2$$ with $$\hat{a}_1 = \sqrt{\frac{\omega}{2 \hbar}}\hat{q},\ \hat{a}_2 = \sqrt{\frac{1}{2 \hbar \omega}}\hat{p}$$ (for a reference, see Shapiro, Lectures on Quantum Optical Communication, lect. 4)
Define $|a_1 \rangle,\ |a_2\rangle,\ |q\rangle,\ |p\rangle$ as the eigenkets of the operators $\hat{a}_1,\ \hat{a}_2,\ \hat{q},\ \hat{p}$ respectively.
From the lecture, I know that $$\langle a_2|a_1\rangle = \frac{1}{\sqrt{\pi}} e^{-2j a_1 a_2}$$ but I do not understand how to obtain $$\langle p|q\rangle = \frac{1}{\sqrt{2\pi\hbar}} e^{-\frac{j}{\hbar}qp}$$
I thought that with a variable substitution would suffice, but substituting ${a}_1 = \sqrt{\frac{\omega}{2 \hbar}}{q},\ a_2 = \sqrt{\frac{1}{2 \hbar \omega}}{p}$, I obtain $$\frac{1}{\sqrt{\pi}} e^{-\frac{j}{\hbar}qp}$$ which does not have the correct factor $\frac{1}{\sqrt{2\pi\hbar}}$.
What am I missing?
• It depends on what $\langle p|a_i\rangle$ and $\langle q|a_i\rangle$ are. (It is not a test, I can't remember now). You have to put the identities inside $\langle a_1|a_2\rangle$, you can't just substitute. – Antonio Ragagnin Jun 16 '14 at 10:20
## Inner products
Since $\hat{a}_1 = \sqrt{\frac{\omega}{2 \hbar}}\hat{q}$, then $\langle a_1|q\rangle=N_1\delta\left(a_1- \sqrt{\frac{\omega}{2 \hbar}}q\right).$
Also, since $\hat{a}_2 = \sqrt{\frac{1}{2 \hbar\omega}}\hat{p}$, then $\langle a_2|p\rangle=N_2\delta\left(a_2- \sqrt{\frac{1}{2 \hbar\omega}}p\right).$
## Normalization constants
We will use this property: $\int dx \delta\left(\alpha x-y\right) f(x)=\frac{f\left(\frac{y}{\alpha}\right)}{\alpha}.$
If we ask that the $|a_1\rangle$ are normalized, we are asking that $$\delta\left(a_1- \bar a_1\right)=\langle a_1|\bar a_1\rangle = \int dq \langle a_1| q\rangle \langle q|\bar a_1\rangle=\int dq \left|N_1\right|^2 \delta\left(a_1- \sqrt{\frac{\omega}{2 \hbar}}q\right) \delta\left(\bar a_1- \sqrt{\frac{\omega}{2 \hbar}}q\right).$$
So, $N_1= \left(\frac{\omega}{2\hbar}\right)^\frac{1}{4}.$
Doing the same thing for $|a_2\rangle$ We then obtained that:
$\langle a_1|q\rangle= \left(\frac{\omega}{2\hbar}\right)^\frac{1}{4}\delta\left(a_1- \sqrt{\frac{\omega}{2 \hbar}}q\right).$
$\langle a_2|p\rangle= \left(\frac{1}{2\hbar\omega}\right)^\frac{1}{4}\delta\left(a_2- \sqrt{\frac{1}{2 \hbar\omega}}p\right).$
## Computing $\langle p|q \rangle$
Then, as you know, $\langle p| q\rangle =\int da_1 da_2 \langle p| a_2\rangle \langle a_2| a_1\rangle \langle a_1| q\rangle .$
This should be enough for you to find the right solution.
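For completeness, here is the remaining computation sketched out: the two delta functions fix $a_1$ and $a_2$, and carrying the normalization constants through supplies the missing factor. $$\langle p|q\rangle = \int da_1\, da_2\, \langle p|a_2\rangle \langle a_2|a_1\rangle \langle a_1|q\rangle = \int da_1\, da_2\, \left(\frac{1}{2\hbar\omega}\right)^{\frac{1}{4}}\delta\left(a_2-\sqrt{\frac{1}{2\hbar\omega}}\,p\right) \frac{e^{-2j a_1 a_2}}{\sqrt{\pi}} \left(\frac{\omega}{2\hbar}\right)^{\frac{1}{4}}\delta\left(a_1-\sqrt{\frac{\omega}{2\hbar}}\,q\right)$$ $$= \frac{1}{\sqrt{2\hbar}}\,\frac{1}{\sqrt{\pi}}\,\exp\left(-2j\sqrt{\frac{\omega}{2\hbar}}\,q\,\sqrt{\frac{1}{2\hbar\omega}}\,p\right) = \frac{1}{\sqrt{2\pi\hbar}}\, e^{-\frac{j}{\hbar}qp}.$$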
• If I solve the integral you put, I get $$\frac{1}{\sqrt{\pi}} e^{-\frac{j}{\hbar} q p}$$ - if I do it correctly - which does not have the correct factor $\frac{1}{\sqrt{2 \pi \hbar}}$ (I have just edited the question for clarification). – Nicola Jun 16 '14 at 11:54
• Did you use the property of the Dirac delta computed over $x$ multiplied by a constant? $$\int f(x) \delta (\alpha x)\, dx = \frac{f(0)}{\alpha}?$$ – Antonio Ragagnin Jun 16 '14 at 21:15
• Oh yes and with this same trick you will find that $\langle a_1|p\rangle$ and $\langle a_2|q\rangle$ have a normalization factor – Antonio Ragagnin Jun 16 '14 at 21:20
• @Nicola, I edited my answer taking into account the dirac delta property. – Antonio Ragagnin Jun 17 '14 at 9:29 |
# naginterfaces.library.specfun.bessel_y_complex
naginterfaces.library.specfun.bessel_y_complex(fnu, z, n, scal)[source]
bessel_y_complex returns a sequence of values for the Bessel functions Y_{ν+n}(z) for complex z, non-negative ν and n = 0, 1, …, N-1, with an option for exponential scaling.
For full information please refer to the NAG Library document for s17dc
https://www.nag.com/numeric/nl/nagdoc_28.5/flhtml/s/s17dcf.html
Parameters
fnu : float
ν, the order of the first member of the sequence of functions.
z : complex
z, the argument of the functions.
n : int
N, the number of members required in the sequence Y_ν(z), Y_{ν+1}(z), …, Y_{ν+N-1}(z).
scal : str, length 1
The scaling option.
'U': the results are returned unscaled.
'S': the results are returned scaled by the factor exp(-|Im(z)|).
Returns
cy : complex, ndarray, shape (n)
The required function values: cy[i-1] contains Y_{ν+i-1}(z), for i = 1, 2, …, N.
nz : int
The number of components of cy that are set to zero due to underflow. The positions of such components in the array are arbitrary.
Raises
NagValueError
(errno )
On entry, .
(errno )
On entry, .
Constraint: .
(errno )
On entry, has an illegal value: .
(errno )
On entry, .
Constraint: .
(errno )
No computation because .
(errno )
No computation because , .
(errno )
No computation because is too large.
(errno )
No computation because .
(errno )
No computation because .
(errno )
No computation – algorithm termination condition not met.
Warns
NagAlgorithmicWarning
(errno )
Results lack precision because .
(errno )
Results lack precision because .
Notes
bessel_y_complex evaluates a sequence of values for the Bessel function Y_ν(z), where z is complex, z ≠ 0, and ν is the real, non-negative order. The N-member sequence is generated for orders ν, ν+1, …, ν+N-1. Optionally, the sequence is scaled by the factor exp(-|Im(z)|).
Note: although the function may not be called with ν less than zero, for negative orders the formula Y_{-ν}(z) = cos(νπ)Y_ν(z) + sin(νπ)J_ν(z) may be used (for the Bessel function J_ν(z), see bessel_j_complex()).
The function is derived from the function CBESY in Amos (1986). It is based on the relation Y_ν(z) = (H_ν^(1)(z) - H_ν^(2)(z))/(2i), where H_ν^(1) and H_ν^(2) are the Hankel functions of the first and second kinds respectively (see hankel_complex()).
When N is greater than 2, extra values of Y_ν(z) are computed using recurrence relations.
For very large |z| or ν + N - 1, argument reduction will cause total loss of accuracy, and so no computation is performed. For slightly smaller |z| or ν + N - 1, the computation is performed but results are accurate to less than half of machine precision. If |z| is very small, near the machine underflow threshold, or ν + N - 1 is too large, there is a risk of overflow and so no computation is performed. In all the above cases, a warning is given by the function.
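Since the NAG Library is commercial, here is a stdlib-only sketch of the recurrence strategy described above, restricted to real argument and integer order (the NAG routine itself handles complex z and real ν ≥ 0). Y_0 and Y_1 are computed from an integral representation (DLMF 10.9.7) and higher orders follow from the upward recurrence Y_{k+1}(x) = (2k/x) Y_k(x) - Y_{k-1}(x):

```python
import math

def _simpson(f, a, b, n=4000):
    """Composite Simpson's rule on [a, b] with n (even) subintervals."""
    h = (b - a) / n
    s = f(a) + f(b)
    for i in range(1, n):
        s += (4 if i % 2 else 2) * f(a + i * h)
    return s * h / 3.0

def bessel_y_int(n, x):
    """Y_n(x) for integer n >= 0 and real x > 0 via DLMF 10.9.7:
    the oscillatory integral over [0, pi] minus the exponential tail."""
    osc = _simpson(lambda t: math.sin(x * math.sin(t) - n * t), 0.0, math.pi)
    tail = _simpson(lambda t: (math.exp(n * t) + (-1) ** n * math.exp(-n * t))
                    * math.exp(-x * math.sinh(t)), 0.0, 12.0)
    return (osc - tail) / math.pi

def bessel_y_sequence(x, big_n):
    """Mimic the routine's strategy: compute Y_0 and Y_1 directly, then
    obtain higher orders from the upward recurrence."""
    ys = [bessel_y_int(0, x), bessel_y_int(1, x)]
    for k in range(1, big_n - 1):
        ys.append(2 * k / x * ys[k] - ys[k - 1])
    return ys[:big_n]
```

For example, `bessel_y_sequence(1.0, 3)` reproduces the tabulated values Y_0(1) ≈ 0.08826, Y_1(1) ≈ -0.78121 and Y_2(1) ≈ -1.65068. Note that for real x the upward recurrence is numerically stable for Y (unlike for J), which is why it can be used here.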
References
NIST Digital Library of Mathematical Functions
Amos, D E, 1986, Algorithm 644: A portable package for Bessel functions of a complex argument and non-negative order, ACM Trans. Math. Software (12), 265–273 |
# (6) Gas Refrigeration

Air is the working fluid of a Brayton refrigeration cycle with a compressor pressure ratio of 3. At the beginning of the compression, the temperature T1 = 270 K and pressure P1 = 140 kPa. The turbine inlet temperature is T3 = 320 K. You may assume both heat exchangers operate without pressure losses. Draw a T-s diagram for this ideal cycle. If the volumetric flow rate at State 3 is 0.4 m³/s, what is the mass flow rate of air? Determine the net power input, refrigeration capacity, and coefficient of performance.
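A cold-air-standard solution sketch follows. The property values (cp = 1.005 kJ/kg·K, k = 1.4, R = 0.287 kJ/kg·K) are standard assumptions for air, not given in the problem statement:

```python
# Cold-air-standard analysis of the ideal Brayton refrigeration cycle.
CP, K, R = 1.005, 1.4, 0.287        # kJ/kg.K, -, kJ/kg.K (assumed air properties)

rp = 3.0                            # compressor pressure ratio
T1, P1 = 270.0, 140.0               # K, kPa (compressor inlet, state 1)
T3 = 320.0                          # K (turbine inlet, state 3)
V3 = 0.4                            # m^3/s (volumetric flow at state 3)

P3 = rp * P1                        # kPa (no pressure losses in the exchangers)
T2 = T1 * rp ** ((K - 1) / K)       # isentropic compression, 1 -> 2
T4 = T3 / rp ** ((K - 1) / K)       # isentropic expansion, 3 -> 4

mdot = P3 * V3 / (R * T3)           # ideal gas: mdot = P*V/(R*T) ~ 1.83 kg/s
w_net = mdot * CP * ((T2 - T1) - (T3 - T4))   # net power input, kW
q_in = mdot * CP * (T1 - T4)        # refrigeration capacity, kW
cop = q_in / w_net                  # ~ 2.7 for an ideal cycle with rp = 3
```

On the T-s diagram the cycle is two isentropes (1-2 compression, 3-4 expansion) joined by two constant-pressure heat-transfer processes; the refrigeration effect occurs between states 4 and 1. The COP matches the ideal-cycle formula 1/(rp^((k-1)/k) - 1).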
Professor Mark Gieles
Professor of Astrophysics and Royal Society University Research Fellow (URF)
+44 (0)1483 683171
18 BC 03
Astrophysics Research Group.
Publications
Bastian N, Konstantopoulos IS, Trancho G, Weisz DR, Larsen SS, Fouesneau M, Kaschinski CB, Gieles M (2012) Spectroscopic Constraints on the Form of the Stellar Cluster Mass Function, Astronomy and Astrophysics
This contribution addresses the question of whether the initial cluster mass function (ICMF) has a fundamental limit (or truncation) at high masses. The shape of the ICMF at high masses can be studied using the most massive young clusters, although this method suffers from low-number statistics. In this contribution we use an alternative method based on the luminosities of the brightest clusters, combined with their ages. If a truncation is present, a generic prediction (nearly independent of the cluster disruption law adopted) is that the median age of bright clusters should be younger than that of fainter clusters. In the case of a non-truncated ICMF, the median age should be independent of cluster luminosity. Here, we present optical spectroscopy of twelve young stellar clusters in the face-on spiral galaxy NGC 2997. The spectra are used to estimate the age of each cluster, and the brightness of the clusters is taken from the literature. The observations are compared with the model expectations of Larsen (2009) for various ICMF forms and both mass-dependent and mass-independent cluster disruption. While there exists some degeneracy between the truncation mass and the amount of mass-independent disruption, the observations favour a truncated ICMF. For low or modest amounts of mass-independent disruption, a truncation mass of 5-6*10^5 Msun is estimated, consistent with previous determinations. Additionally, we investigate possible truncations in the ICMF in the spiral galaxy M83, the interacting Antennae galaxies, and the collection of spiral and dwarf galaxies present in Larsen (2009) based on photometric catalogues taken from the literature, and find that all catalogues are consistent with having an (environmentally dependent) truncation in the cluster mass functions.
Pijloo JT, Zwart SFP, Alexander PER, Gieles M, Larsen SS, Groot PJ, Devecchi B (2015) The initial conditions of observed star clusters - I. Method description and validation, MONTHLY NOTICES OF THE ROYAL ASTRONOMICAL SOCIETY 453 (1) pp. 605-637 OXFORD UNIV PRESS
Gieles M (2009) The early evolution of the star cluster mass function, MON NOT R ASTRON SOC
Several recent studies have shown that the star cluster initial mass function (CIMF) can be well approximated by a power law, with indications for a steepening or truncation at high masses. This contribution considers the evolution of such a mass function due to cluster disruption, with emphasis on the part of the mass function that is observable in the first ~Gyr. A Schechter-type function is used for the CIMF, with a power-law index of -2 at low masses and an exponential truncation at M*. Cluster disruption due to the tidal field of the host galaxy and encounters with giant molecular clouds flattens the low-mass end of the mass function, but there is always a part of the 'evolved Schechter function' that can be approximated by a power law with index -2. The mass range for which this holds depends on age, t, and shifts to higher masses roughly as t^0.6. Mean cluster masses derived from luminosity-limited samples increase with age very similarly due to the evolutionary fading of clusters. Empirical mass functions are, therefore, approximately power laws with index -2, or slightly steeper, at all ages. The results are illustrated by an application to the star cluster population of the interacting galaxy M51, which can be well described by a model with M*=(1.9+/-0.5)x10^5 M_sun and a short (mass-dependent) disruption time destroying M* clusters in roughly a Gyr.
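To illustrate the Schechter-type CIMF used in this abstract, here is a minimal sketch (arbitrary normalisation; M* set to the paper's M51 value of 1.9x10^5 Msun). Well below M* the function behaves as the pure power law of index -2; above M* the exponential truncation dominates:

```python
import math

def schechter_cimf(m, m_star=1.9e5, index=-2.0):
    """dN/dM for a Schechter-type cluster initial mass function:
    a power law with an exponential truncation at m_star
    (arbitrary normalisation)."""
    return m ** index * math.exp(-m / m_star)

# Well below M* the truncation barely matters: the ratio of dN/dM at
# 10^3 and 10^4 Msun stays close to the pure power-law value of 100.
ratio_low = schechter_cimf(1e3) / schechter_cimf(1e4)
# Above M* the exponential cut-off suppresses the function far below
# what the power law alone would give.
ratio_high = schechter_cimf(1e6) / schechter_cimf(1e5)
```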
Sabbi E, Lennon DJ, Gieles M, De Mink SE, Walborn NR, Anderson J, Bellini A, Panagia N, Van Der Marel R, Apellániz JM (2012) A double cluster at the core of 30 Doradus, Astrophysical Journal Letters 754 (2)
Based on an analysis of data obtained with the Wide Field Camera 3 on the Hubble Space Telescope we report the identification of two distinct stellar populations in the core of the giant H II region 30 Doradus in the Large Magellanic Cloud. The most compact and richest component coincides with the center of R136 and is 1 Myr younger than a second more diffuse clump, located 5.4 pc toward the northeast. We note that published spectral types of massive stars in these two clumps lend support to the proposed age difference. The morphology and age difference between the two sub-clusters suggests that an ongoing merger may be occurring within the core of 30 Doradus. This finding is consistent with the predictions of models of hierarchical fragmentation of turbulent giant molecular clouds, according to which star clusters would be the final products of merging smaller sub-structures. © 2012. The American Astronomical Society. All rights reserved..
Alexander PER, Gieles M (2013) Constraining the initial conditions of globular clusters using their radius distribution, Monthly Notices of the Royal Astronomical Society
Studies of extra-galactic globular clusters have shown that the peak size of
the globular cluster (GC) radius distribution (RD) depends only weakly on
galactic environment, and can be used as a standard ruler. We model RDs of GC
populations using a simple prescription for a Hubble time of relaxation driven
evolution of cluster mass and radius, and explore the conditions under which
the RD can be used as a standard ruler. We consider a power-law cluster initial
mass function (CIMF) with and without an exponential truncation, and focus in
particular on a flat and a steep CIMF (power-law indices of 0 and -2,
Roche-lobe filling conditions ('filling',meaning that the ratio of half-mass to
Jacobi radius is approximately rh/rJ ~ 0.15) or strongly Roche-lobe
under-filling conditions ('under-filling', implying that initially rh/rJ 0.15). Assuming a constant orbital velocity about the galaxy centre we find for
a steep CIMF that the typical half-light radius scales with galactocentric
radius RG as RG^1/3. This weak scaling is consistent with observations, but
this scenario has the (well known) problem that too many low-mass clusters
survive. A flat CIMF with 'filling' initial conditions results in the correct
mass function at old ages, but with too many large (massive) clusters at large
RG. An 'underfilling' GC population with a flat CIMF also results in the
correct mass function, and can also successfully reproduce the shape of the RD,
with a peak size that is (almost) independent of RG. In this case, the peak
size depends (almost) only on the peak mass of the GC mass function. The (near)
universality of the GC RD is therefore a consequence of the (near) universality of
the CIMF. There are some extended GCs in the outer halo of the Milky Way that
cannot be explained by this model.
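The quoted half-light-radius scaling, r_h ∝ RG^1/3, can be evaluated in a few lines. This is an illustrative sketch, not code from the paper; the normalisation (r_h = 3 pc at RG = 8 kpc) is an assumed example value.

```python
# Sketch of the weak r_h ~ R_G^(1/3) scaling quoted above.
# Assumed example normalisation: r_h = 3 pc at R_G = 8 kpc.

def half_light_radius(R_G_kpc, r0_pc=3.0, R0_kpc=8.0):
    """Half-light radius (pc) under the r_h proportional to R_G^(1/3) scaling."""
    return r0_pc * (R_G_kpc / R0_kpc) ** (1.0 / 3.0)

for R_G in (1.0, 8.0, 64.0):
    print(f"R_G = {R_G:5.1f} kpc  ->  r_h = {half_light_radius(R_G):.2f} pc")
```

A factor of 64 in galactocentric radius changes r_h by only a factor of 4, which is why the peak of the RD can act as an approximate standard ruler.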
Bastian N, Gieles M (2006) Cluster Disruption: Combining Theory and Observations,
We review the theory and observations of star cluster disruption. The three
main phases and corresponding typical timescales of cluster disruption are: I)
Infant Mortality (~10^7 yr), II) Stellar Evolution (~10^8 yr) and III) Tidal
relaxation (~10^9 yr). During all three phases there are additional tidal
external perturbations from the host galaxy. In this review we focus on the
physics and observations of Phase I and on population studies of Phases II &
III, and on external perturbations, concentrating on cluster-GMC interactions.
Particular attention is given to the successes and short-comings of the Lamers
cluster disruption law, which has recently been shown to stand on a firm
physical footing.
Renaud F, Gieles M (2013) The role of galaxy mergers on the evolution of star clusters, MONTHLY NOTICES OF THE ROYAL ASTRONOMICAL SOCIETY 431 (1) pp. L83-L87 OXFORD UNIV PRESS
Gieles M, Zocchi A (2015) A family of lowered isothermal models, MONTHLY NOTICES OF THE ROYAL ASTRONOMICAL SOCIETY 454 (1) pp. 576-592 OXFORD UNIV PRESS
Gieles M (2012) Dynamical expansion of star clusters, Astrophysics and Space Science Proceedings pp. 241-243
Bianchini P, Renaud F, Gieles M, Varri AL (2014) The inefficiency of satellite accretion in forming extended star
clusters,
The distinction between globular clusters and dwarf galaxies has been
progressively blurred by the recent discoveries of several extended star
clusters, with size (20-30 pc) and luminosity comparable to that of faint dwarf spheroidals. In order to explain their sparse structure, it
has been suggested that they formed as star clusters in dwarf galaxy satellites
that later accreted onto the Milky Way. If these clusters form in the centre of
dwarf galaxies, they evolve in a tidally-compressive environment where the
contribution of the tides to the virial balance can become significant, and
lead to a super-virial state and subsequent expansion of the cluster, once
removed. Using N-body simulations, we show that a cluster formed in such an
extreme environment undergoes a sizable expansion, during the drastic variation
of the external tidal field due to the accretion process. However, we show that
the expansion due to the removal of the compressive tides is not enough to
explain the observed extended structure, since the stellar systems resulting
from this process are always more compact than the corresponding clusters that
expand in isolation due to two-body relaxation. We conclude that an accreted
origin of extended globular clusters is unlikely to explain their large spatial
extent, and rather favor the hypothesis that such clusters are already extended
at the stage of their formation.
Bastian N, Adamo A, Schirmer M, Hollyhead K, Beletsky Y, Carraro G, Davies B, Gieles M, Silva-Villa E (2014) The effect of spatial resolution on optical and near-IR studies of
stellar clusters: Implications for the origin of the red excess,
Recent ground based near-IR studies of stellar clusters in nearby galaxies
have suggested that young clusters remain embedded for 7-10Myr in their
progenitor molecular cloud, in conflict with optical based studies which find
that clusters are exposed after 1-3Myr. Here, we investigate the role that
spatial resolution plays in this apparent conflict. We use a recent catalogue
of young, massive ($>5000$~\msun) clusters in the nearby spiral
galaxy, M83, along with Hubble Space Telescope (HST) imaging in the optical and
near-IR, and ground based near-IR imaging, to see how the colours (and hence
estimated properties such as age and extinction) are affected by the aperture
size employed, in order to simulate studies of differing resolution. We find
that the near-IR is heavily affected by the resolution, and when aperture sizes
$>40$~pc are used, all young/blue clusters move red-ward in colour space, which
results in their appearance as heavily extincted clusters. However, this is due
to contamination from nearby sources and nebular emission, and is not an
extinction effect. Optical colours are much less affected by resolution. Due to
the larger effect of contamination in the near-IR, we find that, in some cases,
clusters will appear to show near-IR excess when large ($>20$~pc) apertures are
used. Our results explain why few young, heavily extincted ($>1$~mag) clusters have been found in recent ground-based near-IR studies of
cluster populations, while many such clusters have been found in higher
resolution HST based studies. Additionally, resolution effects appear to (at
least partially) explain the origin of the near-IR excess that has been found
in a number of extragalactic YMCs.
Gieles M, Bastian N, Lamers HJGLM (2004) The Star Cluster Population of M51,
We present the age and mass distribution of star clusters in M51. The
structural parameters are found by fitting cluster evolution models to the
spectral energy distribution consisting of 8 HST-WFPC2 pass bands. There is
evidence for a burst of cluster formation at the moment of the second encounter
with the companion NGC5195 (50-100 Myr ago) and a hint for an earlier burst
(400-500 Myr ago). The cluster
IMF has a power-law slope of -2.1. The disruption time of clusters is
extremely short.
Bastian N, Gieles M, Ercolano B, Gutermuth R (2008) The Spatial Evolution of Stellar Structures in the LMC/SMC, Monthly Notices of the Royal Astronomical Society
We present an analysis of the spatial distribution of various stellar
populations within the Large and Small Magellanic Clouds. We use optically
selected stellar samples with mean ages between ~9 and ~1000 Myr, and existing
stellar cluster catalogues to investigate how stellar structures form and
evolve within the LMC/SMC. We use two statistical techniques to study the
evolution of structure within these galaxies, the $Q$-parameter and the
two-point correlation function (TPCF). In both galaxies we find the stars are
born with a high degree of substructure (i.e. are highly fractal) and that the
stellar distribution approaches that of the 'background' population on
timescales similar to the crossing times of the galaxy (~80/150 Myr for the
SMC/LMC respectively). By comparing our observations to simple models of
structural evolution we find that 'popping star clusters' do not significantly
influence structural evolution in these galaxies. Instead we argue that general
galactic dynamics are the main drivers, and that substructure will be erased in
approximately the crossing time, regardless of spatial scale, from small
clusters to whole galaxies. This can explain why many young Galactic clusters
have high degrees of substructure, while others are smooth and centrally
concentrated. We conclude with a general discussion on cluster 'infant
mortality', in an attempt to clarify the time/spatial scales involved.
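The two-point correlation function compares the pair-separation distribution of a stellar sample against a uniform reference field. The toy sketch below (synthetic points, standard library only; not the survey's actual pipeline) shows the excess of close pairs that a substructured, clumpy distribution produces:

```python
# Toy illustration of the pair-separation statistic behind the TPCF:
# a clumpy (substructured) sample shows an excess of close pairs
# relative to a uniform random field. All data here are synthetic.
import math
import random

def close_pair_fraction(points, r=0.1):
    """Fraction of point pairs separated by less than r."""
    seps = [math.dist(p, q) for i, p in enumerate(points) for q in points[i + 1:]]
    return sum(s < r for s in seps) / len(seps)

random.seed(1)
# Substructured sample: two tight clumps in the unit square.
clumpy = [(random.gauss(0.2, 0.02), random.gauss(0.2, 0.02)) for _ in range(50)] \
       + [(random.gauss(0.8, 0.02), random.gauss(0.8, 0.02)) for _ in range(50)]
# Reference sample: uniform random positions.
uniform = [(random.random(), random.random()) for _ in range(100)]

print(f"clumpy:  {close_pair_fraction(clumpy):.3f}")
print(f"uniform: {close_pair_fraction(uniform):.3f}")
```

As substructure is erased on a crossing time, the close-pair excess decays towards the uniform value.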
Bastian N, Gieles M, Efremov YN, Lamers HJGLM (2005) Hierarchical Star Formation in M51: Star/Cluster Complexes,
We report on a study of young star cluster complexes in the spiral galaxy
M51. Recent studies have confirmed that star clusters do not form in isolation,
but instead tend to form in larger groupings or complexes. We use {\it HST}
broad and narrow band images (from both {\it WFPC2} and {\it ACS}), along with
{\it BIMA}-CO observations to study the properties and investigate the origin
of these complexes. We find that the complexes are all young, have sizes between $\sim$85 and $\sim$240 pc, and have masses between 3-30$\times
10^{4} M_{\odot}$. Unlike that found for isolated young star clusters, we find a strong correlation between the complex mass and radius, namely $M\propto
R^{2.33 \pm 0.19}$. This is similar to that found for giant molecular clouds (GMCs). By comparing the mass-radius relation of GMCs in M51 to that of the complexes we can estimate the star formation efficiency within the complexes, although this value is heavily dependent on the assumed CO-to-H$_2$ conversion factor. The complexes studied here have the same surface density distribution as individual young star clusters and GMCs. If star formation within the complexes is proportional to the gas density at that point, then the shared mass-radius relation of GMCs and complexes is a natural consequence of their shared density profiles. We briefly discuss possibilities for the lack of a mass-radius relation for young star clusters. We note that many of the complexes show evidence of merging of star clusters in their centres, suggesting that larger star clusters can be produced through the build up of smaller clusters.
Gieles M, Zwart SFP, Baumgardt H, Athanassoula E, Lamers HJGLM, Sipior M, Leenaarts J (2006) Star cluster disruption by giant molecular clouds, Mon.Not.Roy.Astron.Soc. 371 pp. 793-804
We investigate encounters between giant molecular clouds (GMCs) and star clusters. We propose a single expression for the energy gain of a cluster due to an encounter with a GMC, valid for all encounter distances and GMC properties. This relation is verified with N-body simulations of cluster-GMC encounters and excellent agreement is found. The fractional mass loss from the cluster is 0.25 times the fractional energy gain. This is because 75% of the injected energy goes to the velocities of escaping stars, that are higher than the escape velocity. We derive an expression for the cluster disruption time (t_dis) based on the mass loss from the simulations, taking into account the effect of gravitational focusing by the GMC.
The disruption time depends on the cluster mass (M_c) and half-mass radius (r_h) as t_dis = 2.0 S (M_c/10^4 M_sun)(3.75 pc/r_h)^3 Gyr, with S=1 for the solar neighbourhood and inversely proportional to the GMC density. The observed shallow relation between cluster radius and mass gives t_dis a power-law dependence on the mass with index 0.7, similar to that found from observations and from simulations of clusters dissolving in tidal fields (0.62). The constant of 2.0 Gyr is about a factor of 3.5 shorter than found from earlier simulations of clusters dissolving under the combined effect of galactic tidal field and stellar evolution. It is somewhat higher than the observationally determined value of 1.3 Gyr. It suggests, however, that the combined effect of tidal field and encounters with GMCs can explain the lack of old open clusters in the solar neighbourhood. GMC encounters can also explain the (very) short disruption time that was observed for star clusters in the central region of M51, since there rho_n is an order of magnitude higher than in the solar neighbourhood.
Bastian N, Gieles M, Ercolano B, Gutermuth R (2008) The Spatial Evolution of Stellar Structures in the LMC,
We present an analysis of the spatial distribution of various stellar populations within the Large Magellanic Cloud. We combine mid-infrared selected young stellar objects, optically selected samples with mean ages between ~9 and ~1000 Myr, and existing stellar cluster catalogues to investigate how stellar structures form and evolve within the LMC. For the analysis we use Fractured Minimum Spanning Trees, the statistical Q parameter, and the two-point correlation function. Restricting our analysis to young massive (OB) stars we confirm our results obtained for M33, namely that the luminosity function of the groups is well described by a power-law with index -2, and that there is no characteristic length-scale of star-forming regions.
We find that stars in the LMC are born with a large amount of substructure, consistent with a 2D fractal distribution with dimension ~1.8, and evolve towards a uniform distribution on a timescale of ~175 Myr. This is comparable to the crossing time of the galaxy and we suggest that stellar structure, regardless of spatial scale, will be eliminated in a crossing time. This may explain the smooth distribution of stars in massive/dense young clusters in the Galaxy, while other, less massive, clusters still display large amounts of structure at similar ages. By comparing the stellar and star cluster distributions and evolving timescales, we show that infant mortality of clusters (or 'popping clusters') has a negligible influence on galactic structure. Finally, we quantify the influence of the elongation, differential extinction, and contamination of a population on the measured Q value.
Haas MR, Gieles M, Scheepmaker RA, Larsen SS, Lamers HJGLM, Bastian N (2006) Variation of the cluster luminosity function across the disk of M51,
We study the luminosity function (LF) of the star clusters in M51. Comparing the observed LF with the LF resulting from artificial cluster populations suggests that there exists an upper mass limit for clusters and that this limit and/or the cluster disruption varies with galactocentric distance.
Gieles M (2007) Conference summary: Mass loss from stellar clusters,
This conference dealt with the mass loss from stars and from stellar clusters. In this summary of the cluster section of the conference, I highlight some of the results on the formation and the fundamental properties of star clusters (Sect. 2) and the early stages of their evolution (Sect. 3), and go into more detail on the subsequent mass evolution of clusters (Sect. 4). A discussion of how this may, or may not, depend on mass is given in Sect. 5. Obviously, there will be a bias towards the topics where Henny Lamers has contributed.
Some of the contributions to these proceedings have already reviewed the topics of cluster mass loss and disruption extensively, so I will try to fit these into a general framework as much as possible.
Melo C, Downing M, Jorden P, Pasquini L, Deires S, Kelt A, Naef D, Hanuschik R, Palsa R, Castillo R, Pena E, Bendek E, Gieles M (2008) Detector upgrade for FLAMES: GIRAFFE gets red eyes, Proceedings of SPIE - The International Society for Optical Engineering 7014
GIRAFFE is an intermediate-resolution spectrograph covering a wavelength range from 360-930 nm and fed by optical fibers as part of FLAMES, the multi-object fiber facility mounted at the ESO VLT Kueyen. For some time we sought a new detector for the GIRAFFE spectrograph to boost the instrument's red QE (Quantum Efficiency) capabilities, while still retaining very good blue response. We also aimed at reducing the strong fringing present in the red spectra. The adopted solution was an e2v custom 2-layer AR (Anti-Reflection) coated Deep Depletion CCD44-82 CCD. This device was made in a new e2v Technologies AR coating plant and delivered to ESO in mid 2007 with performance that matches predictions. The new CCD was commissioned in May 2008. Here we report on the results.
Gieles M (2009) Star cluster disruption, Proceedings of the International Astronomical Union
Star clusters are often used as tracers of major star formation events in external galaxies as they can be studied up to much larger distances than individual stars. It is vital to understand their evolution if they are used to derive, for example, the star formation history of their host galaxy. More specifically, we want to know how cluster lifetimes depend on their environment and structural properties such as mass and radius. This review presents a theoretical overview of the early evolution of star clusters and the consequent long-term survival chances.
It is suggested that clusters forming with initial densities of >10^4 Msun pc^-3 survive the gas expulsion, or "infant mortality", phase. At ~10 Myr they are bound and have densities of 10^{3+/-1} Msun pc^-3. After this time they are stable against expansion by stellar evolution and encounters with giant molecular clouds, and will most likely survive for another Hubble time if they are in a moderate tidal field. Clusters with lower initial densities dissolve within tens of Myrs. Some discussion is provided on how extragalactic star cluster populations, and especially their age distributions, can be used to gain insight into disruption.
Gieles M (2008) What determines the mass of the most massive star cluster in a galaxy: statistics, physics or disruption?, Astrophys. Space Sci. 324: 299-304, 2009
In many different galactic environments the cluster initial mass function (CIMF) is well described by a power-law with index -2. This implies a linear relation between the mass of the most massive cluster (M_max) and the number of clusters. Assuming a constant cluster formation rate and no disruption of the most massive clusters, it also means that M_max increases linearly with age when determining M_max in logarithmic age bins. We observe this increase in five out of the seven galaxies in our sample, suggesting that M_max is determined by the size of the sample. It also means that massive clusters are very stable against disruption, in disagreement with the mass-independent disruption (MID) model presented at this conference. For the clusters in M51 and the Antennae galaxies the size-of-sample prediction breaks down around 10^6 M_sun, suggesting that this is a physical upper limit to the masses of star clusters in these galaxies. In this method there is a degeneracy between MID and a CIMF truncation. We show how the cluster luminosity function can serve as a tool to distinguish between the two.
Gieles M, Lamers H, Baumgardt H (2007) Star cluster life-times: dependence on mass, radius and environment,
The dissolution time (t_dis) of clusters in a tidal field does not scale with the `classical' expression for the relaxation time. First, the scaling with N, and hence cluster mass, is shallower due to the finite escape time of stars. Secondly, the cluster half-mass radius is of little importance. This is due to a balance between the relative tidal field strength and internal relaxation, which have an opposite effect on t_dis, but of similar magnitude. When external perturbations, such as encounters with giant molecular clouds (GMCs), are important, t_dis for an individual cluster depends strongly on radius. The mean dissolution time for a population of clusters, however, scales in the same way with mass as for the tidal field, due to the weak dependence of radius on mass. The environmental parameters that determine t_dis are the tidal field strength and the density of molecular gas. We compare the empirically derived t_dis of clusters in six galaxies to theoretical predictions and argue that encounters with GMCs are the dominant destruction mechanism. Finally, we discuss a number of pitfalls in the derivation of t_dis from observations, such as incompleteness, with the cluster system of the SMC as a particular example.
Gieles M, Baumgardt H (2008) Lifetimes of tidally limited star clusters with different radii, MNRAS, 2008, 389, L28
We study the escape rate, dN/dt, from clusters with different radii in a tidal field using analytical predictions and direct N-body simulations. We find that dN/dt depends on the ratio R=r_h/r_j, where r_h is the half-mass radius and r_j the radius of the zero-velocity surface. For R>0.05, the "tidal regime", there is almost no dependence of dN/dt on R. To first order this is because the fraction of escapers per relaxation time, t_rh, scales approximately as R^1.5, which cancels out the r_h^1.5 term in t_rh.
For R<0.05, the "isolated regime", dN/dt scales as R^-1.5. Clusters that start with their initial R, Ri, in the tidal regime dissolve completely in this regime and their t_dis is insensitive to the initial r_h. We predict that for clusters that start with Ri in the isolated regime, t_dis has a shallower dependence on Ri than what would be expected when t_dis is a constant times t_rh. For realistic values of Ri, the lifetime varies by less than a factor of 1.5 due to changes in Ri. This implies that the "survival" diagram for globular clusters should allow for more small clusters to survive. We note that with our result it is impossible to explain the universal peaked mass function of globular cluster systems by dynamical evolution from a power-law initial mass function, since the peak will be at lower masses in the outer parts of galaxies. Our results finally show that in the tidal regime t_dis scales as N^0.65/w, with w the angular frequency of the cluster in the host galaxy. [ABRIDGED]
Silva GMD, D'Orazi V, Melo C, Torres CAO, Gieles M, Quast GR, Sterzik M (2013) Search for Associations Containing Young stars (SACY): Chemical tagging IC 2391 & the Argus association, Monthly Notices of the Royal Astronomical Society
We explore the possible connection between the open cluster IC 2391 and the unbound Argus association identified by the SACY survey. In addition to common kinematics and ages between these two systems, here we explore their chemical abundance patterns to confirm whether the two substructures shared a common origin. We carry out a homogeneous high-resolution elemental abundance study of eight confirmed members of IC 2391 as well as six members of the Argus association using UVES spectra. We derive spectroscopic stellar parameters and abundances for Fe, Na, Mg, Al, Si, Ca, Ti, Cr, Ni and Ba.
All stars in the open cluster and Argus association were found to share similar abundances, with the scatter well within the uncertainties, where [Fe/H] = -0.04 +/- 0.03 for cluster stars and [Fe/H] = -0.06 +/- 0.05 for Argus stars. Effects of over-ionisation/excitation were seen for stars cooler than roughly 5200 K, as previously noted in the literature. Also, enhanced Ba abundances of around 0.6 dex were observed in both systems. The common ages, kinematics and chemical abundances strongly support that the Argus association stars originated from the open cluster IC 2391. Simple modelling of this system finds this dissolution to be consistent with two-body interactions.
Sana H, de Mink SE, de Koter A, Langer N, Evans CJ, Gieles M, Gosset E, Izzard Robert, Le Bouquin JB, Schneider FR (2012) Binary interaction dominates the evolution of massive stars., Science 337 (6093) pp. 444-446
The presence of a nearby companion alters the evolution of massive stars in binary systems, leading to phenomena such as stellar mergers, x-ray binaries, and gamma-ray bursts. Unambiguous constraints on the fraction of massive stars affected by binary interaction were lacking. We simultaneously measured all relevant binary characteristics in a sample of Galactic massive O stars and quantified the frequency and nature of binary interactions. More than 70% of all massive stars will exchange mass with a companion, leading to a binary merger in one-third of the cases. These numbers greatly exceed previous estimates and imply that binary interaction dominates the evolution of massive stars, with implications for populations of massive stars and their supernovae.
Renaud F, Gieles M (2015) A flexible method to evolve collisional systems and their tidal debris in external potentials,
We introduce a numerical method to integrate tidal effects on collisional systems, using any definition of the external potential as a function of space and time.
Rather than using a linearisation of the tidal field, this new method follows a differential technique to numerically evaluate the tidal acceleration and its time derivative. These are then used to integrate the motions of the components of the collisional systems, like stars in star clusters, using a predictor-corrector scheme. The versatility of this approach allows the study of star clusters, including their tidal tails, in complex, multi-component, time-evolving external potentials. The method is implemented in the code nbody6 (Aarseth 2003).
Bastian N, Gieles M, Lamers HJGLM, Grijs RD, Scheepmaker RA (2004) The Star Cluster Population of M51: II. Age distribution and relations among the derived parameters,
We use archival {\it Hubble Space Telescope} observations of broad-band images from the ultraviolet (F255W-filter) through the near infrared (NICMOS F160W-filter) to study the star cluster population of the interacting spiral galaxy M51. We obtain age, mass, extinction, and effective radius estimates for 1152 star clusters in a region of $\sim 7.3 \times 8.1$ kpc centered on the nucleus and extending into the outer spiral arms. In this paper we present the data set and exploit it to determine the age distribution and relationships among the fundamental parameters (i.e. age, mass, effective radius). Using this dataset we find: {\it i}) that the cluster formation rate seems to have had a large increase $\sim$50-70 Myr ago, which is coincident with the suggested {\it second passage} of its companion, NGC 5195; {\it ii}) a large number of extremely young, presumably unbound clusters, of which a large majority will disrupt within the next
$\sim$10 Myr, and {\it iii)} that the distribution of cluster sizes can be well
approximated by a power-law with exponent, $-\eta = -2.2 \pm 0.2$, which is
very similar to that of Galactic globular clusters, indicating that cluster
disruption is largely independent of cluster radius. In addition, we have used
this dataset to search for correlations among the derived parameters. In
particular, we do not find any strong trends between the age and mass, or mass and radius of the clusters.
There is, however, a strong correlation between the age of a cluster and its
extinction, with younger clusters being more heavily reddened than older
clusters.
Gieles M, Baumgardt H, Heggie D, Lamers H (2010) On the mass-radius relation of hot stellar systems, Mon. Not. R. Astron. Soc. 408, L16-L20 (2010)
Most globular clusters have half-mass radii of a few pc with no apparent
correlation with their masses. This is different from elliptical galaxies, for
which the Faber-Jackson relation suggests a strong positive correlation between
mass and radius. Objects that are somewhat in between globular clusters and
low-mass galaxies, such as ultra-compact dwarf galaxies, have a mass-radius
relation consistent with the extension of the relation for bright ellipticals.
Here we show that at an age of 10 Gyr a break in the mass-radius relation at
~10^6 Msun is established because objects below this mass, i.e. globular
clusters, have undergone expansion driven by stellar evolution and hard
binaries. From numerical simulations we find that the combined energy
production of these two effects in the core comes into balance with the flux of
energy that is conducted across the half-mass radius by relaxation. An
important property of this `balanced' evolution is that the cluster half-mass
radius is independent of its initial value and is a function of the number of
bound stars and the age only. It is therefore not possible to infer the initial
mass-radius relation of globular clusters and we can only conclude that the
present day properties are consistent with the hypothesis that all hot stellar
systems formed with the same mass-radius relation and that globular clusters
have moved away from this relation because of a Hubble time of stellar and
dynamical evolution.
Evans CJ, Bastian N, Beletsky Y, Brott I, Cantiello M, Clark JS, Crowther PA, De Koter A, De Mink SE, Dufton PL, Dunstall P, Gieles M, Gräfener G, Hénault-Brunet V, Herrero A, Howarth ID, Langer N, Lennon DJ, Maíz Apellániz J, Markova N, Najarro F, Puls J, Sana H, Simon-Díaz S, Smartt SJ, Stroud VE, Taylor WD, Trundle C, Van Loon JT, Vink JS, Walborn NR (2009) The VLT-flames tarantula survey, Proceedings of the International Astronomical Union 5 (S266) pp. 35-40
The Tarantula Survey is an ambitious ESO Large Programme that has obtained multi-epoch spectroscopy of over 1000 massive stars in the 30 Doradus region in the Large Magellanic Cloud. Here, we introduce the scientific motivations of the survey and give an overview of the observational sample. Ultimately, quantitative analysis of every star, paying particular attention to the effects of rotational mixing and binarity, will be used to address fundamental questions in both stellar and cluster evolution. © International Astronomical Union 2010.
Hénault-Brunet V, Evans CJ, Sana H, Gieles M, Bastian N, Apellániz JM, Markova N, Taylor WD, Bressert E, Crowther PA, Loon JTV (2012) The VLT-FLAMES Tarantula Survey. VII. A low velocity dispersion for the
young massive cluster R136,
Astronomy and Astrophysics
Detailed studies of resolved young massive star clusters are necessary to
determine their dynamical state and evaluate the importance of gas expulsion
and early cluster evolution. In an effort to gain insight into the dynamical
state of the young massive cluster R136 and obtain the first measurement of its
velocity dispersion, we analyse multi-epoch spectroscopic data of the inner
regions of 30 Doradus in the Large Magellanic Cloud (LMC) obtained as part of
the VLT-FLAMES Tarantula Survey. Following a quantitative assessment of the
variability, we use the radial velocities of non-variable sources to place an
upper limit of 6 km/s on the line-of-sight velocity dispersion of stars within
a projected distance of 5 pc from the centre of the cluster. After accounting
for the contributions of undetected binaries and measurement errors through
Monte Carlo simulations, we conclude that the true velocity dispersion is
likely between 4 and 5 km/s given a range of standard assumptions about the
binary distribution. This result is consistent with what is expected if the
cluster is in virial equilibrium, suggesting that gas expulsion has not altered
its dynamics. We find that the velocity dispersion would be ~25 km/s if
binaries were not identified and rejected, confirming the importance of the
multi-epoch strategy and the risk of interpreting velocity dispersion
measurements of unresolved extragalactic young massive clusters.
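The binary-inflation effect quantified above lends itself to a quick Monte Carlo illustration, in the spirit of (but not identical to) the survey's correction. The binary fraction and orbital-velocity scale below are assumed round numbers, not the survey's fitted values:

```python
# Toy Monte Carlo: single-epoch radial velocities of binaries carry an
# orbital-motion offset on top of the cluster motion, inflating the
# observed velocity dispersion well above the true value.
# BINARY_FRACTION and ORBITAL_SIGMA are assumed illustrative values.
import random
import statistics

random.seed(42)
TRUE_SIGMA = 5.0       # km/s, intrinsic cluster dispersion
BINARY_FRACTION = 0.5  # assumed fraction of unresolved binaries
ORBITAL_SIGMA = 30.0   # km/s, assumed orbital-velocity scale
N_STARS = 10000

velocities = []
for _ in range(N_STARS):
    v = random.gauss(0.0, TRUE_SIGMA)           # cluster motion
    if random.random() < BINARY_FRACTION:
        v += random.gauss(0.0, ORBITAL_SIGMA)   # binary orbital offset
    velocities.append(v)

observed = statistics.pstdev(velocities)
print(f"true sigma = {TRUE_SIGMA:.1f} km/s, observed sigma = {observed:.1f} km/s")
```

With these toy numbers the observed dispersion comes out around sqrt(5^2 + 0.5*30^2) ≈ 22 km/s, several times the true value, which is why the multi-epoch identification and rejection of binaries matters.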
Baumgardt H, Parmentier G, Gieles M, Vesperini E (2009) Evidence for two populations of Galactic globular clusters from the
ratio of their half-mass to Jacobi radii,
MON NOT R ASTRON SOC
We investigate the ratio between the half-mass radii r_h of Galactic globular
clusters and their Jacobi radii r_J given by the potential of the Milky Way and
show that clusters with galactocentric distances R_{GC}>8 kpc fall into two
distinct groups: one group of compact, tidally-underfilling clusters with
small r_h/r_J, and one group of tidally filling clusters with larger r_h/r_J. There is a connection between the assignment of a
cluster to one of these groups and its membership in the old or younger halo
population. Based on the relaxation times and orbits of the clusters, we argue
that compact clusters and most clusters in the inner Milky Way were born
compact, with small half-mass radii. Clusters in the outer halo might have formed compact as well, but the majority likely formed with large
half-mass radii. Galactic globular clusters therefore show a similar dichotomy
as was recently found for globular clusters in dwarf galaxies and for young
star clusters in the Milky Way. It seems likely that some of the
tidally-filling clusters are evolving along the main sequence line of clusters
recently discovered by Kuepper et al. (2008) and are in the process of
dissolution.
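The filling ratio r_h/r_J used above can be sketched with the standard Jacobi-radius expression for a cluster on a circular orbit in a galaxy with a flat rotation curve, r_J = (G M_c / (2 Omega^2))^(1/3) with Omega = V_c/R_G. This is a textbook formula, not this paper's code, and the cluster and orbit values below are illustrative assumptions:

```python
# Sketch: Jacobi radius and filling ratio r_h/r_J for a cluster on a
# circular orbit in a flat-rotation-curve galaxy (standard formula,
# illustrative example values).
G = 4.30e-3  # gravitational constant in pc (km/s)^2 / Msun

def jacobi_radius_pc(M_c_msun, R_G_pc, V_c_kms=220.0):
    """Jacobi radius r_J = (G M_c / (2 Omega^2))^(1/3), Omega = V_c / R_G."""
    omega = V_c_kms / R_G_pc  # angular frequency in (km/s)/pc
    return (G * M_c_msun / (2.0 * omega ** 2)) ** (1.0 / 3.0)

M_c = 1.0e5    # Msun, assumed cluster mass
R_G = 8000.0   # pc, galactocentric radius
r_h = 3.0      # pc, assumed half-mass radius
r_J = jacobi_radius_pc(M_c, R_G)
print(f"r_J = {r_J:.1f} pc, r_h/r_J = {r_h / r_J:.3f}")
```

With these numbers r_h/r_J ≈ 0.05, i.e. on the compact, tidally under-filling side of the dichotomy; a cluster with a half-mass radius several times larger on the same orbit would be tidally filling.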
Alexander P, Gieles M, Lamers H, Baumgardt H (2014) A prescription and fast code for the long-term evolution of star
clusters - III. Unequal masses and stellar evolution,
We present a new version of the fast star cluster evolution code Evolve Me A
Cluster of StarS (EMACSS). While previous versions of EMACSS reproduced
clusters of single-mass stars, this version models clusters with an evolving
stellar content. Stellar evolution dominates early evolution, and leads to: (1)
reduction of the mean mass of stars due to the mass loss of high-mass stars;
(2) expansion of the half-mass radius; (3) for (nearly) Roche Volume filling
clusters, the induced escape of stars. Once sufficient relaxation has occurred
(~10 relaxation time-scales), clusters reach a second, 'balanced' state
whereby the core releases energy as required by the cluster as a whole. In this
state: (1) stars escape due to tidal effects faster than before balanced
evolution; (2) the half-mass radius expands or contracts depending on the Roche
volume filling factor; and (3) the mean mass of stars increases due to the
preferential ejection of low-mass stars.
We compare the EMACSS results of several cluster properties against N-body
simulations of clusters spanning a range of initial number of stars, mass,
half-mass radius, and tidal environments, and show that our prescription
accurately predicts cluster evolution for this database. Finally, we consider
applications for EMACSS, such as studies of galactic globular cluster
populations in cosmological simulations.
Gieles M, Bastian N, Lamers H, Mout J (2005) The Star Cluster Population of M51: III. Cluster disruption and
formation history,
In this work we concentrate on the evolution of the cluster population of the
interacting galaxy M51 (NGC 5194), namely the timescale of cluster disruption
and possible variations in the cluster formation rate. We present a method to
compare observed age vs. mass number density diagrams with predicted
populations including various physical input parameters like the cluster
initial mass function, cluster disruption, cluster formation rate and star
bursts. If we assume that the cluster formation rate increases at the moments
of the encounters with NGC 5195, we find an increase in the cluster formation
rate of a factor of 3, combined with a disruption timescale which is slightly
higher than when assuming a constant formation rate (t_4 = 200 Myr vs. 100
Myr). The measured cluster disruption time is a factor of 5 shorter than
expected on theoretical grounds. This implies that the disk of M51 is not a
preferred location for survival of young globular clusters, since even clusters
with masses of the order of 10^6 M_sun will be destroyed within a few Gyr.
Sana H, Koter AD, Mink SED, Dunstall PR, Evans CJ, Henault-Brunet V, Apellaniz JM, Ramirez-Agudelo OH, Taylor WD, Walborn NR, Clark JS, Crowther PA, Herrero A, Gieles M, Langer N, Lennon DJ, Vink JS (2012) The VLT-FLAMES Tarantula Survey VIII. Multiplicity properties of the
O-type star population,
Astronomy & Astrophysics
Aims. We analyze the multiplicity properties of the massive O-type star
population. With 360 O-type stars, this is the largest homogeneous sample of
massive stars analyzed to date.
Methods. We use multi-epoch spectroscopy and variability analysis to identify
spectroscopic binaries. We also use a Monte-Carlo method to correct for
observational biases.
Results. We observe a spectroscopic binary fraction of 0.35\pm0.03, which
corresponds to the fraction of objects displaying statistically significant
radial velocity variations with an amplitude of at least 20km/s. We compute the
intrinsic binary fraction to be 0.51\pm0.04. We adopt power-laws to describe
the intrinsic period and mass-ratio distributions: f_P ~ (log P)^\pi (with
0.15 < log P < 3.5) and f_q ~ q^\kappa. The power-law indexes that best
reproduce the observed quantities are \pi = -0.45\pm0.30 and \kappa =
-1.0\pm0.4. The obtained period distribution thus favours shorter period
systems compared to an Öpik law. The mass ratio distribution is
slightly skewed towards low mass ratio systems but remains incompatible with a
random sampling of a classical mass function. The binary fraction seems mostly
uniform across the field of view and independent of the spectral types and
luminosity classes. The binary fraction in the outer region of the field of
view (r > 7.8', i.e. approximately 117 pc) and among the O9.7 I/II objects are however
significantly lower than expected from statistical fluctuations.
Conclusions. Using simple evolutionary considerations, we estimate that over
50% of the current O star population in 30 Dor will exchange mass with its
companion within a binary system. This shows that binary interaction is greatly
affecting the evolution and fate of massive stars, and must be taken into
account to correctly interpret unresolved populations of massive stars.
Bastian N, Adamo A, Gieles M, Lamers HJGLM, Larsen SS, Silva-Villa E, Smith LJ, Kotulla R, Konstantopoulos IS, Trancho G, Zackrisson E (2011) Evidence for environmentally dependent cluster disruption in M83, Monthly Notices of the Royal Astronomical Society: Letters 417 (1)
Using multiwavelength imaging from the Wide Field Camera 3 on the Hubble Space Telescope we study the stellar cluster populations of two adjacent fields in the nearby face-on spiral galaxy, M83. The observations cover the galactic centre and reach out to ~6kpc, thereby spanning a large range of environmental conditions, ideal for testing empirical laws of cluster disruption. The clusters are selected by visual inspection to be centrally concentrated, symmetric and resolved on the images. We find that a large fraction of objects detected by automated algorithms (e.g. SExtractor or daofind) are not clusters, but rather are associations. These are likely to disperse into the field on time-scales of tens of Myr due to their lower stellar densities and not due to gas expulsion (i.e. they were never gravitationally bound). We split the sample into two discrete fields (inner and outer regions of the galaxy) and search for evidence of environmentally dependent cluster disruption. Colour-colour diagrams of the clusters, when compared to simple stellar population models, already indicate that a much larger fraction of the clusters in the outer field are older by tens of Myr than in the inner field. This impression is quantified by estimating each cluster's properties (age, mass and extinction) and comparing the age/mass distributions between the two fields. Our results are inconsistent with 'universal' age and mass distributions of clusters, and instead show that the ambient environment strongly affects the observed populations.
Gieles M, Moeckel N, Clarke CJ (2012) Do all stars in the solar neighbourhood form in clusters? A cautionary
note on the use of the distribution of surface densities,
Mon. Not. R. Astron. Soc. 426, L11-15 (2012)
Bressert et al. recently showed that the surface density distribution of
low-mass, young stellar objects (YSOs) in the solar neighbourhood is
approximately log-normal. The authors conclude that the star formation process
is hierarchical and that only a small fraction of stars form in dense star
clusters. Here, we show that the peak and the width of the density distribution
are also what follow if all stars form in bound clusters which are not
significantly affected by the presence of gas and expand by two-body
relaxation. The peak of the surface density distribution is simply obtained
from the typical ages (few Myr) and cluster membership number (few hundred)
typifying nearby star-forming regions. This result depends weakly on initial
cluster sizes, provided that they are sufficiently dense initially for
two-body relaxation to drive expansion within a few Myr. We conclude that the
degeneracy of the YSO surface density distribution complicates its use as a
diagnostic of the star formation environment.
Hénault-Brunet V, Gieles M, Agertz O, Read JI (2015) Multiple populations in globular clusters: the distinct kinematic
imprints of different formation scenarios,
Several scenarios have been proposed to explain the presence of multiple
stellar populations in globular clusters. Many of them invoke multiple
generations of stars to explain the observed chemical abundance anomalies, but
it has also been suggested that self-enrichment could occur via accretion of
ejecta from massive stars onto the circumstellar disc of low-mass pre-main
sequence stars. These scenarios imply different initial conditions for the
kinematics of the various stellar populations. Given some net angular momentum
initially, models for which a second generation forms from gas that collects in
a cooling flow into the core of the cluster predict an initially larger
rotational amplitude for the polluted stars compared to the pristine stars.
This is opposite to what is expected from the accretion model, where the
polluted stars are the ones crossing the core and are on preferentially radial
(low-angular momentum) orbits, such that their rotational amplitude is lower.
Here we present the results of a suite of $N$-body simulations with initial
conditions chosen to capture the distinct kinematic properties of these
pollution scenarios. We show that initial differences in the kinematics of
polluted and pristine stars can survive to the present epoch in the outer parts
of a large fraction of Galactic globular clusters. The differential rotation of
pristine and polluted stars is identified as a unique kinematic signature that
could allow us to distinguish between various scenarios, while other kinematic
imprints are generally very similar from one scenario to the other.
Lamers HJGLM, Baumgardt H, Gieles M (2010) Mass loss rates and the mass evolution of star clusters, Monthly Notices of the Royal Astronomical Society
We describe the interplay between stellar evolution and dynamical mass loss
of evolving star clusters, based on the principles of stellar evolution and
cluster dynamics and on a grid of N-body simulations of cluster models. The
cluster models have different initial masses, different orbits, including
elliptical ones, and different initial density profiles. We use two sets of
cluster models: initially Roche-lobe filling and Roche-lobe underfilling. We
identify four distinct mass loss effects: (1) mass loss by stellar evolution,
(2) loss of stars induced by stellar evolution and (3) relaxation-driven mass
loss before and (4) after core collapse. Both the evolution-induced loss of
stars and the relaxation-driven mass loss need time to build up. This is
described by a delay-function of a few crossing times for Roche-lobe filling
clusters and a few half mass relaxation times for Roche-lobe underfilling
clusters. The relaxation-driven mass loss can be described by a simple power
law dependence on the mass, dM/dt = -M^{1-gamma}/t0 (with M in Msun), where t0
depends on the orbit and environment of the cluster. Gamma is 0.65 for clusters
with a King-parameter W0=5 and 0.80 for more concentrated clusters with W0=7.
For initially Roche-lobe underfilling clusters the dissolution is described by
the same gamma=0.80. The values of the constant t0 are described by simple
formulae that depend on the orbit of the cluster. The mass loss rate increases
by about a factor two at core collapse and the mass dependence of the
relaxation-driven mass loss changes to gamma=0.70 after core collapse. We also
present a simple recipe for predicting the mass evolution of individual star
clusters with various metallicities and in different environments, with an
accuracy of a few percent in most cases. This can be used to predict the mass
evolution of cluster systems.
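As an aside, the relaxation-driven power law quoted above integrates to a closed form. The following short derivation is a sketch from the quoted formula alone (the symbols M_0 for the initial mass and t_dis for the dissolution time are introduced here for illustration, not taken from the paper):

```latex
\frac{dM}{dt} = -\frac{M^{1-\gamma}}{t_0}
\;\Longrightarrow\; M^{\gamma-1}\,dM = -\frac{dt}{t_0}
\;\Longrightarrow\; M(t) = \left(M_0^{\gamma} - \frac{\gamma\, t}{t_0}\right)^{1/\gamma},
\qquad t_{\mathrm{dis}} = \frac{t_0\, M_0^{\gamma}}{\gamma}.
```

Setting M(t_dis) = 0 shows that the dissolution time scales as M_0^gamma, the mass dependence that underlies quantities such as t_4, the disruption time of a 10^4 Msun cluster used in the Lamers et al. (2005) entry below.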
Weisz DR, Koposov SE, Dolphin AE, Belokurov V, Gieles M, Mateo ML, Olszewski EW, Sills A, Walker MG (2016) A Hubble Space Telescope Study of the Enigmatic Milky Way Halo Globular
Cluster Crater,
The Astrophysical Journal 822 (1) pp. 1-10 The American Astronomical Society
We analyze the resolved stellar populations of the faint stellar system Crater, based on deep optical imaging
taken with the Advanced Camera for Surveys aboard the Hubble Space Telescope. The HST-based color-magnitude
diagram (CMD) of Crater extends 4 magnitudes below the oldest main sequence turnoff, providing
excellent leverage on Crater's physical properties. Structurally, we find that Crater has a half-light radius of ~20
pc and shows no evidence for tidal distortions. We model the CMD of Crater under the assumption of it being
a simple stellar population and alternatively by solving for its full star formation history. In both cases, Crater
is well-described by a simple stellar population with an age of ~7.5 Gyr, a metallicity of [M/H] ~ -1.65, a total
stellar mass of M* ~ 10^4 Msun, a luminosity of M_V ~ -5.3, located at a distance of d ~ 145 kpc, with modest
uncertainties in these properties due to differences in the underlying stellar evolution models. We argue that the
sparse sampling of stars above the turnoff and sub-giant branch are likely to be 1.0-1.4 Msun binary star systems
(blue stragglers) and their evolved descendants, as opposed to intermediate-age main sequence stars. Confusion
of these populations highlights a substantial challenge in accurately characterizing sparsely populated stellar
systems. Our analysis shows that Crater is not a dwarf galaxy, but instead is an unusually young cluster given
its location in the Milky Way's very outer stellar halo. Crater is similar to the SMC cluster Lindsay 38, and its
position and velocity are in good agreement with observations and models of the Magellanic stream debris,
suggesting it may have been accreted from the Magellanic Clouds. However, its age and metallicity are also in
agreement with the age-metallicity relationships of lower-mass dwarf galaxies such as Leo I or Carina. Despite
uncertainty over its progenitor system, Crater appears to have been incorporated into the Galaxy more recently
than z ~ 1 (~8 Gyr ago), providing an important new constraint on the accretion history of the Milky Way.
Alexander PER, Gieles M (2012) A prescription and fast code for the long-term evolution of star
clusters,
Monthly Notices of the Royal Astronomical Society
We introduce the star cluster evolution code Evolve Me A Cluster of StarS
(EMACSS), a simple yet physically motivated computational model that describes
the evolution of some fundamental properties of star clusters in static tidal
fields. We base our prescription upon the flow of energy within the cluster,
which is a constant fraction of the total energy per half-mass relaxation time.
According to Henon's predictions, this flow is independent of the precise
mechanisms for energy production within the core, and therefore does not
require a complete description of the many-body interactions therein. For a
cluster of equal-mass stars, we thence use dynamical theory and analytic
descriptions of escape mechanisms to construct a series of coupled differential
equations expressing the time evolution of cluster mass and radius. These
equations are numerically solved using a fourth-order Runge-Kutta integration
kernel, and the results benchmarked against a data base of direct N-body
simulations. We use simulations containing a modest initial number of stars
(N >= 1024). The prescription is publicly available and reproduces the N-body
results to within ~10 per cent accuracy for the entire post-collapse evolution
of star clusters.
Evans CJ, Taylor WD, Henault-Brunet V, Sana H, Koter AD, Simon-Diaz S, Carraro G, Bagnoli T, Bastian N, Bestenlehner JM, Bonanos AZ, Bressert E, Brott I, Campbell MA, Cantiello M, Clark JS, Costa E, Crowther PA, Mink SED, Doran E, Dufton PL, Dunstall PR, Friedrich K, Garcia M, Gieles M, Graefener G, Herrero A, Howarth ID, Izzard RG, Langer N, Lennon DJ, Apellaniz JM, Markova N, Najarro F, Puls J, Ramirez OH, Sabin-Sanjulian C, Smartt SJ, Stroud VE, Loon JTV, Vink JS, Walborn NR (2011) The VLT-FLAMES Tarantula Survey I: Introduction and observational
overview,
Astronomy and Astrophysics 530
The VLT-FLAMES Tarantula Survey (VFTS) is an ESO Large Programme that has
obtained multi-epoch optical spectroscopy of over 800 massive stars in the 30
Doradus region of the Large Magellanic Cloud (LMC). Here we introduce our
scientific motivations and give an overview of the survey targets, including
optical and near-infrared photometry and comprehensive details of the data
reduction. One of the principal objectives was to detect massive binary systems
via variations in their radial velocities, thus shaping the multi-epoch
observing strategy. Spectral classifications are given for the massive
emission-line stars observed by the survey, including the discovery of a new
Wolf-Rayet star (VFTS 682, classified as WN5h), 2' to the northeast of R136. To
illustrate the diversity of objects encompassed by the survey, we investigate
the spectral properties of sixteen targets identified by Gruendl & Chu from
Spitzer photometry as candidate young stellar objects or stars with notable
mid-infrared excesses. Detailed spectral classification and quantitative
analysis of the O- and B-type stars in the VFTS sample, paying particular
attention to the effects of rotational mixing and binarity, will be presented
in a series of future articles to address fundamental questions in both stellar
and cluster evolution.
Evans CJ, Bastian N, Beletsky Y, Brott I, Cantiello M, Clark JS, Crowther PA, Koter AD, Mink SD, Dufton PL, Dunstall P, Gieles M, Graefener G, Henault-Brunet V, Herrero A, Howarth ID, Langer N, Lennon DJ, Apellaniz JM, Markova N, Najarro F, Puls J, Sana H, Simon-Diaz S, Smartt SJ, Stroud VE, Taylor WD, Trundle C, Loon JTV, Vink JS, Walborn NR (2009) The VLT-FLAMES Tarantula Survey,
The Tarantula Survey is an ambitious ESO Large Programme that has obtained
multi-epoch spectroscopy of over 1,000 massive stars in the 30 Doradus region
of the Large Magellanic Cloud. Here we introduce the scientific motivations of
the survey and give an overview of the observational sample. Ultimately,
quantitative analysis of every star, paying particular attention to the effects
of rotational mixing and binarity, will be used to address fundamental
questions in both stellar and cluster evolution.
Davies B, Bastian N, Gieles M, Seth AC, Mengel S, Konstantopoulos IS (2010) GLIMPSE-CO1: the most massive intermediate-age stellar cluster in the
Galaxy,
Monthly Notices of the Royal Astronomical Society
The stellar cluster GLIMPSE-C01 is a dense stellar system located in the
Galactic Plane. Though often referred to in the literature as an old globular
cluster traversing the Galactic disk, previous observations do not rule out
that it is an intermediate age (less than a few Gyr) disk-borne cluster. Here,
we present high-resolution near-infrared spectroscopy of over 50 stars in the
cluster. We find that the average radial velocity is consistent with membership
of the disk, and determine the cluster's dynamical mass to be (8 \pm 3) x 10^4 Msun.
Analysis of the cluster's M/L ratio, the location of the Red Clump, and an
extremely high stellar density, all suggest an age of 400-800Myr for
GLIMPSE-C01, much lower than for a typical globular cluster. This evidence
therefore leads us to conclude that GLIMPSE-C01 is part of the disk population,
and is the most massive Galactic intermediate-age cluster discovered to date.
Gieles M, Lamers HJGLM, Zwart SFP (2007) On the Interpretation of the Age Distribution of Star Clusters in the
Small Magellanic Cloud,
We re-analyze the age distribution (dN/dt) of star clusters in the Small
Magellanic Cloud (SMC) using age determinations based on the Magellanic Cloud
Photometric Survey. For ages younger than 3x10^9 yr the dN/dt distribution can
be approximated by a power-law distribution, dN/dt propto t^-beta, with
beta=0.70+/-0.05 or beta=0.84+/-0.04, depending on the model used to derive
the ages. Predictions for a cluster population without dissolution limited by a
V-band detection result in a power-law dN/dt distribution with an index of
~-0.7. This is because the limiting cluster mass increases with age, due to
evolutionary fading of clusters, reducing the number of observed clusters at
old ages. When a mass cut well above the limiting cluster mass is applied, the
dN/dt distribution is flat up to 1 Gyr. We conclude that cluster dissolution is
of small importance in shaping the dN/dt distribution and incompleteness causes
dN/dt to decline. The reason that no (mass independent) infant mortality of
star clusters in the first ~10-20 Myr is found is explained by a detection bias
towards clusters without nebular emission, i.e. cluster that have survived the
infant mortality phase. The reason we find no evidence for tidal (mass
dependent) cluster dissolution in the first Gyr is explained by the weak tidal
field of the SMC. Our results are in sharp contrast to the interpretation of
Chandar et al. (2006), who interpret the declining dN/dt distribution as rapid
cluster dissolution. This is due to their erroneous assumption that the sample
is limited by cluster mass, rather than luminosity.
Konstantopoulos IS, Bastian N, Gieles M, Lamers HJGLM (2009) Constraining star cluster disruption mechanisms, IAU Symposium 266, p. 433
Star clusters are found in all sorts of environments and their formation and
evolution is inextricably linked to the star formation process. Their eventual
destruction can result from a number of factors at different times, but the
process can be investigated as a whole through the study of the cluster age
distribution. Observations of populous cluster samples reveal a distribution
following a power law of index approximately -1. In this work we use M33 as a
test case to examine the age distribution of an archetypal cluster population
and show that it is in fact the evolving shape of the mass detection limit that
defines this trend. That is to say, any magnitude-limited sample will appear to
follow dN/dt propto 1/t, while cutting the sample according to mass gives rise to a
composite structure, perhaps implying a dependence of the cluster disruption
process on mass. In the context of this framework, we examine different models
of cluster disruption from both theoretical and observational standpoints.
Sana H, Momany Y, Gieles M, Carraro G, Beletsky Y, Ivanov VD, Silva GD, James G (2010) A MAD view of Trumpler 14, Astronomy and Astrophysics
We present adaptive optics (AO) near-infrared observations of the core of the
Tr 14 cluster in the Carina region obtained with the ESO multi-conjugate AO
demonstrator, MAD. Our campaign yields AO-corrected observations with an image
quality of about 0.2 arcsec across the 2 arcmin field of view, which is the
widest AO mosaic ever obtained. We detected almost 2000 sources spanning a
dynamic range of 10 mag. The pre-main sequence (PMS) locus in the
colour-magnitude diagram is well reproduced by Palla & Stahler isochrones with
an age of (3-5) x 10^5 yr, confirming the very young age of the cluster. We
derive a very high (deprojected) central density n0~4.5(+/-0.5) \times 10^4
pc^-3 and estimate the total mass of the cluster to be about ~4.3^{+3.3}_{-1.5}
\times 10^3 Msun, although contamination of the field of view might have a
significant impact on the derived mass. We show that the pairing process is
largely dominated by chance alignment so that physical pairs are difficult to
disentangle from spurious ones based on our single epoch observation. Yet, we
identify 150 likely bound pairs, 30% of these with a separation smaller than
0.5 arcsec (~1300AU). We further show that at the 2-sigma level massive stars
have more companions than lower-mass stars and that those companions are
respectively brighter on average, thus more massive. Finally, we find some
hints of mass segregation for stars heavier than about 10 Msun. If confirmed,
the observed degree of mass segregation could be explained by dynamical
evolution, despite the young age of the cluster.
Bastian N, Ercolano B, Gieles M, Rosolowsky E, Scheepmaker RA, Gutermuth R, Efremov Y (2007) Hierarchical Star-Formation in M33: Fundamental properties of the
star-forming regions,
Mon.Not.Roy.Astron.Soc. 379 pp. 1302-1312
Star-formation within galaxies appears on multiple scales, from spiral
structure, to OB associations, to individual star clusters, and often
sub-structure within these clusters. This multitude of scales calls for
objective methods to find and classify star-forming regions, regardless of
spatial size. To this end, we present an analysis of star-forming groups in the
local group spiral galaxy M33, based on a new implementation of the Minimum
Spanning Tree (MST) method. Unlike previous studies which limited themselves to
a single spatial scale, we study star-forming structures from the effective
resolution limit (~20pc) to kpc scales. We find evidence for a continuum of
star-forming group sizes, from pc to kpc scales. We do not find a
characteristic scale for OB associations, unlike that found in previous
studies, and we suggest that the appearance of such a scale was caused by
spatial resolution and selection effects. The luminosity function of the groups
is found to be well represented by a power-law with an index, -2, similar to
that found for clusters and GMCs. Additionally, the groups follow a similar
mass-radius relation as GMCs. The size distribution of the groups is best
described by a log-normal distribution and we show that within a hierarchical
distribution, if a scale is selected to find structure, the resulting size
distribution will have a log-normal distribution. We find an abrupt drop of the
number of groups outside a galactic radius of ~4kpc, suggesting a change in the
structure of the star-forming ISM, possibly reflected in the lack of GMCs.
Lamers H, Gieles M, Bastian N, Baumgardt H, Kharchenko N, Zwart SP (2005) An analytical description of the disruption of star clusters in tidal
fields with an application to Galactic open clusters,
We present a simple analytical description of the disruption of star clusters
in a tidal field, which agrees excellently with detailed N-body simulations.
The analytic expression can be used to predict the mass and age histograms of
surviving clusters for any cluster initial mass function and any cluster
formation history. The method is applied to open clusters in the solar
neighbourhood, based on the new cluster sample of Kharchenko et al. From a
comparison between the observed and predicted age distributions in the age
range between 10 Myr to 3 Gyr we find the following results: (1) The disruption
time of a 10^4 M_sun cluster in the solar neighbourhood is about 1.3+/-0.5 Gyr.
This is a factor 5 shorter than derived from N-body simulations of clusters in
the tidal field of the galaxy. (2) The present star formation rate in bound
clusters within 600 pc from the Sun is 5.9+/-0.8 * 10^2 M_sun/Myr, which
corresponds to a surface star formation rate in bound clusters of 5.2+/-0.7 *
10^(-10) M_sun/yr/pc^2. (3) The age distribution of open clusters shows a bump
between 0.26 and 0.6 Gyr when the cluster formation rate was 2.5 times higher
than before and after. (4) The present star formation rate in bound clusters is
half as small as that derived from the study of embedded clusters. The
difference suggests that half of the clusters in the solar neighbourhood become
unbound within 10 Myr. (5) The most massive clusters within 600 pc had an
initial mass of 3*10^4 M_sun. This is in agreement with the statistically
expected value based on a cluster initial mass function with a slope of -2,
even if the physical upper mass limit is as high as 10^6 M_sun.
Scheepmaker RA, Haas MR, Gieles M, Bastian N, Larsen SS, Lamers HJGLM (2007) ACS imaging of star clusters in M51. I. Identification and radius
distribution,
We use HST/ACS observations of the spiral galaxy M51 in F435W, F555W and
F814W to select a large sample of star clusters with accurate effective radius
measurements in an area covering the complete disc of M51. We study the cluster
radii in relation to the arm/interarm region, galactocentric distance, mass and
age. We select a sample
of 7698 (F435W), 6846 (F555W) and 5024 (F814W) slightly resolved clusters and
derive their effective radii by fitting the spatial profiles with analytical
models convolved with the point spread function. The radii of 1284 clusters are
studied in detail. We find cluster radii between 0.5 and ~10 pc, and one
exceptionally large cluster candidate with a radius of 21.6 pc. The median
radius is 2.1 pc. We find 70 clusters in our sample which have colours
consistent with being old GC candidates and we find 6 new "faint fuzzy"
clusters in, or projected onto, the disc of M51. The radius distribution can
not be fitted with a power law, but a log-normal distribution provides a
reasonable fit to the data. This indicates that shortly after the formation of
the clusters from a fractal gas, their radii have changed in a non-uniform way.
We find an increase in radius with colour as well as a higher fraction of
redder clusters in the interarm regions, suggesting that clusters in spiral
arms are more compact. We find a correlation between radius and galactocentric
distance which is considerably weaker than the observed correlation for old
Milky Way GCs. We find weak relations between cluster luminosity and radius,
but we do not observe a correlation between cluster mass and radius.
Cabrera-Ziri I, Niederhofer F, Bastian N, Rejkuba M, Balbinot E, Kerzendorf WE, Larsen SS, Mackey AD, Dalessandro E, Mucciarelli A, Charbonnel C, Hilker M, Gieles M, Henault-Brunet V (2016) No evidence for younger stellar generations within the intermediate-age massive clusters NGC 1783, NGC 1806 and NGC 411, Monthly Notices of the Royal Astronomical Society 459 (4) pp. 4218-4223 Oxford University Press
Shanahan RL, Gieles M (2015) Biases in the inferred mass-to-light ratio of globular clusters: no need
for variations in the stellar mass function,
Mon. Not. R. Astron. Soc. 448 pp. 94-98
From a study of the integrated light properties of 200 globular clusters
(GCs) in M31, Strader et al. found that the mass-to-light ratios are lower than
what is expected from simple stellar population (SSP) models with a 'canonical'
stellar initial mass function (IMF), with the discrepancy being larger at high
metallicities. We use dynamical multi-mass models, that include a prescription
for equipartition, to quantify the bias in the inferred dynamical mass as the
result of the assumption that light follows mass. For a universal IMF and a
metallicity dependent present day mass function we find that the inferred mass
from integrated light properties systematically underestimates the true mass,
and that the bias is more important at high metallicities, as was found for the
M31 GCs. We show that mass segregation and a flattening of the mass function
have opposing effects of similar magnitude on the mass inferred from integrated
properties. This makes the mass-to-light ratio as derived from integrated
properties an inadequate probe of the low-mass end of the stellar mass
function. There is, therefore, no need for variations in the IMF, nor the need
to invoke depletion of low-mass stars, to explain the observations. Finally, we
find that the retention fraction of stellar-mass black holes (BHs) is an
equally important parameter in understanding the mass segregation bias. We
speculatively put forward the idea that kinematic data of GCs can in fact be
used to constrain the total mass in stellar-mass BHs in GCs.
Gieles M, Heggie DC, Zhao H (2011) The life cycle of star clusters in a tidal field, Mon. Not. R. Astron. Soc. 413, 2509-2524 (2011)
The evolution of globular clusters due to 2-body relaxation results in an
outward flow of energy and at some stage all clusters need a central energy
source to sustain their evolution. Henon provided the insight that we do not
need to know the details of the energy production in order to understand the
relaxation-driven evolution of the cluster, at least outside the core. He
provided two self-similar solutions for the evolution of clusters based on the
view that the cluster as a whole determines the amount of energy that is
produced in the core: steady expansion for isolated clusters, and homologous
contraction for clusters evaporating in a tidal field. We combine these models:
the half-mass radius increases during the first half of the evolution, and
decreases in the second half; while the escape rate approaches a constant value
set by the tidal field. We refer to these phases as 'expansion dominated' and
'evaporation dominated'. These simple analytical solutions immediately allow us
to construct evolutionary tracks and isochrones in terms of cluster half-mass
density, cluster mass and galacto-centric radius. From a comparison to the
Milky Way globular clusters we find that roughly 1/3 of them are in the second,
evaporation-dominated phase and for these clusters the density inside the
half-mass radius varies with the galactocentric distance R as rho_h ~ 1/R^2.
The remaining 2/3 are still in the first, expansion-dominated phase and their
isochrones follow the environment-independent scaling rho_h ~ M^2; that is, a
constant relaxation time-scale. We find substantial agreement between Milky Way
globular cluster parameters and the isochrones, which suggests that there is,
as Henon suggested, a balance between the flow of energy and the central energy
production for almost all globular clusters.
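The two scaling relations quoted in this abstract (rho_h ~ M^2 on expansion-dominated isochrones, rho_h ~ 1/R^2 in the evaporation-dominated phase) can be sketched numerically. This is an illustrative toy with unit proportionality constants and arbitrary units, not code from the paper:

```python
def rho_expansion(mass):
    """Expansion-dominated isochrone: rho_h proportional to M^2
    (constant relaxation time); unit proportionality constant assumed."""
    return mass ** 2

def rho_evaporation(r_gal):
    """Evaporation-dominated phase: rho_h proportional to 1/R^2, with R
    the galactocentric distance; unit proportionality constant assumed."""
    return 1.0 / r_gal ** 2

# Doubling the mass quadruples rho_h on an expansion-dominated isochrone;
# doubling the galactocentric distance quarters it in the evaporation phase.
assert rho_expansion(2.0) / rho_expansion(1.0) == 4.0
assert rho_evaporation(2.0) / rho_evaporation(1.0) == 0.25
```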
Haas MR, Gieles M, Scheepmaker RA, Larsen SS, Lamers HJGLM (2008) ACS imaging of star clusters in M51 II. The luminosity function and mass
function across the disk,
Astronomy and Astrophysics, Volume 487, Issue 3, 2008, pp.937-949
Whether or not there exists a physical upper mass limit for star clusters is
still unclear. HST/ACS data for the rich cluster population in the interacting
galaxy M51 enables us to investigate this in more detail. We investigate
whether the cluster luminosity function (LF) in M51 shows evidence for an upper
limit to the mass function. The variations of the LF parameters with position
on the disk are addressed. We determine the cluster LF for all clusters in M51
falling within our selection criteria, as well as for several subsets of the
sample. In that way we can determine the properties of the cluster population
as a function of galactocentric distance and background intensity. By comparing
observed and simulated LFs we can constrain the underlying cluster initial mass
function and/or cluster disruption parameters. A physical upper mass limit for
star clusters will appear as a bend dividing two power law parts in the LF, if
the cluster sample is large enough to sample the full range of cluster masses.
The location of the bend in the LF is indicative of the value of the upper mass
limit. The slopes of the power laws are an interplay between upper masses,
disruption and fading. The LF of the cluster population of M51 is better
described by a double power law than by a single power law. We show that the
cluster initial mass function is likely to be truncated at the high mass end.
We conclude from the variation of the LF parameters with galactocentric
distance that both the upper mass limit and the cluster disruption parameters
are likely to be a function of position in the galactic disk. At higher
galactocentric distances the maximum mass is lower, cluster disruption slower,
or both.
Markova N, Evans C, Bastian N, Beletsky Y, Bestenlehner J, Brott I, Cantiello M, Carraro G, Clark J, Crowther P, Koter AD, Mink SD, Doran E, Dufton P, Dunstall P, Gieles M, Graefener G, Henault-Brunet V, Herrero A, Howarth I, Langer N, Lennon D, Apellaniz JM, Najarro F, Puls J, Sana H, Simon-Diaz S, Smartt S, Stroud V, Taylor W, Loon JV, Vink J, Walborn N, Izzard Robert (2011) The FLAMES Tarantula Survey, Astronomy and Astrophysics
The Tarantula survey is an ESO Large Programme which has obtained multi-epoch spectroscopy of over 800 massive stars in the 30 Dor region in the Large Magellanic Cloud. Here we briefly describe the main drivers of the survey and the observational material derived.
Gieles M, Bastian N (2008) An alternative method to study star cluster disruption, Astronomy and Astrophysics
Many embedded star clusters do not evolve into long-lived bound clusters. The
most popular explanation for this "infant mortality" of young clusters is the
expulsion of natal gas by stellar winds and supernovae, which leaves up to 90%
of them unbound. A cluster disruption model has recently been proposed in which
this mass-independent disruption of clusters proceeds for another Gyr after
gas expulsion. In this scenario, the survival chances of massive clusters are
much smaller than in the traditional mass-dependent disruption models. The most
common way to study cluster disruption is to use the cluster age distribution,
which, however, can be heavily affected by incompleteness. To avoid this, we
introduce a new method, based on size-of-sample effects, namely the relation
between the most massive cluster, M_max, and the age range sampled. Assuming
that clusters are sampled from a power-law initial mass function, with index -2
and that the cluster formation rate is constant, M_max scales with the age
range sampled, such that the slope in a log(M_max) vs. log(age) plot is equal
to unity. This slope decreases if mass-independent disruption is included. For
90% mass-independent cluster disruption per age dex, the predicted slope is
zero. For the solar neighbourhood, SMC, LMC, M33, and M83, based on ages and
masses taken from the literature, we find slopes consistent with the expected
size-of-sample correlations for the first 100 Myr, hence ruling out the 90%
mass-independent cluster disruption scenario. For M51, however, the increase of
log(M_max) with log(age) is slightly shallower and for the Antennae galaxies it
is flat. This simple method shows that the formation and/or disruption of
clusters in the Antennae must have been very different from that of the other
galaxies studied here, so it should not be taken as a representative case.
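The size-of-sample argument in this abstract can be illustrated with a small calculation. The sketch below is hypothetical (the choice of m_min and of the median statistic are made here, not taken from the paper): for a power-law mass function with index -2 and a constant formation rate, the median M_max among n sampled clusters grows roughly linearly with n, giving the unit slope in the log(M_max) vs log(age) plane:

```python
import math

def median_mmax(n, m_min=1.0):
    """Median of the maximum of n masses drawn from dN/dM ~ M^-2 above
    m_min (arbitrary units). The CDF of one draw is F(m) = 1 - m_min/m,
    so the median of the maximum solves F(m)^n = 1/2."""
    return m_min / (1.0 - 0.5 ** (1.0 / n))

# Constant formation rate: a 10x longer age range samples 10x more
# clusters, so M_max climbs by ~1 dex -- the slope of unity in the
# log(M_max) vs log(age) plane described above.
slope = math.log10(median_mmax(1000) / median_mmax(100))
assert abs(slope - 1.0) < 0.01
```

Mass-independent disruption removes clusters regardless of mass, which dilutes this growth of M_max with sample size and flattens the slope, as the abstract states.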
Hénault-Brunet V, Gieles M, Evans CJ, Sana H, Bastian N, Apellániz JM, Taylor WD, Markova N, Bressert E, Koter AD, Loon JTV (2012) The VLT-FLAMES Tarantula Survey VI: Evidence for rotation of the young
massive cluster R136,
Astronomy and Astrophysics
Although it has important ramifications for both the formation of star
clusters and their subsequent dynamical evolution, rotation remains a largely
unexplored characteristic of young star clusters (a few Myr old). Using multi-epoch
spectroscopic data of the inner regions of 30 Doradus in the Large Magellanic
Cloud (LMC) obtained as part of the VLT-FLAMES Tarantula Survey, we search for
rotation of the young massive cluster R136. From the radial velocities of 36
apparently single O-type stars within a projected radius of 10 pc from the
centre of the cluster, we find evidence, at the 95% confidence level, for
rotation of the cluster as a whole. We use a maximum likelihood method to fit
simple rotation curves to our data and find a typical rotational velocity of ~3
km/s. When compared to the low velocity dispersion of R136, our result suggests
that star clusters may form with at least ~20% of the kinetic energy in
rotation.
Gaburov E, Gieles M (2008) Mass segregation in young star clusters: can it be detected from the
integrated photometric properties?,
Monthly Notices of the Royal Astronomical Society
We consider the effect of mass segregation on the observable integrated
properties of star clusters. The measurable properties depend on a combination
of the dynamical age of the cluster and the physical age of the stars in the
cluster. To investigate all possible combinations of these two quantities we
propose an analytical model for the mass function of segregated star clusters
that agrees with the results of N-body simulations, in which any combination
can be specified. For a realistic degree of mass segregation and a fixed
density profile we find with increasing age an increase in the measured core
radii and a central surface brightness that decreases in all filters more
rapidly than what is expected from stellar evolution alone. Within a Gyr the
measured core radius increases by a factor of two and the central surface
density in all filters of a segregated cluster will be overestimated by a
similar factor when not taking into account mass segregation in the conversion
from light to mass. We find that the $V-I$ colour of mass segregated clusters
decreases with radius by about 0.1-0.2 mag, which could be observable. From
recent observations of partially resolved extra-galactic clusters a decreasing
half-light radius with increasing wavelength was observed, which was attributed
to mass segregation. These observations cannot be reproduced by our models. We
find that the differences between measured radii in different filters are
always smaller than 5%.
Campbell MA, Evans CJ, Ascenso J, Longmore AJ, Kolb J, Gieles M, Alves J (2008) Imaging the dense stellar cluster R136 with VLT-MAD, Proceedings of SPIE
We evaluate the performance of the Multi-conjugate Adaptive optics
Demonstrator (MAD) from H and Ks imaging of 30 Doradus in the Large Magellanic
Cloud. Maps of the full-width half maximum (FWHM) of point sources in the H and
Ks images are presented, together with maps of the Strehl ratio achieved in the
Ks-band observations. Each of the three natural guide stars was at the edge of
the MAD field-of-view, and the observations were obtained at relatively large
airmass (1.4-1.6). Even so, the Strehl ratio achieved in the second pointing
(best-placed compared to the reference stars) ranged from 15% to an impressive
30%. Preliminary photometric calibration of the first pointing indicates 5
sigma sensitivities of Ks=21.75 and H=22.25 (from 22 and 12 min exposures,
respectively).
Gieles M, Lamers HJGLM, Baumgardt H (2007) Star cluster life-times: Dependence on mass, radius and environment, Proceedings of the International Astronomical Union 3 (S246) pp. 171-175
The dissolution time (t_dis) of clusters in a tidal field does not scale with the classical expression for the relaxation time. First, the scaling with N, and hence cluster mass, is shallower due to the finite escape time of stars. Secondly, the cluster half-mass radius is of little importance. This is due to a balance between the relative tidal field strength and internal relaxation, which have an opposite effect on t_dis, but of similar magnitude. When external perturbations, such as encounters with giant molecular clouds (GMCs), are important, t_dis for an individual cluster depends strongly on radius. The mean dissolution time for a population of clusters, however, scales in the same way with mass as for the tidal field, due to the weak dependence of radius on mass. The environmental parameters that determine t_dis are the tidal field strength and the density of molecular gas. We compare the empirically derived t_dis of clusters in six galaxies to theoretical predictions and argue that encounters with GMCs are the dominant destruction mechanism. Finally, we discuss a number of pitfalls in the derivations of t_dis from observations, such as incompleteness, with the cluster system of the SMC as particular example.
Lamers HJGLM, Gieles M (2007) Star clusters in the solar neighborhood: a solution to Oort's problem,
In 1958 Jan Oort remarked that the lack of old clusters in the solar
neighborhood (SN) implies that clusters are destroyed on a timescale of less
than a Gyr. This is much shorter than the predicted dissolution time of
clusters due to stellar evolution and two-body relaxation in the tidal field of
the Galaxy. So, other (external) effects must play a dominant role in the
destruction of star clusters in the solar neighborhood. We recalculated the
survival time of initially bound star clusters in the solar neighborhood taking
into account: (1) stellar evolution, (2) tidal stripping, (3) perturbations by
spiral arms and (4) encounters with giant molecular clouds (GMCs). We find that
encounters with GMCs are the most damaging to clusters. The resulting predicted
dissolution time of these combined effects, t_dis=1.7 (Mi/10^4 M_sun)^0.67 Gyr
for clusters with initial masses above 10^2 M_sun, agrees well with the
disruption time of t_dis=1.3+/-0.5 (M/10^4 M_sun)^0.62 Gyr that was derived
empirically from a mass limited sample of clusters in the solar neighborhood
within 600 pc. The predicted shape of the age distribution of clusters agrees
very well with the observed one. The comparison between observations and theory
implies a surface star formation rate (SFR) near the sun of 3.5x10^-10 M_sun
yr^-1 pc^-2 for stars in bound clusters with an initial mass in the range of
10^2 to 3x10^4 M_sun. This can be compared to a total SFR of 7-10x10^-10 M_sun
yr^-1 pc^-2 derived from embedded clusters or 3-7x10^-9 M_sun yr^-1 pc^-2
derived from field stars. This implies an infant mortality rate of clusters in
the solar neighborhood between 50% and 95%, in agreement with the results of a
study of embedded clusters.
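The two dissolution-time relations quoted in this abstract can be evaluated directly. The sketch below simply encodes the quoted scalings (masses in M_sun, times in Gyr) and is not the authors' code:

```python
def t_dis_predicted(m_init):
    """Predicted dissolution time in Gyr for initial mass m_init (M_sun):
    t_dis = 1.7 (Mi / 10^4 M_sun)^0.67 Gyr, combining stellar evolution,
    tidal stripping, spiral-arm perturbations and GMC encounters."""
    return 1.7 * (m_init / 1.0e4) ** 0.67

def t_dis_empirical(m):
    """Empirically derived relation: t_dis = 1.3 (M / 10^4 M_sun)^0.62 Gyr
    (quoted uncertainty on the prefactor is +/-0.5)."""
    return 1.3 * (m / 1.0e4) ** 0.62

# A 10^4 M_sun cluster: predicted 1.7 Gyr vs empirical 1.3 Gyr, consistent
# within the quoted uncertainty, and far below the pure two-body
# relaxation estimate that motivated Oort's problem.
assert abs(t_dis_predicted(1.0e4) - 1.7) < 1e-9
assert abs(t_dis_empirical(1.0e4) - 1.3) < 1e-9
```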
Gieles M, Bastian N, Ercolano B (2008) Evolution of stellar structure in the Small Magellanic Cloud, Mon.Not.Roy.Astron.Soc.391:L93-L97,2008
The projected distribution of stars in the Small Magellanic Cloud (SMC) from
the Magellanic Clouds Photometric Survey is analysed. Stars of different ages
are selected via criteria based on V magnitude and V-I colour, and the degree
of 'grouping' as a function of age is studied. We quantify the degree
structure using the two-point correlation function and a method based on the
Minimum Spanning Tree and find that the overall structure of the SMC is
evolving from a high degree of sub-structure at young ages (~10 Myr) to a
smooth radial density profile. This transition is gradual and at ~75 Myr the
distribution is statistically indistinguishable from the background SMC
distribution. This time-scale corresponds to approximately the dynamical
crossing time of stars in the SMC. The spatial positions of the star clusters
in the SMC show a similar evolution of spatial distribution with age. Our
analysis suggests that stars form with a high degree of (fractal)
sub-structure, probably imprinted by the turbulent nature of the gas from which
they form, which is erased by random motions in the galactic potential on a
time-scale of a galactic crossing time.
Bastian N, Gieles M, Goodwin SP, Trancho G, Smith LJ, Konstantopoulos I, Efremov Y (2008) The Early Expansion of Cluster Cores, Monthly Notices of the Royal Astronomical Society
The observed properties of young star clusters, such as the core radius and
luminosity profile, change rapidly during the early evolution of the clusters.
Here we present observations of 6 young clusters in M51 where we derive their
sizes using HST imaging and ages using deep Gemini-North spectroscopy. We find
evidence for a rapid expansion of the cluster cores during the first 20 Myr of
their evolution. We confirm this trend by including data from the literature of
both Galactic and extra-galactic embedded and young clusters, and possible
mechanisms (rapid gas removal, stellar evolutionary mass-loss, and internal
dynamical heating) are discussed. We explore the implications of this result,
focussing on the fact that clusters were more concentrated in the past,
implying that their stellar densities were much higher and relaxation times
correspondingly shorter. Thus, when estimating if a particular cluster is
dynamically relaxed, (i.e. when determining if a cluster's mass segregation is
due to primordial or dynamical processes), the current relaxation time is only
an upper-limit, with the relaxation time likely being significantly shorter in
the past.
Gieles M, Zwart SP (2010) The distinction between star clusters and associations, Mon. Not. R. Astron. Soc. 410, L6-L7 (2011)
In Galactic studies a distinction is made between (open) star clusters and
associations. For barely resolved objects at a distance of several Mpc this
distinction is not trivial to make. Here we provide an objective definition by
comparing the age of the stars to the crossing time of nearby stellar
agglomerates. We find that a satisfactory separation can be made where this
ratio equals unity. Stellar agglomerates for which the age of the stars exceeds
the crossing time are bound, and are referred to as star clusters.
Alternatively, those for which the crossing time exceeds the stellar age are
unbound and are referred to as associations. This definition is useful whenever
reliable measurements for the mass, radius and age are available.
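The cluster/association criterion of this abstract (the age of the stars exceeding the crossing time) can be sketched as follows; the crossing-time convention T_cr = 10 sqrt(r^3 / GM) and the unit choices are assumptions made here for illustration:

```python
import math

G = 0.004499  # gravitational constant in pc^3 M_sun^-1 Myr^-2 (assumed units)

def crossing_time(mass_msun, r_pc):
    """Crossing time T_cr = 10 * sqrt(r^3 / (G*M)) in Myr; the factor of
    10 is a convention assumed for this sketch."""
    return 10.0 * math.sqrt(r_pc ** 3 / (G * mass_msun))

def is_cluster(age_myr, mass_msun, r_pc):
    """Bound star cluster if the stellar age exceeds the crossing time;
    otherwise an unbound association."""
    return age_myr / crossing_time(mass_msun, r_pc) > 1.0

# A compact 10^4 M_sun, 1 pc object at 10 Myr qualifies as a cluster;
# a sparse 10^2 M_sun, 10 pc agglomerate at 5 Myr is an association.
assert is_cluster(10.0, 1.0e4, 1.0)
assert not is_cluster(5.0, 100.0, 10.0)
```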
Niederste-Ostholt M, Belokurov V, Evans NW, Koposov S, Gieles M, Irwin MJ (2010) The tidal tails of the ultrafaint globular cluster Palomar 1, Monthly Notices of the Royal Astronomical Society: Letters 408 (1)
Using the optimal filter technique applied to Sloan Digital Sky Survey photometry, we have found extended tails stretching about 1° (or several tens of half-light radii) from either side of the ultrafaint globular cluster Palomar 1. The tails contain roughly as many stars as does the cluster itself. Using deeper Hubble Space Telescope data, we see that the isophotes twist in a characteristic S-shape on moving outwards from the cluster centre to the tails. We argue that the main mechanism forming the tails may be relaxation-driven evaporation and that Pal 1 may have been accreted from a now disrupted dwarf galaxy ~500 Myr ago.
Koposov SE, Belokurov V, Evans NW, Gilmore G, Gieles M, Irwin MJ, Lewis GF, Niederste-Ostholt M, Penarrubia J, Smith MC, Bizyaev D, Malanushenko E, Malanushenko V, Schneider DP, Wyse RFG (2011) The Sagittarius Streams in the Southern Galactic Hemisphere, Astrophysical Journal
The structure of the Sagittarius stream in the Southern Galactic hemisphere
is analysed with the Sloan Digital Sky Survey Data Release 8. Parallel to the
Sagittarius tidal track, but ~ 10deg away, there is another fainter and more
metal-poor stream. We provide evidence that the two streams follow similar
distance gradients but have distinct morphological properties and stellar
populations. The brighter stream is broader, contains more metal-rich stars and
has a richer colour-magnitude diagram with multiple turn-offs and a prominent
red clump as compared to the fainter stream. Based on the structural properties
and the stellar population mix, the stream configuration is similar to the
Northern "bifurcation". In the region of the South Galactic Cap, there is
overlapping tidal debris from the Cetus Stream, which crosses the Sagittarius
stream. Using both photometric and spectroscopic data, we show that the blue
straggler population belongs mainly to Sagittarius and the blue horizontal
branch stars belong mainly to the Cetus stream in this confused location in the
halo.
Gieles M, Sana H, Zwart SFP (2009) On the velocity dispersion of young star clusters: super-virial or
binaries?,
Monthly Notices of the Royal Astronomical Society
Many young extra-galactic clusters have a measured velocity dispersion that
is too high for the mass derived from their age and total luminosity, which has
led to the suggestion that they are not in virial equilibrium. Most of these
clusters are confined to a narrow age range centred around 10 Myr because of
observational constraints. At this age the cluster light is dominated by
luminous evolved stars, such as red supergiants, with initial masses of ~13-22
Msun for which (primordial) binarity is high. In this study we investigate to
what extent the observed excess velocity dispersion is the result of the
orbital motions of binaries. We demonstrate that estimates for the dynamical
mass of young star clusters, derived from the observed velocity dispersion,
exceed the photometric mass by up-to a factor of 10 and are consistent with a
constant offset in the square of the velocity dispersion. This can be
reproduced by models of virialised star clusters hosting a massive star
population of which ~25% is in binaries, with typical mass ratios of ~0.6 and
periods of ~1000 days. We conclude that binaries play a pivotal role in
deriving the dynamical masses of young (~10 Myr) moderately massive and compact
(~1 pc) star clusters.
Renaud F, Gieles M, Boily C (2011) Evolution of star clusters in arbitrary tidal fields, Monthly Notices of the Royal Astronomical Society
We present a novel and flexible tensor approach to computing the effect of a
time-dependent tidal field acting on a stellar system. The tidal forces are
recovered from the tensor by polynomial interpolation in time. The method has
been implemented in a direct-summation stellar dynamics integrator (NBODY6) and
test-proved through a set of reference calculations: heating, dissolution time
and structural evolution of model star clusters are all recovered accurately.
The tensor method is applicable to arbitrary configurations, including the
important situation where the background potential is a strong function of
time. This opens up new perspectives in stellar population studies reaching to
the formation epoch of the host galaxy or galaxy cluster, as well as for
star-burst events taking place during the merger of large galaxies. A pilot
application to a star cluster in the merging galaxies NGC 4038/39 (the
Antennae) is presented.
Zocchi A, Gieles M, Henault-Brunet V, Varri AL (2016) Testing lowered isothermal models with direct N-body simulations of globular clusters, Monthly Notices of the Royal Astronomical Society 462 (1) pp. 696-714
Gieles M, Athanassoula E, Zwart SFP (2007) The effect of spiral arm passages on the evolution of stellar clusters, Mon.Not.Roy.Astron.Soc. 376 pp. 809-819
We study the effect of spiral arm passages on the evolution of star clusters
on planar and circular orbits around the centres of galaxies. Individual
passages with different relative velocity (V_drift) and arm width are studied
using N-body simulations. When the ratio of the time it takes the cluster to
cross the density wave to the crossing time of stars in the cluster is much
smaller than one, the energy gain of stars can be predicted accurately in the
impulsive approximation. When this ratio is much larger than one, the cluster
is heated adiabatically and the net effect of heating is largely damped. For a
given duration of the perturbation, this ratio is smaller for stars in the
outer parts of the cluster compared to stars in the inner part. The cluster
energy gain due to perturbations of various duration as obtained from our
N-body simulations is in good agreement with theoretical predictions. Perturbations by the
stellar component of the spiral arms on a cluster are in the adiabatic regime
and, therefore, hardly contribute to the energy gain and mass loss of the
cluster. We consider the effect of crossings through the high density shocked
gas in the spiral arms, which result in a more impulsive compression of the
cluster. The time scale of disruption is shortest at ~0.8-0.9 R_CR since there
V_drift is low. This location can be applicable to the solar neighbourhood. In
addition, the four-armed spiral pattern of the Milky Way makes spiral arms
contribute more to the disruption of clusters than in a similar but two-armed
galaxy. Still, the disruption time due to spiral arm perturbations there is
about an order of magnitude higher than what is observed for the solar
neighbourhood.[ABRIDGED]
Sollima A, Baumgardt H, Zocchi A, Balbinot E, Gieles M, Henault-Brunet V, Varri AL (2015) Biases in the determination of dynamical parameters of star clusters:
today and in the Gaia era,
The structural and dynamical properties of star clusters are generally
derived by means of the comparison between steady-state analytic models and the
available observables. With the aim of studying the biases of this approach, we
fitted different analytic models to simulated observations obtained from a
suite of direct N-body simulations of star clusters in different stages of
their evolution and under different levels of tidal stress to derive mass, mass
function and degree of anisotropy. We find that masses can be
under/over-estimated up to 50% depending on the degree of relaxation reached by
the cluster, the available range of observed masses and distances of radial
velocity measures from the cluster center and the strength of the tidal field.
The mass function slope appears to be better constrainable and less sensitive
to model inadequacies unless strongly dynamically evolved clusters and a
non-optimal location of the measured luminosity function are considered. The
degree and the characteristics of the anisotropy developed in the N-body
simulations are not adequately reproduced by popular analytic models and can be
detected only if accurate proper motions are available. We show how to reduce
the uncertainties in the mass, mass-function and anisotropy estimation and
provide predictions for the improvements expected when Gaia proper motions will
be available in the near future.
Zocchi A, Gieles M, Hénault-Brunet V (2015) On the uniqueness of kinematical signatures of intermediate-mass black
holes in globular clusters,
Finding an intermediate-mass black hole (IMBH) in a globular cluster (GC), or
proving its absence, is a crucial ingredient in our understanding of galaxy
formation and evolution. The challenge is to identify a unique signature of an
IMBH that cannot be accounted for by other processes. Observational claims of
IMBH detection are often based on analyses of the kinematics of stars, such as
a rise in the velocity dispersion profile towards the centre. In this
contribution we discuss the degeneracy between this IMBH signal and pressure
anisotropy in the GC. We show that by considering anisotropic models it is
possible to partially explain the innermost shape of the projected velocity
dispersion profile, even though models that do not account for an IMBH do not
exhibit a cusp in the centre.
Belokurov V, Koposov SE, Evans NW, Peñarrubia J, Irwin MJ, Smith MC, Lewis GF, Gieles M, Wilkinson MI, Gilmore G, Olszewski EW, Niederste-Ostholt MN (2013) Precession of the Sagittarius stream,
Using a variety of stellar tracers -- blue horizontal branch stars,
main-sequence turn-off stars and red giants -- we follow the path of the
Sagittarius (Sgr) stream across the sky in Sloan Digital Sky Survey data. Our
study presents new Sgr debris detections, accurate distances and line-of-sight
velocities that together help to shed new light on the puzzle of the Sgr tails.
For both the leading and the trailing tail, we trace the points of their
maximal extent, or apo-centric distances, and find that they lie at $R^L$ =
47.8 $\pm$ 0.5 kpc and $R^T$ = 102.5 $\pm$ 2.5 kpc respectively. The angular
difference between the apo-centres is 93.2 $\pm$ 3.5 deg, which is smaller than
predicted for logarithmic haloes. Such differential orbital precession can be
made consistent with models of the Milky Way in which the dark matter density
falls more quickly with radius. However, currently, no existing Sgr disruption
simulation can explain the entirety of the observational data. Based on its
position and radial velocity, we show that the unusually large globular cluster
NGC 2419 can be associated with the Sgr trailing stream. We measure the
precession of the orbital plane of the Sgr debris in the Milky Way potential
and show that, surprisingly, Sgr debris in the primary (brighter) tails evolves
differently to the secondary (fainter) tails, both in the North and the South.
Bastian N, Weisz DR, Skillman ED, McQuinn KBW, Dolphin AE, Gutermuth RA, Cannon JM, Ercolano B, Gieles M, Kennicutt RC, Walter F (2010) The evolution of stellar structures in dwarf galaxies, Monthly Notices of the Royal Astronomical Society
We present a study of the variation of spatial structure of stellar
populations within dwarf galaxies as a function of the population age. We use
deep Hubble Space Telescope/Advanced Camera for Surveys imaging of nearby dwarf
galaxies in order to resolve individual stars and create composite
colour-magnitude diagrams (CMDs) for each galaxy. Using the obtained CMDs, we
select Blue Helium Burning stars (BHeBs), which can be unambiguously age-dated
by comparing the absolute magnitude of individual stars with stellar
isochrones. Additionally, we select a very young population of stars for a subset of the galaxies based on the tip of the young main-sequence.
By selecting stars in different age ranges we can then study how the spatial
distribution of these stars evolves with time. We find, in agreement with
previous studies, that stars are born within galaxies with a high degree of
substructure which is made up of a continuous distribution of clusters, groups
and associations from parsec to hundreds of parsec scales. These structures
disperse on timescales of tens to hundreds of Myr, which we quantify using the
two-point correlation function and the Q-parameter developed by Cartwright &
Whitworth (2004). On galactic scales, we can place lower limits on the time it
takes to remove the original structure (i.e., structure survives for at least
this long), tevo, which varies between ~100~Myr (NGC~2366) and ~350 Myr
(DDO~165). This is similar to what we have found previously for the SMC
(~80~Myr) and the LMC (~175 Myr). We do not find any strong correlations
between tevo and the luminosity of the host galaxy.
Bressert E, Bastian N, Evans CJ, Sana H, Hénault-Brunet V, Goodwin SP, Parker RJ, Gieles M, Bestenlehner JM, Vink JS, Taylor WD, Crowther PA, Longmore SN, Gräfener G, Apellániz JM, Koter AD, Cantiello M, Kruijssen JMD (2012) The VLT-FLAMES Tarantula Survey IV: Candidates for isolated high-mass star formation,
Astronomy and Astrophysics
Whether massive stars can occasionally form in relative isolation or if they
require a large cluster of lower-mass stars around them is a key test in the
differentiation of star formation theories as well as how the initial mass
function of stars is sampled. Previous attempts to find O-type stars that
formed in isolation were hindered by the possibility that such stars are merely
runaways from clusters, i.e., their current isolation does not reflect their
birth conditions. We introduce a new method to find O-type stars that are not
affected by such a degeneracy. Using the VLT-FLAMES Tarantula Survey and
additional high resolution imaging we have identified stars that satisfy the
following constraints: 1) they are O-type stars that are not detected to be
part of a binary system based on RV time series analysis; 2) they are
designated spectral type O7 or earlier; 3) their velocities are within 1 sigma
of the mean of OB-type stars in the 30 Doradus region, i.e. they are not
runaways along our line-of-sight; 4) the projected surface density of stars
does not increase within 3 pc towards the O-star (no evidence for clusters); 5)
their sight lines are associated with gaseous and/or dusty filaments in the
ISM, and 6) if a second candidate is found in the direction of the same
filament with which the target is associated, both are required to have similar
velocities. With these criteria, we have identified 15 stars in the 30 Doradus
region, which are strong candidates for being high-mass stars that have formed
in isolation. Additionally, we employed extensive Monte Carlo stellar cluster
simulations to confirm that our results rule out the presence of clusters
around the candidates. Eleven of these are classified as Vz stars, possibly
associated with the zero-age main sequence. We include a newly discovered W-R
star as a candidate, although it does not meet all of the above criteria.
Anders P, Gieles M, Grijs RD (2006) Accurate photometry of extended spherically symmetric sources, Astronomy and Astrophysics 451
We present a new method to derive reliable photometry of extended spherically
symmetric sources from {\it HST} images (WFPC2, ACS/WFC and NICMOS/NIC2
cameras), extending existing studies of point sources and marginally resolved
sources. We develop a new approach to accurately determine intrinsic sizes of
extended spherically symmetric sources, such as star clusters in galaxies
beyond the Local Group, and a cookbook to perform aperture photometry on such
sources, by determining
size-dependent aperture corrections (ACs) and taking sky oversubtraction as a
function of source size into account. In an extensive Appendix, we provide the
parameters of polynomial relations between the FWHM of various input profiles
and those obtained by fitting a Gaussian profile (which we have used for
reasons of computational robustness, although the exact model profile used is
irrelevant), and between the intrinsic and measured FWHM of the cluster and the
derived AC. Both relations are given for a number of physically relevant
cluster light profiles, intrinsic and observational parameters. AC relations
are provided for a wide range of apertures. Depending on the size of the source
and the annuli used for the photometry, the absolute magnitude of such extended
objects can be underestimated by up to 3 mag, corresponding to an error in mass
of a factor of 15. We carefully compare our results to those from the more
widely used DeltaMag method, and find an improvement of a factor of 3--40 in
both the size determination and the AC.
Hollyhead K, Adamo A, Bastian N, Gieles M, Ryon JE (2016) Properties of the cluster population of NGC 1566 and their implications, MONTHLY NOTICES OF THE ROYAL ASTRONOMICAL SOCIETY 460 (2) pp. 2087-2102 OXFORD UNIV PRESS
Bastian N, Adamo A, Gieles M, Villa ES, Lamers HJGLM, Larsen SS, Smith LJ, Konstantopoulos IS, Zackrisson E (2011) Stellar Clusters in M83: Formation, evolution, disruption and the influence of environment, Monthly Notices of the Royal Astronomical Society
We study the stellar cluster population in two adjacent fields in the nearby,
face-on spiral galaxy, M83, using WFC3/HST imaging. The clusters are selected
through visual inspection to be centrally concentrated, symmetric, and resolved
on the images, which allows us to differentiate between clusters and likely
unbound associations. We compare our sample with previous studies and show that
the differences between the catalogues are largely due to the inclusion of
large numbers of diffuse associations within previous catalogues. The
luminosity function of the clusters is well approximated by a power law with
index -2 over most of the observed range; however, a steepening is seen at M_V
= -9.3 and -8.8 in the inner and outer fields, respectively. Additionally, we
show that the cluster population is inconsistent with a pure power-law mass
distribution, but instead exhibits a truncation at the high mass end. If
described as a Schechter function, the characteristic mass is 1.6 and 0.5 *
10^5 Msun, for the inner and outer fields, respectively, in agreement with
previous estimates of other cluster populations in spiral galaxies. Comparing
the predictions of the mass independent disruption (MID) and mass dependent
disruption (MDD) scenarios with the observed distributions, we find that both
models can accurately fit the data. However, for the MID case, the fraction of
clusters destroyed (or mass lost) per decade in age is dependent on the
environment, hence, the age/mass distributions of clusters are not universal.
In the MDD case, the disruption timescale scales with galactocentric distance
(being longer in the outer regions of the galaxy) in agreement with analytic
and numerical predictions. Finally, we discuss the implications of our results
on other extragalactic surveys, focussing on the fraction of stars that form in
clusters and the need (or lack thereof) for infant mortality.
Cai MX, Gieles M, Heggie DC, Varri AL (2015) Evolution of star clusters on eccentric orbits, MONTHLY NOTICES OF THE ROYAL ASTRONOMICAL SOCIETY 455 (1) pp. 596-602 OXFORD UNIV PRESS
Bastian N, Lamers HJGLM, Mink SED, Longmore SN, Goodwin SP, Gieles M (2013) Early Disc Accretion as the Origin of Abundance Anomalies in Globular Clusters,
Globular clusters (GCs), once thought to be well approximated as simple
stellar populations (i.e. all stars having the same age and chemical
abundance), are now known to host a variety of anomalies, such as multiple
discrete (or spreads in) populations in colour-magnitude diagrams and abundance
variations in light elements (e.g., Na, O, Al). Multiple models have been put
forward to explain the observed anomalies, although all have serious
shortcomings (e.g., requiring a non-standard initial mass function of stars and
GCs to have been initially 10-100 times more massive than observed today).
These models also do not agree with observations of massive stellar clusters
forming today, which do not display significant age spreads nor have gas/dust
within the cluster. Here we present a model for the formation of GCs, where low
mass pre-main sequence (PMS) stars accrete enriched material released from
interacting massive binary and rapidly rotating stars onto their circumstellar
discs, and ultimately onto the young stars. As was shown in previous studies,
the accreted material matches the unusual abundances and patterns observed in
GCs. The proposed model does not require multiple generations of
star-formation, conforms to known properties of massive clusters forming today,
and solves the "mass budget problem" without requiring GCs to have been
significantly more massive at birth. Potential caveats to the model as well as
model predictions are discussed.
Bastian N, Ercolano B, Gieles M (2009) Hierarchical star formation in M33: Properties of the star-forming regions, Astrophysics and Space Science 324 (2) pp. 293-297
Star formation within galaxies occurs on multiple scales, from spiral structure, to OB associations, to individual star clusters, and often as substructure within these clusters. This multitude of scales calls for objective methods to find and classify star-forming regions, regardless of spatial size. To this end, we present an analysis of star-forming groups in the Local Group spiral galaxy M33, based on a new implementation of the Minimum Spanning Tree (MST) method. Unlike previous studies, which limited themselves to a single spatial scale, we study star-forming structures from the effective resolution limit (~20 pc) to kpc scales. Once the groups have been identified, we study their properties, such as their size and luminosity distributions, and compare these with studies of young star clusters and giant molecular clouds (GMCs). We find evidence for a continuum of star-forming group sizes, which extends into the star cluster spatial-scale regime. We do not find a characteristic scale for OB associations, unlike that found in previous studies, and we suggest that the appearance of such a scale was caused by spatial resolution and selection effects. The luminosity function of the groups is found to be well represented by a power law with an index of -2, as has also been found for the luminosity and mass functions of young star clusters, as well as for the mass function of GMCs. Additionally, the groups follow a similar mass-radius relation as GMCs. The size distribution of the groups is best described by a lognormal distribution, the peak of which is controlled by the spatial scale probed and the minimum number of sources used to define a group. We show that within a hierarchical distribution, if a scale is selected to find structure, the resulting size distribution will have a log-normal distribution. 
We find an abrupt drop of the number of groups outside a galactic radius of ~4 kpc (although individual high-mass stars are found beyond this limit), suggesting a change in the structure of the star-forming interstellar medium, possibly reflected in the lack of GMCs beyond this radius. Finally, we find that the spatial distributions of H II regions, GMCs, and star-forming groups are all highly correlated.
Gieles M (2012) Mass loss of stars in star clusters: an energy source for dynamical evolution,
Dense star clusters expand until their sizes are limited by the tidal field
of their host galaxy. During this expansion phase the member stars evolve and
lose mass. We show that for clusters with short initial relaxation time scales
this energy release is not confined to the core, but happens on a relaxation
time scale. That is, the energy
release following stellar mass loss is in balance with the amount of energy
that is transported outward by two-body relaxation.
Campbell MA, Evans CJ, Mackey AD, Gieles M, Alves J, Ascenso J, Bastian N, Longmore AJ (2010) VLT-MAD observations of the core of 30 Doradus, Monthly Notices of the Royal Astronomical Society
We present H- and Ks-band imaging of three fields at the centre of 30 Doradus
in the Large Magellanic Cloud, obtained as part of the Science Demonstration
of the Multi-conjugate Adaptive optics Demonstrator (MAD) at the Very Large
Telescope. Strehl ratios of 15-30% were achieved in the Ks-band,
yielding near-infrared images of this dense and complex region at unprecedented
angular resolution at these wavelengths. The MAD data are used to construct a
near-infrared luminosity profile for R136, the cluster at the core of 30 Dor.
Using cluster profiles of the form used by Elson et al., we find the surface
brightness can be fit by a relatively shallow power-law function
(gamma~1.5-1.7) over the full extent of the MAD data, which extends to a radius
of ~40" (~10pc). We do not see compelling evidence for a break in the
luminosity profile as seen in optical data in the literature, arguing that
cluster asymmetries are the dominant source, although extinction effects and
stars from nearby triggered star-formation likely also contribute. These
results highlight the need to consider cluster asymmetries and multiple spatial
components in interpretation of the luminosity profiles of distant unresolved
clusters. We also investigate seven candidate young stellar objects reported by
Gruendl & Chu from Spitzer observations, six of which have apparent
counterparts in the MAD images. The most interesting of these (GC09:
053839.24-690552.3) appears related to a striking bow-shock--like feature,
orientated away from both R136 and the Wolf-Rayet star Brey 75, at distances of
19.5" and 8" (4.7 and 1.9pc in projection), respectively.
Smith LJ, Bastian N, Konstantopoulos IS, Gallagher JS, Gieles M, Grijs RD, Larsen SS, O'Connell RW, Westmoquette MS (2007) The Young Cluster Population of M82 Region B,
We present observations obtained with the Advanced Camera for Surveys on
board the Hubble Space Telescope of the "fossil" starburst region B in the
nearby starburst galaxy M82. By comparing UBVI photometry with models, we
derive ages and extinctions for 35 U-band selected star clusters. We find that
the peak epoch of cluster formation occurred ~ 150 Myr ago, in contrast to
earlier work that found a peak formation age of 1.1 Gyr. The difference is most
likely due to our inclusion of U-band data, which are essential for accurate
age determinations of young cluster populations. We further show that the
previously reported turnover in the cluster luminosity function is probably due
to the neglect of the effect of extended sources on the detection limit. The
much younger cluster ages we derive clarifies the evolution of the M82
starburst. The M82-B age distribution now overlaps with the ages of: the
nuclear starburst; clusters formed on the opposite side of the disk; and the
last encounter with M81, some 220 Myr ago.
Gieles M (2013) The mass and radius evolution of globular clusters in tidal fields,
We present a simple theory for the evolution of initially compact clusters in
a tidal field. The fundamental ingredient of the model is that a cluster
conducts a constant fraction of its own energy through the half-mass radius by
two-body interactions every half-mass relaxation time. This energy is produced
in a self-regulative way in the core by an (unspecified) energy source. We find
that the half-mass radius increases during the first part (roughly half) of the
evolution and decreases in the second half, while the escape rate is constant
and set by the tidal field. We present evolutionary tracks and isochrones for
clusters in terms of cluster half-mass density, cluster mass and
galacto-centric radius. We find substantial agreement between model isochrones
and Milky Way globular cluster parameters, which suggests that there is a
balance between the flow of energy and the central energy production for almost
all globular clusters. We also find that the majority of the globular clusters
are still expanding towards their tidal radius. Finally, a fast code for
cluster evolution is presented.
Gieles M, Larsen S, Bastian N, Stein I (2005) The luminosity function of young star clusters: implications for the maximum mass and luminosity of clusters,
We introduce a method to relate a possible truncation of the star cluster
mass function at the high mass end to the shape of the cluster luminosity
function (LF). We compare the observed LFs of five galaxies containing young
star clusters with synthetic cluster population models with varying initial
conditions. The LF of the SMC, the LMC and NGC 5236 are characterized by a
power-law behavior NdL~L^-a dL, with a mean exponent of = 2.0 +/- 0.2. This
can be explained by a cluster population formed with a constant cluster
formation rate, in which the maximum cluster mass per logarithmic age bin is
determined by the size-of-sample effect and therefore increases with
log(age/yr). The LFs of NGC 6946 and M51 are better described by a double
power-law distribution or a Schechter function. When a cluster population has a
mass function that is truncated below the limit given by the size-of-sample
effect, the total LF shows a bend at the magnitude of the maximum mass, with
the age of the oldest cluster in the population, typically a few Gyr due to
disruption. For NGC 6946 and M51 this implies a maximum mass of M_max = 5*10^5
M_sun. Faint-ward of the bend the LF has the same slope as the underlying
initial cluster mass function and bright-ward of the bend it is steeper. This
behavior can be well explained by our population model. We compare our results
with the only other galaxy for which a bend in the LF has been observed, the
'Antennae' galaxies (NGC 4038/4039). There the bend occurs at brighter magnitudes than in
NGC 6946 and M51, corresponding to a maximum cluster mass of M_max = 2*10^6
M_sun (abridged).
Zwart SP, McMillan S, Gieles M (2010) Young massive star clusters, Annual Review of Astronomy and Astrophysics
Young massive clusters are dense aggregates of young stars that form the
fundamental building blocks of galaxies. Several examples exist in the Milky
Way Galaxy and the Local Group, but they are particularly abundant in starburst
and interacting galaxies. The few young massive clusters that are close enough
to resolve are of prime interest for studying the stellar mass function and the
ecological interplay between stellar evolution and stellar dynamics. The
distant unresolved clusters may be effectively used to study the star-cluster
mass function, and they provide excellent constraints on the formation
mechanisms of young cluster populations. Young massive clusters are expected to
be the nurseries for many unusual objects, including a wide range of exotic
stars and binaries. So far only a few such objects have been found in young
massive clusters, although their older cousins, the globular clusters, are
unusually rich in stellar exotica. In this review we focus on star clusters
younger than ~100 Myr, more than a few current crossing times old, and
more massive than ~10^4 M_sun, irrespective of cluster size or
environment. We describe the global properties of the currently known young
massive star clusters in the Local Group and beyond, and discuss the state of
the art in observations and dynamical modeling of these systems. In order to
make this review readable by observers, theorists, and computational
astrophysicists, we also review the cross-disciplinary terminology.
Gieles M, Zwart SFP, Athanassoula E (2006) The effect of giant molecular clouds on star clusters,
We study the encounters between stars clusters and giant molecular clouds
(GMCs). The effect of these encounters has previously been studied analytically
for two cases: 1) head-on encounters, for which the cluster moves through the
centre of the GMC and 2) distant encounters, where the encounter distance p >
3*R_n, with p the encounter parameter and R_n the radius of the GMC. We
introduce an expression for the energy gain of the cluster due to GMC
encounters valid for all values of p and R_n. This analytical result is
confronted with results from N-body simulations and excellent agreement is
found. From the simulations we find that the fractional mass loss is only 25%
of the fractional energy gain. This is because stars escape with velocities
much higher than the escape velocity. Based on the mass loss, we derive a
disruption time for star clusters due to encounters with GMCs of the form t_dis
[Gyr] = 2.0*S*(M_c/10^4 M_sun)^gamma, with S=1 for the solar neighbourhood and
inversely proportional to the global GMC density, and gamma = 1 - 3*lambda, with
lambda the index that relates the cluster half-mass radius to the cluster mass
(r_h ~ M_c^lambda). The observed shallow relation between cluster radius and
mass (e.g. lambda=0.1), makes the index (gamma=0.7) similar to the index found
both from observations and from simulations of clusters dissolving in tidal
fields (gamma=0.62). The constant of 2.0 Gyr, which is the disruption time of a
10^4 M_sun cluster in the solar neighbourhood, is close to the value of 1.3 Gyr
which was empirically determined from the age distribution of open clusters.
This suggests that the combined effect of GMC encounters, stellar evolution and
galactic tidal field can explain the lack of old open clusters in the solar
neighbourhood.
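The disruption-time relation quoted in this abstract lends itself to a quick numerical check. The following is a minimal sketch (our own illustration, not the authors' code; the function name and default values are assumptions) that evaluates t_dis [Gyr] = 2.0*S*(M_c/10^4 M_sun)^gamma with gamma = 1 - 3*lambda:

```python
# Sketch of the GMC-encounter disruption-time scaling quoted above:
# t_dis [Gyr] = 2.0 * S * (M_c / 10^4 Msun)^gamma, with gamma = 1 - 3*lambda.
# S = 1 corresponds to the solar neighbourhood; S scales inversely with the
# global GMC density. lambda is the index of the r_h ~ M_c^lambda relation.

def t_dis_gmc(mass_msun, s=1.0, lam=0.1):
    """Disruption time (Gyr) of a cluster due to GMC encounters.

    mass_msun : cluster mass in solar masses
    s         : environment scaling (1 = solar neighbourhood)
    lam       : half-mass radius--mass index (0.1 gives gamma = 0.7)
    """
    gamma = 1.0 - 3.0 * lam
    return 2.0 * s * (mass_msun / 1e4) ** gamma

# A 10^4 Msun cluster in the solar neighbourhood survives ~2 Gyr:
print(round(t_dis_gmc(1e4), 2))  # prints 2.0
```

With lam = 0.1 the mass index is 0.7, close to the value gamma = 0.62 found for dissolution in tidal fields, as noted in the abstract.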
Gieles M, Larsen SS, Haas MR, Scheepmaker RA, Bastian N (2006) The Maximum Mass of Star Clusters,
When a universal untruncated star cluster initial mass function (CIMF)
described by a power-law distribution is assumed, the mass of the most massive
star cluster in a galaxy (M_max) is the result of the size-of-sample (SoS)
effect. This implies a dependence of M_max on the total number of star clusters
(N). The SoS effect also implies that M_max within a cluster population
increases with equal logarithmic intervals of age. This is because the number
of clusters formed in logarithmic age intervals increases (assuming a constant
cluster formation rate). This effect has been observed in the SMC and LMC.
Based on the maximum pressure (P_int) inside molecular clouds, it has been
suggested that a physical maximum mass (M_max[phys]) should exist. The theory
predicts that M_max[phys] should be observable, i.e. lower than M_max that
follows from statistical arguments, in big galaxies with a high star formation
rate. We compare the SoS relations in the SMC and LMC with the ones in M51 and
model the integrated cluster luminosity function (CLF) for two cases: 1) M_max
is determined by the SoS effect and 2) M_max=M_max[phys]=constant. The observed
CLF of M51 and the comparison of the SoS relations with the SMC and LMC both
suggest that there exists a M_max[phys] of 5*10^5 M_sun in M51. The CLF of M51
looks very similar to the one observed in the 'Antennae' galaxies. A direct
comparison with our model suggests that there M_max[phys]=2*10^6 M_sun.
Gieles M (2011) Dynamical evolution of stellar clusters,
The evolution of star clusters is determined by several internal and external
processes. Here we focus on two dominant internal effects, namely energy
exchange between stars through close encounters (two-body relaxation) and
mass-loss of the member stars through stellar winds and supernovae explosions.
Despite the fact that the former operates on the relaxation timescale of the
cluster and the latter on a stellar evolution timescale, these processes work
together in driving a nearly self-similar expansion, without forming (hard)
binaries. Low-mass clusters expand more, such that after some time the radii of
clusters depend very little on their masses, even if all clusters have the same
(surface) density initially. Throughout it is assumed that star clusters are in
virial equilibrium and well within their tidal boundary shortly after
formation, motivated by observations of young (few Myrs) clusters. We start
with a discussion on how star clusters can be distinguished from (unbound)
associations at these young ages.
Gieles M (2009) Basic Tools for Studies on the Formation and Disruption of Star Clusters: the Luminosity Function, arXiv
The luminosity function (LF) of young star clusters is often approximated by
a power law function. For clusters in a wide range of galactic environments
this has resulted in fit indices near -2, but on average slightly steeper. A
fundamental property of the -2 power law function is that the luminosity of the
brightest object (L_max) scales linearly with the total number of clusters,
which is close to what is observed. This suggests that the formation of Young
Massive Clusters (YMCs) is a result of the size of the sample, i.e. when the
SFR is high it is statistically more likely to form YMCs, but no particular
physical conditions are required. In this contribution we provide evidence that
the LF of young clusters is not a -2 power law, but instead is curved, showing
a systematic decrease of the (logarithmic) slope from roughly -1.8 at low
luminosities to roughly -2.8 at high luminosities. The empirical LFs can be
reproduced by model LFs using an underlying cluster IMF with a Schechter type
truncation around M*=2x10^5 M_sun. This value of M* can not be universal since
YMCs well in excess of this M* are known in merging galaxies and merger
remnants. Therefore, forming super massive clusters (>10^6 M_sun) probably
requires conditions different from those in (quiescent) spiral galaxies and
hence is not only the result of a size-of-sample effect. From the vertical
offset a cluster formation efficiency of ~10% is derived. We find indications
for this efficiency to be higher when the SFR is higher.
Sana H, Mink SED, Koter AD, Langer N, Evans CJ, Gieles M, Gosset E, Izzard RG, Bouquin J-BL, Schneider FRN (2012) Multiplicity of massive O stars and evolutionary implications,
Nearby companions alter the evolution of massive stars in binary systems.
Using a sample of Galactic massive stars in nearby young clusters, we
simultaneously measure all intrinsic binary characteristics relevant to
quantify the frequency and nature of binary interactions. We find a large
intrinsic binary fraction, a strong preference for short orbital periods and a
flat distribution for the mass-ratios. Our results do not support the presence
of a significant peak of equal-mass 'twin' binaries. As a result of the
measured distributions, we find that over seventy per cent of all massive stars
exchange mass with a companion. Such a rate greatly exceeds previous estimates
and implies that the majority of massive stars have their evolution strongly
affected by interaction with a nearby companion.
Lamers HJGLM, Baumgardt H, Gieles M (2013) The evolution of the global stellar mass function of star clusters: an analytic description, MONTHLY NOTICES OF THE ROYAL ASTRONOMICAL SOCIETY 433 (2) pp. 1378-1388 OXFORD UNIV PRESS
Gaburov E, Gieles M (2007) Integrated properties of mass segregated star clusters,
In this contribution we study integrated properties of dynamically segregated
star clusters. The observed core radii of segregated clusters can be 50%
smaller than the 'true' core radius. In addition, the radii measured in the
red filters are smaller than those measured in the blue filters. However, these
differences are small (~10% or less), making it observationally challenging to
detect mass segregation in extra-galactic clusters based on such a comparison.
Our results follow naturally from the fact that in nearly all filters most of
the light comes from the most massive stars. Therefore, the observed surface
brightness profile is dominated by stars of similar mass, which are centrally
concentrated and have a similar spatial distribution.
Gieles M, Alexander P, Lamers H, Baumgardt H (2013) A prescription and fast code for the long-term evolution of star clusters - II. Unbalanced and core evolution, Mon. Not. R. Astron. Soc. 437 pp. 916-929
We introduce version two of the fast star cluster evolution code Evolve Me A
Cluster of StarS (EMACSS). The first version (Alexander & Gieles) assumed that
cluster evolution is balanced for the majority of the life-cycle, meaning that
the rate of energy generation in the core of the cluster equals the diffusion
rate of energy by two-body relaxation, which makes the code suitable for
modelling clusters in weak tidal fields. In this new version we extend the
model to include an unbalanced phase of evolution to describe the pre-collapse
evolution and the accompanying escape rate such that clusters in strong tidal
fields can also be modelled. We also add a prescription for the evolution of
the core radius and density and a related cluster concentration parameter. The
model simultaneously solves a series of first-order ordinary differential
equations for the rate of change of the core radius, half-mass radius and the
number of member stars N. About two thousand integration steps in time are
required to solve for the entire evolution of a star cluster and this number is
approximately independent of N. We compare the model to the variation of these
parameters following from a series of direct N-body calculations of single-mass
clusters and find good agreement in the evolution of all parameters. Relevant
time-scales, such as the total lifetimes and core collapse times, are
reproduced with an accuracy of about 10% for clusters with various initial
half-mass radii (relative to their Jacobi radii) and a range of different
initial N up to N = 65536. We intend to extend this framework to include more
realistic initial conditions, such as a stellar mass spectrum and mass loss
from stars. The EMACSS code can be used in star cluster population studies and
in models that consider the co-evolution of (globular) star clusters and large
scale structures.
Scheepmaker RA, Gieles M, Haas MR, Bastian N, Larsen SS, Lamers HJGLM (2006) The radii of thousands of star clusters in M51 with HST/ACS,
We exploit the superb resolution of the new HST/ACS mosaic image of M51 to
select a large sample of young star clusters based on their sizes. The image
covers the entire spiral disk in B, V, I and
H_alpha, at a resolution of 2 pc per pixel. The surface density distribution of
4357 resolved clusters shows that the clusters are more correlated with clouds
than with stars, and we find a hint of enhanced cluster formation at the
corotation radius. The radius distribution of a sample of 769 clusters with
more accurate radii suggests that young star clusters have a preferred
effective radius of ~3 pc, which is similar to the preferred radius of the much
older GCs. However, in contrast to the GCs, the young clusters in M51 do not
show a relation between radius and galactocentric distance. This means that the
clusters did not form in tidal equilibrium with their host galaxy, nor that
their radius is related to the ambient pressure.
Lamers HJGLM, Gieles M (2006) Clusters in the solar neighbourhood: how are they destroyed?,
We predict the survival time of initially bound star clusters in the solar
neighbourhood taking into account: (1) stellar evolution, (2) tidal stripping,
(3) shocking by spiral arms and (4) encounters with giant molecular clouds. We
find that the predicted dissolution time is t_dis = 1.7 (M_i/10^4 M_sun)^0.67
Gyr for clusters in the mass range of 10^2 to 3x10^4 M_sun, where M_i is the
initial mass of the cluster. The resulting predicted shape of the
logarithmic age distribution agrees very well with the empirical one, derived
from a complete sample of clusters in the solar neighbourhood within 600 pc.
The required scaling factor implies a star formation rate of 400 M_sun/Myr
within 600 pc from the Sun or a surface formation rate of 3.5 10^-10 M_sun/(yr
pc^2) for stars in bound clusters with an initial mass in the range of 10^2 to
3 10^4 M_sun.
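The combined dissolution-time relation quoted above can be tabulated directly. This is a minimal sketch under the stated assumptions (solar neighbourhood, initially bound clusters in the quoted mass range); the function name is our own:

```python
# Sketch of the dissolution-time relation quoted above for the solar
# neighbourhood: t_dis = 1.7 * (M_i / 10^4 Msun)^0.67 Gyr,
# quoted as valid for initial masses of roughly 10^2 to 3x10^4 Msun.

def t_dis_solar(mass_msun):
    """Predicted dissolution time (Gyr) of an initially bound cluster of
    initial mass mass_msun (solar masses) in the solar neighbourhood."""
    return 1.7 * (mass_msun / 1e4) ** 0.67

# Tabulate over the quoted validity range:
for m in (1e2, 1e3, 1e4, 3e4):
    print(f"M_i = {m:9.0f} Msun -> t_dis = {t_dis_solar(m):5.2f} Gyr")
```

The shallow index (0.67) means low-mass clusters dissolve much faster than massive ones, consistent with the age distribution argument in the abstract.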
Gieles M, Baumgardt H, Bastian N, Lamers HJGLM (2004) Theoretical and Observational Agreement on Mass Dependence of Cluster Life Times,
Observations and N-body simulations both support a simple relation for the
disruption time of a cluster as a function of its mass of the form: t_dis = t_4
* (M/10^4 Msun)^gamma. The scaling factor t_4 seems to depend strongly on the
environment. Predictions and observations show that gamma ~ 0.64 +/- 0.06.
Assuming that t_dis ~ M^0.64 is caused by evaporation and shocking implies a
relation between the radius and the mass of a cluster of the form: r_h ~
M^0.07, which has been observed in a few galaxies. The suggested relation for
the disruption time implies that the lower mass end of the cluster initial mass
function will be disrupted faster than the higher mass end, which is needed to
evolve a young power law shaped mass function into the log-normal mass function
of old (globular) clusters.
Gieles M (2009) What determines the mass of the most massive star cluster in a galaxy: Statistics, physics or disruption?, Astrophysics and Space Science 324 (2) pp. 299-304
In many different galactic environments the cluster initial mass function (CIMF) is well described by a power law with index -2. This implies a linear relation between the mass of the most massive cluster (M_max) and the number of clusters. Assuming a constant cluster formation rate and no disruption of the most massive clusters, it also means that M_max increases linearly with age when determining M_max in logarithmic age bins. We observe this increase in five out of the seven galaxies in our sample, suggesting that M_max is determined by the size of the sample. It also means that massive clusters are very stable against disruption, in disagreement with the mass-independent disruption (MID) model. For the clusters in M51 and the Antennae galaxies, the size-of-sample prediction breaks down around 10^6 M_sun, suggesting that this is a physical upper limit to the masses of star clusters in these galaxies. In this method there is a degeneracy between MID and a CIMF truncation. We show how the cluster luminosity function can serve as a tool to distinguish between the two.
Renaud F, Gieles M (2015) The effect of secular galactic growth on the evolution of star clusters,
The growth of galaxies through adiabatic accretion of dark matter is one of
the main drivers of galaxy evolution. By isolating it from other processes like
mergers, we analyse how it affects the evolution of star clusters. Our study
comprises a fast and approximate exploration of the orbital and intrinsic
cluster parameter space, and more detailed monitoring of their evolution,
through N-body simulations for a handful of cases. We find that the properties
of present-day star clusters and their tidal tails differ very little, whether
the clusters are embedded in a growing galactic halo for 12 Gyr, or in a static
one.
Lamers HJGLM, Gieles M, Zwart SFP (2004) Disruption time scales of star clusters in different galaxies, Astron.Astrophys. 429 pp. 173-179
The observed average lifetime of the population of star clusters in the Solar
Neighbourhood, the Small Magellanic Cloud and in selected regions of M51 and
M33 is compared with simple theoretical predictions and with the results of
N-body simulations. The empirically derived lifetimes (or disruption times) of
star clusters depend on their initial mass as t_dis ~ Mcl^0.60 in all four
galaxies. N-body simulations have shown that the predicted disruption time of
clusters in a tidal field scales as t_dis^pred ~ t_rh^0.75 t_cr^0.25, where
t_rh is the initial half-mass relaxation time and t_cr is the crossing time for
a cluster in equilibrium. We show that this can be approximated accurately by
t_dis^pred ~ M_cl^0.62 for clusters in the mass range of about 10^3 to 10^6
M_sun, in excellent agreement with the observations. Observations of clusters
in different extragalactic environments show that t_dis also depends on the
ambient density in the galaxies where the clusters reside. Linear analysis
predicts that the disruption time will depend on the ambient density of the
cluster environment as t_dis ~ rho_amb^-0.5. This relation is consistent with
N-body simulations.
Carballo-Bello JA, Gieles M, Sollima A, Koposov S, Martínez-Delgado D, Peñarrubia J (2011) Outer density profiles of 19 Galactic globular clusters from deep and
wide-field imaging,
Monthly Notices of the Royal Astronomical Society
Using deep photometric data from WFC@INT and [email protected] we measure the outer
number density profiles of 19 stellar clusters located in the inner region of
the Milky Way halo (within a Galactocentric distance range of 10-30 kpc) in
order to assess the impact of internal and external dynamical processes on the
spatial distribution of stars. Adopting power-law fitting templates, with index
$-\gamma$ in the outer region, we find that the clusters in our sample can be
divided in two groups: a group of massive clusters ($\ge 10^5$ M_sun) that
has relatively flat profiles with $2.5 < \gamma < 4$, and a group of low-mass
clusters ($\le 10^5$ M_sun), with steep profiles ($\gamma > 4$) and clear
signatures of interaction with the Galactic tidal field. We refer to these two
groups as 'tidally unaffected' and 'tidally affected', respectively. Our
results also show a clear trend between the slope of the outer parts and the
half-mass density of these systems, which suggests that the outer density
profiles may retain key information on the dominant processes driving the
dynamical evolution of Globular Clusters.
Gieles M, Larsen S, Scheepmaker R, Bastian N, Haas M, Lamers H (2005) Observational evidence for a truncation of the star cluster initial mass
function at the high mass end,
We present the luminosity function (LF) of star clusters in M51 based on
HST/ACS observations taken as part of the Hubble Heritage project. The clusters
are selected based on their size and with the resulting 5990 clusters we
present one of the largest cluster samples of a single galaxy. We find that the
LF can be approximated with a double power-law distribution with a break around
M_V = -8.9. On the bright side the index of the power-law distribution is
steeper (a = 2.75) than on the faint-side (a = 1.93), similar to what was found
earlier for the "Antennae" galaxies. The location of the bend, however,
occurs about 1.6 mag fainter in M51. We confront the observed LF with the model
for the evolution of integrated properties of cluster populations of Gieles et
al., which predicts that a truncated cluster initial mass function would result
in a bend in, and a double power-law behaviour of, the integrated LF. The
combination of the large field-of view and the high star cluster formation rate
of M51 make it possible to detect such a bend in the LF. Hence, we conclude
that there exists a fundamental upper limit to the mass of star clusters in
M51. Assuming a power-law cluster initial mass function with exponential
cut-off of the form NdM ~ M^-b * exp(-M/M_C)dM, we find that M_C = 10^5 M_sun.
A direct comparison with the LF of the "Antennae" suggests that there M_C =
4*10^5 M_sun.
Gieles M (2006) The role of tidal forces in star cluster disruption,
Star clusters are subject to density irregularities in their host galaxy,
such as giant molecular clouds (GMCs), the galactic disc and spiral arms, which
are largely ignored in present day (N-body) simulations of cluster evolution.
Time dependent external potentials give rise to tidal forces that accelerate
stars leading to an expansion and more rapid dissolution of the cluster. I
explain the basic principles of this tidal heating in the impulse approximation
and show how related disruption time-scales depend on properties of the
cluster.
Bastian N, Ercolano B, Gieles M (2009) Hierarchical star formation in M33: properties of the star-forming regions, Astrophysics and Space Science pp. 1-5
Star formation within galaxies occurs on multiple scales, from spiral structure, to OB associations, to individual star clusters, and often as substructure within these clusters. This multitude of scales calls for objective methods to find and classify star-forming regions, regardless of spatial size. To this end, we present an analysis of star-forming groups in the Local Group spiral galaxy M33, based on a new implementation of the Minimum Spanning Tree (MST) method. Unlike previous studies, which limited themselves to a single spatial scale, we study star-forming structures from the effective resolution limit (
Contenta F, Gieles M, Balbinot E, Collins MLM (2016) The contribution of dissolving star clusters to the population of ultra faint objects in the outer halo of the Milky Way, Monthly Notices of the Royal Astronomical Society 466 (2) pp. 1741-1756 Oxford University Press
In the last decade, several ultra faint objects (UFOs, M_V ≳ -3.5) have been discovered in the outer halo of the Milky Way. For some of these objects, it is not clear whether they are star clusters or (ultra faint) dwarf galaxies. In this work, we quantify the contribution of star clusters to the population of UFOs. We extrapolated the mass and Galactocentric radius distribution of the globular clusters using a population model, finding that the Milky Way contains about 3.3^{+7.3}_{-1.6} star clusters with M_V ≳ -3.5 and Galactocentric radius ≥ 20 kpc. To understand whether dissolving clusters can appear as UFOs, we run a suite of direct N-body models, varying the orbit, the Galactic potential, the binary fraction and the black hole (BH) natal kick velocities. In the analyses, we consider observational biases such as luminosity limit, field stars and line-of-sight projection. We find that star clusters contribute to both the compact and the extended population of UFOs: clusters without BHs appear compact with radii
Renaud F, Agertz O, Gieles M (2017) The origin of the Milky Way globular clusters, Monthly Notices of the Royal Astronomical Society 465 (3) pp. 3622-3636 Oxford University Press
We present a cosmological zoom-in simulation of a Milky Way-like galaxy used to explore the formation and evolution of star clusters. We investigate in particular the origin of the bimodality observed in the colour and metallicity of globular clusters, and the environmental evolution through cosmic times in the form of tidal tensors. Our results self-consistently confirm previous findings that the blue, metal-poor clusters form in satellite galaxies which are accreted onto the Milky Way, while the red, metal-rich clusters form mostly in situ or, to a lower extent in massive, self-enriched galaxies merging with the Milky Way. By monitoring the tidal fields these populations experience, we find that clusters formed in situ (generally centrally concentrated) feel significantly stronger tides than the accreted ones, both in the present-day, and when averaged over their entire life. Furthermore, we note that the tidal field experienced by Milky Way clusters is significantly weaker in the past than at present-day, confirming that it is unlikely that a power-law cluster initial mass function like that of young massive clusters, is transformed into the observed peaked distribution in the Milky Way with relaxation-driven evaporation in a tidal field.
Claydon I, Gieles M, Zocchi A (2017) The properties of energetically unbound stars in stellar clusters, Monthly Notices of the Royal Astronomical Society 466 (4) pp. 3937-3950 Oxford University Press
Several Milky Way star clusters show a roughly flat velocity dispersion profile at large radii, which is not expected from models with a tidal cut-off energy. Possible explanations for this excess velocity include: the effects of a dark matter halo, modified gravity theories and energetically unbound stars inside of clusters. These stars are known as potential escapers (PEs) and can exist indefinitely within clusters which are on circular orbits. Through a series of N-body simulations of star cluster systems, where we vary the galactic potential, orbital eccentricity and stellar mass function, we investigate the properties of the PEs and their effects on the kinematics. We derive a prediction for the scaling of the velocity dispersion at the Jacobi surface due to PEs, as a function of cluster mass, angular velocity of the cluster orbit, and slope of the mass profile of the host galaxy. We see a tentative signal of the mass and orbital velocity dependence in kinematic data of globular clusters from the literature. We also find that the fraction of PEs depends sensitively on the galactic mass profile, reaching as high as 40% in the cusp of a Navarro-Frenk-White profile. As the velocity anisotropy also depends on the slope of the galactic mass profile, we conclude that PEs provide an independent way of inferring the properties of the dark matter mass profile at the galactic radius of (globular) clusters in the Gaia era.
Peuten M, Zocchi A, Gieles M, Gualandris A, Henault-Brunet V (2016) A stellar-mass black hole population in the globular cluster NGC 6101?, Monthly Notices of the Royal Astronomical Society 462 (3) pp. 2333-2342 Oxford University Press
Dalessandro et al. observed a similar distribution for blue straggler stars and main-sequence turn-off stars in the Galactic globular cluster NGC 6101, and interpreted this feature as an indication that this cluster is not mass-segregated. Using direct N-body simulations, we find that a significant amount of mass segregation is expected for a cluster with the mass, radius and age of NGC 6101. Therefore, the absence of mass segregation cannot be explained by the argument that the cluster is not yet dynamically evolved. By varying the retention fraction of stellar-mass black holes, we show that segregation is not observable in clusters with a high black hole retention fraction (>50 per cent after supernova kicks and >50 per cent after dynamical evolution). Yet all model clusters have the same amount of mass segregation in terms of the decline of the mean mass of stars and remnants with distance to the centre. We also discuss how kinematics can be used to further constrain the presence of a stellar-mass black hole population and distinguish it from the effect of an intermediate-mass black hole. Our results imply that the kick velocities of black holes are lower than those of neutron stars. The large retention fraction during its dynamical evolution can be explained if NGC 6101 formed with a large initial radius in a Milky Way satellite.
Gieles M, Renaud F (2016) If it does not kill them, it makes them stronger: collisional evolution of star clusters with tidal shocks, Monthly Notices of the Royal Astronomical Society 463 (1) pp. L103-L107 Oxford University Press
The radii of young (. 100 Myr) star clusters correlate only weakly with their masses. This shallow relation has been used to argue that impulsive tidal perturbations, or ?shocks?, by passing giant molecular clouds (GMCs) preferentially disrupt low-mass clusters. We show that this mass-radius relation is in fact the result of the combined effect of two-body relaxation and repeated tidal shocks. Clusters in a broad range of environments including those like the solar neighbourhood evolve towards a typical radius of a few parsecs, as observed, independent of the initial radius. This equilibrium mass-radius relation is the result of a competition between expansion by relaxation and shrinking due to shocks. Interactions with GMCs are more disruptive for low-mass clusters, which helps to evolve the globular cluster mass function (GCMF). However, the properties of the interstellar medium in high-redshift galaxies required to establish a universal GCMF shape are more extreme than previously derived, challenging the idea that all GCs formed with the same power-law mass function.
Peuten M, Zocchi A, Gieles M (2017) Testing lowered isothermal models with direct N-body
simulations of globular clusters - II. Multimass models,
Monthly Notices of the Royal Astronomical Society 470 (3) pp. 2736-2761 Oxford University Press
Lowered isothermal models, such as the multimass Michie-King models, have been successful in describing observational data of globular clusters. In this study, we assess whether such models are able to describe the phase space properties of evolutionary N-body models. We compare the multimass models as implemented in LIMEPY (Gieles & Zocchi) to N-body models of star clusters with different retention fractions for the black holes and neutron stars evolving in a tidal field. We find that multimass models successfully reproduce the density and velocity dispersion profiles of the different mass components in all evolutionary phases and for different remnant retention fractions. We further use these results to study the evolution of global model parameters. We find that over the lifetime of clusters, radial anisotropy gradually evolves from the low- to the high-mass components and we identify features in the properties of observable stars that are indicative of the presence of stellar-mass black holes. We find that the model velocity scale depends on mass as m^-δ, with δ ≈ 0.5 for almost all models, but the dependence of central velocity dispersion on m can be shallower, depending on the dark remnant content, and agrees well with that of the N-body models. The reported model parameters, and correlations amongst them, can be used as theoretical priors when fitting these types of mass models to observational data.
Zocchi A, Gieles M, Henault-Brunet V (2017) Radial anisotropy in ω Cen limiting the room for an intermediate-mass
black hole,
Monthly Notices of the Royal Astronomical Society 468 (4) pp. 4429-4440 Oxford University Press
Finding an intermediate-mass black hole (IMBH) in a globular cluster (or proving its absence) would provide valuable insights into our understanding of galaxy formation and evolution. However, it is challenging to identify a unique signature of an IMBH that cannot be accounted for by other processes. Observational claims of IMBH detection are indeed often based on analyses of the kinematics of stars in the cluster core, the most common signature being a rise in the velocity dispersion profile towards the centre of the system. Unfortunately, this IMBH signal is degenerate with the presence of radially-biased pressure anisotropy in the globular cluster. To explore the role of anisotropy in shaping the observational kinematics of clusters, we analyse the case of ω Cen by comparing the observed profiles to those calculated from the family of LIMEPY models, which account for the presence of anisotropy in the system in a physically motivated way. The best-fit radially anisotropic models reproduce the observational profiles well, and describe the central kinematics as derived from Hubble Space Telescope proper motions without the need for an IMBH.
Almeida L, Sana H, Taylor W, Barbá R, Bonanos A, Crowther P, Damineli A, de Koter A, de Mink S, Evans C, Gieles M, Grin N, Hénault-Brunet V, Langer N, Lennon D, Lockwood S, Maíz Apellániz J, Moffat A, Neijssel C, Norman C, Ramírez-Agudelo O, Richardson N, Schootemeijer A, Shenar T, Soszyński I, Tramper F, Vink J (2017) The Tarantula Massive Binary Monitoring. I. Observational campaign and OB-type spectroscopic binaries, Astronomy and Astrophysics 598 A84 EDP Sciences
Context. Massive binaries play a crucial role in the Universe. Knowing the distributions of their orbital parameters is important for a wide range of topics from stellar feedback to binary evolution channels and from the distribution of supernova types to gravitational wave progenitors, yet no direct measurements exist outside the Milky Way. Aims. The Tarantula Massive Binary Monitoring project was designed to help fill this gap by obtaining multi-epoch radial velocity (RV) monitoring of 102 massive binaries in the 30 Doradus region. Methods. In this paper we analyze 32 FLAMES/GIRAFFE observations of 93 O- and 7 B-type binaries. We performed a Fourier analysis and obtained orbital solutions for 82 systems: 51 single-lined (SB1) and 31 double-lined (SB2) spectroscopic binaries. Results. Overall, the binary fraction and orbital properties across the 30 Doradus region are found to be similar to existing Galactic samples. This indicates that within these domains environmental effects are of second order in shaping the properties of massive binary systems. A small difference is found in the distribution of orbital periods, which is slightly flatter (in log space) in 30 Doradus than in the Galaxy, although this may be compatible within error estimates and differences in the fitting methodology. Also, orbital periods in 30 Doradus can be as short as 1.1 d, somewhat shorter than seen in Galactic samples. Equal mass binaries (q > 0.95) in 30 Doradus are all found outside NGC 2070, the central association that surrounds R136a, the very young and massive cluster at 30 Doradus's core. Most of the differences, albeit small, are compatible with expectations from binary evolution. One outstanding exception, however, is the fact that earlier spectral types (O2-O7) tend to have shorter orbital periods than later spectral types (O9.2-O9.7). Conclusions.
Our results point to a relative universality of the incidence rate of massive binaries and their orbital properties in the metallicity range from solar (Z_sun) to about half solar. This provides the first direct constraints on massive binary properties in massive star-forming galaxies at the Universe's peak of star formation at redshifts z ~ 1 to 2, which are estimated to have Z ~ 0.5 Z_sun.
Balbinot E, Gieles M (2017) The devil is in the tails: the role of globular cluster mass evolution on stream properties, Monthly Notices of the Royal Astronomical Society 474 (2) pp. 2479-2492 Oxford University Press (OUP)
We present a study of the effects of collisional dynamics on the formation and detectability
of cold tidal streams. A semi-analytical model for the evolution of the stellar mass function
was implemented and coupled to a fast stellar stream simulation code, as well as the synthetic
cluster evolution code EMACSS for the mass evolution as a function of a globular cluster
orbit. We find that the increase in the average mass of the escaping stars for clusters close
to dissolution has a major effect on the observable stream surface density. As an example,
we show that Palomar 5 would have undetectable streams (in an SDSS-like survey) if it was
currently three times more massive, despite the fact that a more massive cluster loses stars
at a higher rate. This bias due to the preferential escape of low-mass stars is an alternative
explanation for the absence of tails near massive clusters, than a dark matter halo associated
with the cluster. We explore the orbits of a large sample of Milky Way globular clusters and
derive their initial masses and remaining mass fraction. Using properties of known tidal tails
we explore regions of parameter space that favour the detectability of a stream. A list of high
probability candidates is discussed.
Gieles M, Balbinot E, Yaaqib R, Hénault-Brunet V, Zocchi A, Peuten M, Jonker P (2017) Mass models of NGC 6624 without an intermediate-mass black hole, Monthly Notices of the Royal Astronomical Society 473 (4) pp. 4832-4839 Oxford University Press (OUP)
An intermediate-mass black hole (IMBH) was recently reported to reside in the centre of the Galactic globular cluster (GC) NGC 6624, based on timing observations of a millisecond pulsar (MSP) located near the cluster centre in projection. We present dynamical models with multiple mass components of NGC 6624, without an IMBH, which successfully describe the surface brightness profile and proper motion kinematics from the Hubble Space Telescope (HST) and the stellar mass function at different distances from the cluster centre. The maximum line-of-sight acceleration at the position of the MSP accommodates the inferred acceleration of the MSP, as derived from its first period derivative. With discrete realisations of the models we show that the higher-order period derivatives, which were previously used to derive the IMBH mass, are due to passing stars and stellar remnants, as previously shown analytically in the literature. We conclude that there is no need for an IMBH to explain the timing observations of this MSP.
Martinez-Medina L, Gieles M, Pichardo B, Peimbert A (2017) New insights in the origin and evolution of the old, metal-rich open cluster NGC 6791, Monthly Notices of the Royal Astronomical Society 474 (1) pp. 32-44 Oxford University Press (OUP)
NGC 6791 is one of the most studied open clusters, it is massive (
I present a new semi-analytic dynamical friction model built upon Chandrasekhar's formalism (Petts et al., 2015, 2016), and its first scientific application regarding the origin of the young stellar populations in the Galactic Centre (Petts and Gualandris, 2017). The model is accurate for spherical potentials of varying inner slope, gamma=[0,2], due to a few key novelties. Firstly, I use physically motivated, radially varying maximum and minimum impact parameters that describe the range over which interactions are important. Secondly, I use the self-consistent velocity distribution as derived from the distribution function of the galactic potential, including the effect of stars moving faster than the satellite. Finally, I reproduce the core-stalling effect seen in simulations of cored galaxies with a 'tidal-stalling' prescription, which describes when the satellite disrupts the galaxy and forms a steady-state. I implemented dynamical friction analytically in the direct summation N-body code, NBODY6, excellently reproducing the orbital decay of clusters as compared with full N-body models. Since only cluster stars need be modelled in an N-body fashion, my method allows for simulation possibilities that were previously prohibited (e.g. Contenta et al., 2017; Inoue, 2017; Cole et al., 2017).
Using this new method, I explore the scenario in which the young stellar populations in the central parsec of the Milky Way were formed by infalling star clusters. I find that clusters massive enough to reach the central parsec within the lifetime of these populations form very massive stars via collisions. Using up to date - yet conservative - mass loss recipes, I find that these very massive stars lose most of their mass via strong stellar winds, forming large stellar mass black holes incapable of bringing stars to the central parsec. A star cluster infalling in the Galactic Centre within the last 15 Myr would leave an observable population of massive stars from ~1-10 pc, contradicting observations. Thus, I rule out the star cluster inspiral scenario, favouring in-situ formation and/or binary disruption for the origin of the young stars.
Lucas W, Rybak M, Bonnell I, Gieles M (2017) A clustered origin for isolated massive stars, Monthly Notices of the Royal Astronomical Society 474 (3) pp. 3582-3592 Oxford University Press
High-mass stars are commonly found in stellar clusters promoting the idea that their formation occurs due to the physical processes linked with a young stellar cluster. It has recently been reported that isolated high-mass stars are present in the Large Magellanic Cloud. Due to their low velocities it has been argued that these are high-mass stars which formed without a surrounding stellar cluster. In this paper we present an alternative explanation for the origin of these stars in which they formed in a cluster environment but are subsequently dispersed into the field as their natal cluster is tidally disrupted in a merger with a higher-mass cluster. They escape the merged cluster with relatively low velocities typical of the cluster interaction and thus of the larger scale velocity dispersion, similarly to the observed stars. N-body simulations of cluster mergers predict a sizeable population of low velocity (≲ 20 km s^-1), high-mass stars at distances of >20 pc from the cluster. High-mass clusters in which gas poor mergers are frequent would be expected to commonly have halos of young stars, including high-mass stars, that were actually formed in a cluster environment.
Star clusters are collisional and dark matter (DM) free stellar systems, where their evolution is ruled by two-body interactions and the galactic potential. Using direct summation N-body simulations, I study how the observational properties of star clusters can be used to: (i) distinguish between DM free and DM dominated objects. From observations, the nature of several faint stellar systems in the Milky Way halo is not clear, therefore, I quantify the contribution of star clusters to the faint stellar systems population. (ii) Probe the underlying DM density of their host galaxy. I apply a new method to the recently discovered Eridanus II ultra-faint dwarf galaxy that hosts a star cluster in its centre. I find that a cored DM density profile naturally reproduces the observed properties of Eridanus II's star cluster. (iii) Infer their progenitor properties if they are accreted star clusters, such as Crater. From its properties I find that Crater is likely to be tidally stripped from a dwarf galaxy, and it must have formed extended and with a low concentration. Throughout this thesis, the comparison of simulations and data took into consideration observational biases and uncertainties. I show that the initial conditions of a star cluster can heavily influence its present-day properties, and that the stellar evolution prescriptions can also impact the final star cluster properties, such as the neutron star natal kick distribution. I conclude, through a series of test cases, that N-body simulations can be used to reproduce the observed properties of star clusters, and these can ultimately probe their host galaxy DM distribution.
Gieles M, Zocchi A (2017) Erratum: A family of lowered isothermal models, Monthly Notices of the Royal Astronomical Society 474 (3) pp. 3997-3997 Oxford University Press
This is a correction to: Monthly Notices of the Royal Astronomical Society, Volume 454, Issue 1, 21 November 2015, Pages 576?592, https://doi.org/10.1093/mnras/stv1848
Schneider F, Sana H, Evans C, Bestenlehner J, Castro N, Fossati L, Gräfener G, Langer N, Ramírez-Agudelo O, Sabín-Sanjulián C, Simón-Díaz S, Tramper F, Crowther P, de Koter A, de Mink S, Dufton P, Garcia M, Gieles M, Hénault-Brunet V, Herrero A, Izzard R, Kalari V, Lennon D, Maíz Apellániz J, Markova N, Najarro F, Podsiadlowski P, Puls J, Taylor W, van Loon J, Vink J, Norman C (2018) An excess of massive stars in the local 30 Doradus starburst, Science 359 (6371) pp. 69-71 American Association for the Advancement of Science
The 30 Doradus star-forming region in the Large Magellanic Cloud is a nearby analog of large star-formation events in the distant universe. We determined the recent formation history and the initial mass function (IMF) of massive stars in 30 Doradus on the basis of spectroscopic observations of 247 stars more massive than 15 solar masses (M_sun). The main episode of massive star formation began about 8 million years (My) ago, and the star-formation rate seems to have declined in the last 1 My. The IMF is densely sampled up to 200 M_sun and contains 32 ± 12% more stars above 30 M_sun than predicted by a standard Salpeter IMF. In the mass range of 15 to 200 M_sun, the IMF power-law exponent is shallower than the Salpeter value of 2.35.
Forbes D, Bastian N, Gieles M, Crain R, Kruijssen J, Larsen S, Ploeckinger S, Agertz O, Trenti M, Ferguson A, Pfeffer J, Gnedin O (2018) Globular Cluster Formation and Evolution in the Context of Cosmological Galaxy Assembly: Open Questions, Proceedings of the Royal Society A: Mathematical, Physical and Engineering Sciences 474 (2210) The Royal Society
We discuss some of the key open questions regarding the formation and evolution of globular clusters (GCs) during galaxy formation and assembly within a cosmological framework. The current state-of-the-art for both observations and simulations is described, and we briefly mention directions for future research. The oldest GCs have ages ≳ 12.5 Gyr and formed around the time of reionisation. Resolved colour-magnitude diagrams of Milky Way GCs and direct imaging of lensed proto-GCs at z
Contenta Filippo, Balbinot Eduardo, Petts James, Read Justin, Gieles Mark, Collins Michelle, Peñarrubia Jorge, Delorme Maxime, Gualandris Alessia (2018) Probing dark matter with star clusters: a dark matter core in the ultra-faint dwarf Eridanus II, Monthly Notices of the Royal Astronomical Society Oxford University Press (OUP)
We present a new technique to probe the central dark matter (DM) density profile of galaxies that harnesses both the survival and observed properties of star clusters. As a first application, we apply our method to the 'ultra-faint' dwarf Eridanus II (Eri II) that has a lone star cluster ~45 pc from its centre. Using a grid of collisional N-body simulations, incorporating the effects of stellar evolution, external tides and dynamical friction, we show that a DM core for Eri II naturally reproduces the size and the projected position of its star cluster. By contrast, a dense cusped galaxy requires the cluster to lie implausibly far from the centre of Eri II (>1 kpc), with a high inclination orbit that must be observed at a particular orbital phase. Our results imply that either a cold DM cusp was 'heated up' at the centre of Eri II by bursty star formation, or we are seeing evidence for physics beyond cold DM.
Gieles Mark, Charbonnel C, Krause M, Hénault-Brunet V, Agertz Oscar, Lamers H, Bastian N, Gualandris Alessia, Zocchi A, Petts James (2018) Concurrent formation of supermassive stars and globular clusters:
implications for early self-enrichment,
Monthly Notices of the Royal Astronomical Society 478 (2) sty1059 pp. 2461-2479 Oxford University Press
We present a model for the concurrent formation of globular clusters (GCs) and supermassive stars (SMSs, > 10^3 M_sun) to address the origin of the He, C, N, O, Na, Mg, Al abundance anomalies in GCs. GCs form in converging gas flows and accumulate low-angular momentum gas, which accretes onto protostars. This leads to an adiabatic contraction of the cluster and an increase of the stellar collision rate. A SMS can form via runaway collisions if the cluster reaches sufficiently high density before two-body relaxation halts the contraction. This condition is met if the number of stars is ≳ 10^6 and the gas accretion rate is ≳ 10^5 M_sun/Myr, reminiscent of GC formation in high gas-density environments, such as -- but not restricted to -- the early Universe. The strong SMS wind mixes with the inflowing pristine gas, such that the protostars accrete diluted hot-hydrogen burning yields of the SMS. Because of continuous rejuvenation, the amount of processed material liberated by the SMS can be an order of magnitude higher than its maximum mass. This 'conveyor-belt' production of hot-hydrogen burning products provides a solution to the mass budget problem that plagues other scenarios. Additionally, the liberated material is mildly enriched in helium and relatively rich in other hot-hydrogen burning products, in agreement with abundances of GCs today. Finally, we find a super-linear scaling between the amount of processed material and cluster mass, providing an explanation for the observed increase of the fraction of processed material with GC mass. We discuss open questions of this new GC enrichment scenario and propose observational tests.
Zocchi Alice, Gieles Mark, Hénault-Brunet Vincent (2018) The effect of stellar-mass black holes on the central kinematics of ω Cen: a cautionary tale for IMBH interpretations, Monthly Notices of the Royal Astronomical Society Oxford University Press
The search for intermediate-mass black holes (IMBHs) in the centre of globular clusters is
often based on the observation of a central cusp in the surface brightness profile and a rise
towards the centre in the velocity dispersion profiles. Similar signatures, however, could result
from other effects, that need to be taken into account in order to determine the presence (or
the absence) of an IMBH in these stellar systems. Following our previous exploration of the
role of radial anisotropy in shaping these observational signatures, we analyse here the effects
produced by the presence of a population of centrally concentrated stellar-mass black holes.
We fit dynamical models to ω Cen data, and we show that models with ~ 5% of their mass
in black holes (consistent with ~ 100% retention fraction after natal kicks) can reproduce the
data. When simultaneously considering both radial anisotropy and mass segregation, the best-fit
model includes a smaller population of remnants, and a less extreme degree of anisotropy
with respect to the models that include only one of these features. These results underline that
before conclusions about putative IMBHs can be made, the effects of stellar-mass black holes
and radial anisotropy need to be properly accounted for.
Schneider F.R.N., Ramírez-Agudelo O.H., Tramper F., Bestenlehner J.M., Castro N., Sana H., Evans C.J., Sabín-Sanjulián C., Simón-Díaz S., Langer N., Fossati L., Gräfener G., Crowther P.A., de Mink S.E., de Koter A., Gieles M., Herrero A., Izzard R.G., Kalari V., Klessen R.S., Lennon D.J., Mahy L., Maíz Apellániz J., Markova N., van Loon J.Th., Vink J.S., Walborn N.R. (2018) The VLT-FLAMES Tarantula Survey. XXIX. Massive star formation in the local 30 Doradus starburst, Astronomy and Astrophysics EDP Sciences
The 30 Doradus (30 Dor) nebula in the Large Magellanic Cloud (LMC) is the brightest HII region in the Local Group and a prototype
starburst similar to those found in high redshift galaxies. It is thus a stepping stone to understand the complex formation processes of
stars in starburst regions across the Universe. Here, we have studied the formation history of massive stars in 30 Dor using masses and
ages derived for 452 mainly OB stars from the spectroscopic VLT-FLAMES Tarantula Survey (VFTS). We find that stars of all ages
and masses are scattered throughout 30 Dor. This is remarkable because it implies that massive stars either moved large distances or
formed independently over the whole field of view in relative isolation. We find that both channels contribute to the 30 Dor massive
star population. Massive star formation rapidly accelerated about 8 Myr ago, first forming stars in the field before giving birth to the
stellar populations in NGC 2060 and NGC 2070. The R136 star cluster in NGC 2070 formed last and, since then, about 1 Myr ago,
star formation seems to be diminished with some continuing in the surroundings of R136. Massive stars within a projected distance
of 8 pc of R136 are not coeval but show an age range of up to 6 Myr. Our mass distributions are well populated up to 200 M☉. The
inferred IMF is shallower than a Salpeter-like IMF and appears to be the same across 30 Dor. By comparing our sample of stars to
stellar models in the Hertzsprung-Russell diagram, we find evidence for missing physics in the models above log L/L☉ = 6 that is
likely connected to enhanced wind mass loss for stars approaching the Eddington limit. Our work highlights the key information about
the formation, evolution and final fates of massive stars encapsulated in the stellar content of 30 Dor, and sets a new benchmark for
theories of massive star formation in giant molecular clouds.
Terlevich Elena, Fernández-Arenas David, Terlevich Roberto, Gieles Mark, Chávez Ricardo, González-Morán Ana Luisa (2018) From Giant H ii regions and H ii galaxies to globular clusters and compact dwarf ellipticals, Monthly Notices of the Royal Astronomical Society 481 (1) sty2325 pp. 268-276 Oxford University Press
Massive star-forming regions like Giant H ii Regions (GHIIR) and H ii Galaxies (HIIG) are emission line systems ionized by compact young massive star clusters (YMC) with masses ranging from 10⁴ M☉ to 10⁸ M☉. We model the photometric and dynamical evolution over a Hubble time of the massive gravitationally bound systems that populate the tight relation between absolute blue magnitude and velocity dispersion (MB - σ) of GHIIR and HIIG and compare the resulting relation with that of old stellar systems: globular clusters, elliptical galaxies, bulges of spirals. After 12 Gyr of evolution their position on the σ vs. MB plane coincides, depending on the initial mass, either with the globular clusters for systems with initial mass M ≲ 10⁶ M☉ or with the ultra-compact dwarf ellipticals for larger initial masses. The slope change in the MB - σ and MB-size relations at cluster masses around 10⁶ M☉ is due to the larger impact of the dynamical evolution on the lower mass clusters. We interpret our result as an indication that the YMC that ionize GHIIR and HIIG can evolve to form globular clusters and ultra-compact dwarf ellipticals in about 12 Gyr, so that present day globular clusters and ultra-compact dwarf ellipticals may have formed in conditions similar to those observed in today's GHIIR and HIIG.
Forbes Duncan A, Read Justin I, Gieles Mark, Collins Michelle L M (2018) Extending the globular cluster system-halo mass relation to the lowest galaxy masses, Monthly Notices of the Royal Astronomical Society 481 (4) pp. 5592-5605
High-mass galaxies, with halo masses M200 ≳ 10¹⁰ M☉, reveal a remarkable near-linear relation between their globular cluster (GC) system mass and their host galaxy halo mass. Extending this relation to the mass range of dwarf galaxies has been problematic due to the difficulty in measuring independent halo masses. Here we derive new halo masses based on stellar and H i gas kinematics for a sample of nearby dwarf galaxies with GC systems. We find that the GC system mass-halo mass relation for galaxies populated by GCs holds from halo masses of M200 ≃ 10¹⁴ M☉ down to below M200 ≃ 10⁹ M☉, although there is a substantial increase in scatter towards low masses. In particular, three well-studied ultra-diffuse galaxies, with dwarf-like stellar masses, reveal a wide range in their GC-to-halo mass ratios. We compare our GC system-halo mass relation to the recent model of El Badry et al., finding that their fiducial model does not reproduce our data in the low-mass regime. This may suggest that GC formation needs to be more efficient than assumed in their model, or it may be due to the onset of stochastic GC occupation in low-mass haloes. Finally, we briefly discuss the stellar mass-halo mass relation for our low-mass galaxies with GCs, and we suggest some nearby dwarf galaxies for which searches for GCs may be fruitful.
Kamann S, Bastian N J, Gieles Mark, Balbinot Eduardo, Henault-Brunet V (2018) Linking the rotation of a cluster to the spins of its stars: The kinematics of NGC 6791 and NGC 6819 in 3D, Monthly Notices of the Royal Astronomical Society Oxford University Press
The physics governing the formation of star clusters is still not entirely understood.
One open question concerns the amount of angular momentum that newly formed
clusters possess after emerging from their parent gas clouds. Recent results suggest
an alignment of stellar spins and binary orbital spins in star clusters, which support
a scenario in which clusters are born with net angular momentum cascading down to
stellar scales. In this paper, we combine Gaia data and published line of sight velocities
to explore if NGC 6791 and NGC 6819, two of the clusters for which an alignment of
stellar spins has been reported, rotate in the same plane as their stars. We find evidence
for rotation in NGC 6791 using both proper motions and line of sight velocities. Our
estimate of the inclination angle is broadly consistent with the mean inclination that
has been determined for its stars, but the uncertainties are still substantial. Our results
identify NGC 6791 as a promising follow-up candidate to investigate the link between
cluster and stellar rotation. We find no evidence for rotation in NGC 6819.
Henault-Brunet V, Gieles Mark, Sollima A, Watkins LL, Zocchi A, Claydon Ian, Pancino E, Baumgardt H (2018) Mass modelling globular clusters in the Gaia era: a method comparison using mock data from an N-body simulation of M 4, Monthly Notices of the Royal Astronomical Society Oxford University Press
As we enter a golden age for studies of internal kinematics and dynamics of Galactic globular
clusters (GCs), it is timely to assess the performance of modelling techniques in recovering
the mass, mass profile, and other dynamical properties of GCs. Here, we compare different
mass-modelling techniques (distribution-function (DF)-based models, Jeans models, and a
grid of N-body models) by applying them to mock observations from a star-by-star N-body
simulation of the GC M4 by Heggie. The mocks mimic existing and anticipated data for GCs:
surface brightness or number density profiles, local stellar mass functions, line-of-sight velocities,
and Hubble Space Telescope- and Gaia-like proper motions. We discuss the successes
and limitations of the methods. We find that multimass DF-based models, Jeans, and N-body
models provide more accurate mass profiles compared to single-mass DF-based models. We
highlight complications in fitting the kinematics in the outskirts due to energetically unbound
stars associated with the cluster ('potential escapers', not captured by truncated DF models
nor by N-body models of clusters in isolation), which can be avoided with DF-based models
including potential escapers, or with Jeans models. We discuss ways to account for mass segregation.
For example, three-component DF-based models with freedom in their mass function
are a simple alternative to avoid the biases of single-mass models (which systematically
underestimate the total mass, half-mass radius, and central density), while more realistic multimass
DF-based models with freedom in the remnant content represent a promising avenue
to infer the total mass and the mass function of remnants.
Patrick L. R., Lennon D. J., Britavskiy N., Evans C. J., Sana H., Taylor W. D., Herrero A., Almeida L. A., Clark J. S., Gieles M., Langer N., Schneider F. R. N., van Loon J. Th. (2019) The VLT-FLAMES Tarantula Survey: XXXI. Radial velocities and multiplicity constraints of red supergiant stars in 30 Doradus, Astronomy & Astrophysics 624 A129 pp. 1-12 EDP Sciences / European Southern Observatory (ESO)
Aims. The incidence of multiplicity in cool, luminous massive stars is relatively unknown compared to their hotter counterparts. In this work we present radial velocity (RV) measurements and investigate the multiplicity properties of red supergiants (RSGs) in the 30 Doradus region of the Large Magellanic Cloud using multi-epoch visible spectroscopy from the VLT-FLAMES Tarantula Survey.
Methods. Exploiting the high density of absorption features in visible spectra of cool stars, we used a novel slicing technique to estimate RVs of 17 candidate RSGs in 30 Doradus from cross-correlation of the observations with model spectra.
Results. We provide absolute RV measurements (precise to better than ±1 km s⁻¹) for our sample and estimate line-of-sight velocities for the Hodge 301 and SL 639 clusters, which agree well with those of hot stars in the same clusters. By combining results for the RSGs with those for nearby B-type stars, we estimate systemic velocities and line-of-sight velocity dispersions for the two clusters, obtaining estimates for their dynamical masses of log(Mdyn/M☉) = 3.8 ± 0.3 for Hodge 301, and an upper limit of log(Mdyn/M☉) ≲ 3.1 ± 0.8 for SL 639, assuming virial equilibrium. Analysis of the multi-epoch data reveals one RV variable, potential binary candidate (VFTS 744), which is likely a semi-regular variable asymptotic giant branch star. Calculations of semi-amplitude velocities for a range of RSGs in model binary systems and literature examples of binary RSGs were used to guide our RV variability criteria. We estimate an upper limit on the observed binary fraction for our sample of 0.3; for this sample we are sensitive to maximum periods for individual objects in the range 1-10 000 days and mass ratios above 0.3 depending on the data quality. From simulations of RV measurements from binary systems given the current data, we conclude that systems within the parameter range q > 0.3, log P [days]
Conclusions. We demonstrate that RSGs are effective extragalactic kinematic tracers by estimating the kinematic properties, including the dynamical masses of two LMC young massive clusters. In the context of binary evolution models, we conclude that the large majority of our sample consists of effectively single stars that are either currently single or in long-period systems. Further observations at greater spectral resolution or over a longer baseline, or both, are required to search for such systems.
Antonini Fabio, Gieles Mark, Gualandris Alessia (2019) Black hole growth through hierarchical black hole mergers in dense star clusters: implications for gravitational wave detections, Monthly Notices of the Royal Astronomical Society 486 (4) pp. 5008-5021 Oxford University Press (OUP)
In a star cluster with a sufficiently large escape velocity, black holes (BHs) that are produced by BH mergers can be retained, dynamically form new BH binaries, and merge again. This process can repeat several times and lead to significant mass growth. In this paper, we calculate the mass of the largest BH that can form through repeated BH mergers and determine how its value depends on the physical properties of the host cluster. We adopt an analytical model in which the energy generated by the black hole binaries in the cluster core is assumed to be regulated by the process of two-body relaxation in the bulk of the system. This principle is used to compute the hardening rate of the binaries and to relate this to the time-dependent global properties of the parent cluster. We demonstrate that in clusters with initial escape velocity ≳300 km s⁻¹ in the core and density ≳10⁵ M☉ pc⁻³, repeated mergers lead to the formation of BHs in the mass range 100-10⁵ M☉, populating any upper mass gap created by pair-instability supernovae. This result is independent of cluster metallicity and the initial BH spin distribution. We show that about 10 per cent of the present-day nuclear star clusters meet these extreme conditions, and estimate that BH binary mergers with total mass ≳100 M☉ should be produced in these systems at a maximum rate ≈0.05 Gpc⁻³ yr⁻¹, corresponding to one detectable event every few years with Advanced LIGO/Virgo at design sensitivity.
Claydon Ian, Gieles Mark, Varri Anna Lisa, Heggie Douglas C, Zocchi Alice (2019) Spherical models of star clusters with potential escapers, Monthly Notices of the Royal Astronomical Society 487 (1) pp. 147-160 Oxford University Press (OUP)
An increasing number of observations of the outer regions of globular clusters (GCs) have shown a flattening of the velocity dispersion profile and an extended surface density profile. Formation scenarios of GCs can lead to different explanations of these peculiarities, therefore the dynamics of stars in the outskirts of GCs are an important tool in tracing back the evolutionary history and formation of star clusters. One possible explanation for these features is that GCs are embedded in dark matter haloes. Alternatively, these features are the result of a population of energetically unbound stars that can be spatially trapped within the cluster, known as potential escapers (PEs). We present a prescription for the contribution of these energetically unbound members to a family of self-consistent, distribution function-based models, which, for brevity, we call the Spherical Potential Escapers Stitched (SPES) models. We show that, when fitting to mock data of bound and unbound stars from an N-body model of a tidally limited star cluster, the SPES models correctly reproduce the density and velocity dispersion profiles up to the Jacobi radius, and they are able to recover the value of the Jacobi radius itself to within 20 per cent. We also provide a comparison to the number density and velocity dispersion profiles of the Galactic cluster 47 Tucanae. Such a case offers a proof of concept that an appropriate modelling of PEs is essential to accurately interpret current and forthcoming Gaia data in the outskirts of GCs, and, in turn, to formulate meaningful present-day constraints for GC formation scenarios in the early Universe.
The recent discovery of a gravitational wave produced by two merging stellar-mass black holes started a search for environments where two stellar mass black holes can become a binary and merge. One favourable environment could be globular clusters, but the evolution of black holes in them is still widely debated.
In this thesis, I present a method, based on isotropic lowered isothermal multimass models with which stellar mass black hole populations in globular clusters can be dynamically inferred and the main properties of the cluster can be estimated. In the models, I am using an improved stellar evolution code from Balbinot and Gieles (2018) to which I added black hole evolution. Before applying the multimass models to data, I made a detailed comparison between the properties of multimass models and collisional N-body simulations. I find that all dynamical stages are well described by the models and that a stellar mass black hole population reduces mass segregation.
For the Milky Way globular cluster NGC 6101, I run three N-body simulations to show that the observed lack of observable mass segregation could be explained by a stellar-mass black hole population. To differentiate this explanation from others, I create different multimass models and find that measuring the cluster's velocity dispersion could help to confirm the presence of the black hole population.
In the final chapter I follow up on this prediction and present new line-of-sight velocities for NGC 6101 obtained with the ESO MUSE instrument. Applying my method, I find that the cluster has 86 (+30/-23) black holes, which could explain its currently observed lack of mass segregation. This thesis is concluded by a discussion on how to improve dynamical detections of BH populations with future observations and models.
Webb Jeremy J, Bovy Jo, Carlberg Raymond G, Gieles Mark (2019) Modelling the Effects of Dark Matter Substructure on Globular Cluster Evolution with the Tidal Approximation, Monthly Notices of the Royal Astronomical Society 488 (4) pp. 5748-5762 Oxford University Press (OUP)
We present direct N-body simulations of tidally filling 30 000 M☉ star clusters orbiting between 10 and 100 kpc in galaxies with a range of dark matter substructure properties. The time-dependent tidal force is determined based on the combined tidal tensor of the galaxy's smooth and clumpy dark matter components, the latter of which causes fluctuations in the tidal field that can heat clusters. The strength and duration of these fluctuations are sensitive to the local dark matter density, substructure fraction, sub-halo mass function, and the sub-halo mass-size relation. Based on the cold dark matter framework, we initially assume sub-haloes are Hernquist spheres following a power-law mass function between 10⁵ and 10¹¹ M☉ and find that tidal fluctuations are too weak and too short to affect star cluster evolution. Treating sub-haloes as point masses, to explore how denser sub-haloes affect clusters, we find that only sub-haloes with masses greater than 10⁶ M☉ will cause cluster dissolution times to decrease. These interactions can also decrease the size of a cluster while increasing the velocity dispersion and tangential anisotropy in the outer regions via tidal heating. Hence increased fluctuations in the tidal tensor, especially fluctuations that are due to low-mass haloes, do not necessarily translate into mass-loss. We further conclude that the tidal approximation can be used to model cluster evolution in the tidal fields of cosmological simulations with a minimum cold dark matter sub-halo mass of 10⁶ M☉, as the effect of lower mass sub-haloes on star clusters is negligible.
de Boer T J L, Gieles M, Balbinot E, Hénault-Brunet V, Sollima A, Watkins L L, Claydon Ian (2019) Globular cluster number density profiles using Gaia DR2, Monthly Notices of the Royal Astronomical Society 485 (4) pp. 4906-4935 Oxford University Press (OUP)
Using data from Gaia DR2, we study the radial number density profiles of the Galactic globular cluster sample. Proper motions are used for accurate membership selection, especially crucial in the cluster outskirts. Due to the severe crowding in the centres, the Gaia data are supplemented by literature data from HST and surface brightness measurements, where available. This results in 81 clusters with a complete density profile covering the full tidal radius (and beyond) for each cluster. We model the density profiles using a set of single-mass models ranging from King and Wilson models to generalized lowered isothermal LIMEPY models and the recently introduced SPES models, which allow for the inclusion of potential escapers. We find that both King and Wilson models are too simple to fully reproduce the density profiles, with King (Wilson) models on average underestimating (overestimating) the radial extent of the clusters. The truncation radii derived from the LIMEPY models are similar to estimates for the Jacobi radii based on the cluster masses and their orbits. We show clear correlations between structural and environmental parameters, as a function of Galactocentric radius and integrated luminosity. Notably, the recovered fraction of potential escapers correlates with cluster pericentre radius, luminosity, and cluster concentration. The ratio of half mass over Jacobi radius also correlates with both truncation parameter and PE fraction, showing the effect of Roche lobe filling.
Orkney M D A, Read J I, Petts J A, Gieles M (2019) Globular clusters as probes of dark matter cusp-core transformations, Monthly Notices of the Royal Astronomical Society 488 (3) pp. 2977-2988 Oxford University Press (OUP)
Bursty star formation in dwarf galaxies can slowly transform a steep dark matter cusp into a constant density core. We explore the possibility that globular clusters (GCs) retain a dynamical memory of this transformation. To test this, we use the NBODY6DF code to simulate the dynamical evolution of GCs, including stellar evolution, orbiting in static and time-varying potentials for a Hubble time. We find that GCs orbiting within a cored dark matter halo, or within a halo that has undergone a cusp-core transformation, grow to a size that is substantially larger (Reff ∼ 10 pc) than those in a static cusped dark matter halo. They also produce much less tidal debris. We find that the cleanest signal of an historic cusp-core transformation is the presence of large GCs with tidal debris. However, the effect is small and will be challenging to observe in real galaxies. Finally, we qualitatively compare our simulated GCs with the observed GC populations in the Fornax, NGC 6822, IKN, and Sagittarius dwarf galaxies. We find that the GCs in these dwarf galaxies are systematically larger (⟨Reff⟩ ≃ 7.8 pc), and have substantially more scatter in their sizes than in situ metal-rich GCs in the Milky Way and young massive star clusters forming in M83 (⟨Reff⟩ ≃ 2.5 pc). We show that the size, scatter, and survival of GCs in dwarf galaxies are all consistent with them having evolved in a constant density core, or a potential that has undergone a cusp-core transformation, but not in a dark matter cusp.
# Barometric Formula (Troposphere)
The Barometric Pressure at Altitude calculator computes the normal barometric pressure based on the altitude (h) using the Exponential Atmosphere formula (aka the "Isothermal Atmosphere" formula).
Atmospheric pressure generally decreases as altitude within the troposphere increases. Although weather and climate may influence exact atmospheric pressure in any given region, an approximation of pressure within the troposphere can be made with knowledge of altitude alone.
Atmospheric pressure is the force per unit area exerted on a surface by the weight of air above in the atmosphere of Earth. The troposphere is the lowest layer of Earth's atmosphere, extending about 17 km (11 miles) from the Earth's surface in middle latitudes.
# Definition
The formula relates pressure, p, to altitude, h, as
p = p_0 * ( 1 - (g * h)/(c_p * T_0) ) ^ ((c_p * M) / R)
with the following parameters
| Parameter | Value |
| --- | --- |
| p0 (sea level standard atmospheric pressure) | 101325 Pa |
| L (temperature lapse rate, = g/cp for dry air) | 0.0065 K/m |
| cp (constant pressure specific heat) | 1007 J/(kg•K) |
| T0 (sea level standard temperature) | 288.15 K |
| g (Earth-surface gravitational acceleration) | 9.80665 m/s² |
| M (molar mass of dry air) | 0.0289644 kg/mol |
| R (universal gas constant) | 8.31447 J/(mol•K) |
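As a sanity check, the formula and parameter values above can be evaluated directly. This is a minimal sketch in Python (the function name is illustrative, not part of the original calculator):

```python
def tropospheric_pressure(h):
    """Barometric pressure (Pa) at altitude h (m), per the formula above."""
    p0 = 101325.0      # sea level standard atmospheric pressure, Pa
    cp = 1007.0        # constant pressure specific heat, J/(kg*K)
    T0 = 288.15        # sea level standard temperature, K
    g = 9.80665        # Earth-surface gravitational acceleration, m/s^2
    M = 0.0289644      # molar mass of dry air, kg/mol
    R = 8.31447        # universal gas constant, J/(mol*K)
    return p0 * (1 - g * h / (cp * T0)) ** (cp * M / R)

print(tropospheric_pressure(0))     # exactly p0 = 101325.0 at sea level
print(tropospheric_pressure(1000))  # just under 90 kPa at 1 km
```

At h = 0 the expression reduces to p0, and pressure falls off monotonically with altitude, as expected.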
# History
Empirical knowledge of the effects of barometric pressure has existed for centuries, as the construction of effective pumps demonstrates. However, the first documented theoretical explanation of barometric pressure accompanied an experiment performed by Evangelista Torricelli in 1643 or 1644, which became known at the time as "The Experiment from Italy."
# Usage
This formula is applicable only to altitudes up to about 44,000 meters. Local weather and climate also affect the exact barometric pressure at any given point. |
# What is the integral of f prime divided by the quantity f times the square root of the quantity f squared minus a squared with respect to x where f is a function of x and a is a constant?
Wiki User
2010-11-05 10:41:00
∫ f'(x)/[f(x)√(f(x)² - a²)] dx = (1/a)arcsec(f(x)/a) + C
C is the constant of integration.
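The antiderivative can be sanity-checked numerically. The sketch below (Python; the choice f(x) = x², a = 1 is just an illustration) compares a central-difference derivative of (1/a)·arcsec(f(x)/a) with the integrand, writing arcsec(u) as acos(1/u):

```python
import math

def F(x, a=1.0):
    # antiderivative (1/a)*arcsec(f(x)/a) with f(x) = x**2; arcsec(u) = acos(1/u)
    return (1.0 / a) * math.acos(a / x**2)

def integrand(x, a=1.0):
    # f'(x) / (f(x) * sqrt(f(x)**2 - a**2)) with f(x) = x**2, f'(x) = 2*x
    f, fp = x**2, 2 * x
    return fp / (f * math.sqrt(f**2 - a**2))

x0, h = 2.0, 1e-6
numeric = (F(x0 + h) - F(x0 - h)) / (2 * h)  # central difference
print(numeric, integrand(x0))  # both equal 1/sqrt(15), about 0.2582
```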
# Name of the rule allowing the exchanging $\sin$ and $\cos$ in integrals with limits $0$ and $\pi/2$?
As the areas under the curves of $$\sin \theta$$ and $$\cos \theta$$ over the limits $$0$$ to $$\frac{\pi}{2}$$ are the same, in integration, if the limits are from $$0$$ to $$\frac{\pi}{2}$$, we can replace $$\sin \theta$$ with $$\cos \theta$$ and vice versa. Example:
\begin{align*} \int\limits_{0}^{\frac{\pi}{2}} \frac{\sin^3x-\cos x}{\cos^3x-\sin x} dx &=\int\limits_{0}^{\frac{\pi}{2}} \frac{\sin^3x-\sin x}{\sin^3x-\sin x} dx\\ &=\int\limits_{0}^{\frac{\pi}{2}}dx\\ &=\frac{\pi}{2} \end{align*}
I want to know the name of this rule.
I think you want
$$\int_a^b f(x) dx=\int_a^b f(a+b-x) dx$$
If you input $$a=0,b=\pi/2$$, using the above property you can "convert" sines to cosines and vice versa due to $$\sin x=\cos (\pi/2 -x)$$.
But what you have done, as pointed out by others, is not applicable everywhere. If you ''exchange" sines and cosines using the above property, it is totally fine.
There is a rule that allows exchanging sines and cosines (and, generally, trig functions and their respective co-functions), but it requires changing them all ... and possibly making other adjustments for non-trig elements.
The "rule" is simply the $$u$$-substitution $$u=\pi/2-x$$, which we write thusly:
$$\int_0^{\pi/2}f(x) dx = \int_{\pi/2}^0 f\left(\frac\pi2-u\right)(-du) = \int_0^{\pi/2}f\left(\frac\pi2-u\right)du= \int_0^{\pi/2}f\left(\frac\pi2-x\right)dx \tag{\star}$$ where the last step simply replaces the integration variable.
Now, since $$\sin x = \cos(\pi/2-x)=\cos u$$ and $$\cos x = \sin(\pi/2-x) = \sin u$$, the effect of $$(\star)$$ is to "magically" exchange all sines and cosines (and all trig functions and co-functions).
Importantly: Every instance of $$\sin x$$ must be changed to $$\cos x$$, and vice-versa. You don't get to pick and choose. (Also, any non-trigged instances of $$x$$ become $$\pi/2-x$$, which is decidedly non-magical.)
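The substitution $(\star)$ is easy to confirm numerically. A sketch in Python (simple midpoint rule; the test function $f(x)=x\sin x$ is an arbitrary choice with a non-trig factor):

```python
import math

def midpoint_integral(g, a, b, n=100000):
    # midpoint-rule approximation of the integral of g over [a, b]
    h = (b - a) / n
    return h * sum(g(a + (k + 0.5) * h) for k in range(n))

a, b = 0.0, math.pi / 2
f = lambda x: x * math.sin(x)
lhs = midpoint_integral(f, a, b)                       # integral of f(x)
rhs = midpoint_integral(lambda x: f(a + b - x), a, b)  # integral of f(a+b-x)
print(lhs, rhs)  # both approximate the exact value 1
```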
So, use the "rule" with caution.
In particular, the example's replacement of cosines with sines without the vice-versa, is invalid. It is perhaps worth noting that $$\int_0^{\pi/2}\frac{\sin^3 x - \cos x}{\cos^3 x - \sin x}dx$$ is an improper integral (due to a singularity at $$x=0.598\ldots$$). WolframAlpha even times-out trying to evaluate it. Evaluating the integral before and after the problem point and adding the results gives a value of about $$8.71605$$, which is not $$\pi/2$$.
• Then is it totally fine to use this when integrating continuous trigonometric functions (if it is valid)? (I just want to use this in multiple choice questions to save some time.) Jun 13, 2020 at 5:04
• @SoyebJim: I won't give a blanket statement about what is "totally fine". That said, continuity (or lack thereof) isn't the issue in your problem, although the shift from the discontinuous original function to your continuous (constant) simplification should have raised some warning flags. After all, all that $u$-substitution does is effectively flip graphs across a vertical mirror line; it can't change their (dis)continuous nature ... or even their overall shape (apart from the mirroring). Unless/until you truly understand this effect, I recommend against blindly applying the rule.
– Blue
Jun 13, 2020 at 5:32
I am not sure such a "rule" exists. If I understand what you have said correctly, then we have $$\int_0^{\pi/2}\tan xdx=\int_0^{\pi/2}\frac{\sin x}{\cos x}dx =\int_0^{\pi/2}\frac{\cos x}{\cos x}dx=\pi/2$$ where we used the "rule" in the second equality.
However, $$\int_0^{\pi/2}\tan xdx=-\ln \cos x\Big|_{x=0}^{x=\pi/2}=\infty.$$
This is not a general rule. For example, take $$\int_0^{\frac{\pi}{2}} \frac{\sin(x)}{x} dx \approx 1.3707621$$ and $$\int_0^{\frac{\pi}{2}} \frac{\cos(x)}{x} dx$$ which is undefined.
This is just plain wrong. Indeed, if you evaluate your original integral numerically, you get a negative answer.
What is correct is this: For any continuous function $$f(x,y)$$, it is the case that $$\int_0^{\pi/2} f(\sin\theta,\cos\theta)\,d\theta = \int_0^{\pi/2} f(\cos\theta,\sin\theta)\,d\theta.$$
• Unnecessarily blunt, but correct. Jun 13, 2020 at 3:48
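A numerical illustration of this correct symmetric form, with $f(s,c)=s^3/(s^3+c^3)$ so the integrand is continuous on $[0,\pi/2]$ (a sketch in Python using a midpoint rule):

```python
import math

def midpoint_integral(g, a, b, n=200000):
    # midpoint-rule approximation of the integral of g over [a, b]
    h = (b - a) / n
    return h * sum(g(a + (k + 0.5) * h) for k in range(n))

f = lambda s, c: s**3 / (s**3 + c**3)
I1 = midpoint_integral(lambda t: f(math.sin(t), math.cos(t)), 0.0, math.pi / 2)
I2 = midpoint_integral(lambda t: f(math.cos(t), math.sin(t)), 0.0, math.pi / 2)
print(I1, I2)  # equal; since f(s,c) + f(c,s) = 1, each integral is pi/4
```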
This is actually not a valid rule. To get an intuition why, we can imagine two functions other than sine and cosine that have the same integral over a certain region.
Let the functions be $$f_1(x)=\begin{cases}0 & x\leq1\\ 1 & x>1\end{cases}$$ $$f_2(x)=\begin{cases}1 & x\leq1\\ 0 & x>1\end{cases}$$ Clearly these functions both have an area of 1 when integrated from 0 to 2. But see what happens when we multiply them together: $$f_1(x)f_2(x)=\begin{cases}0\cdot1 & x\leq1\\ 1\cdot0 & x>1\end{cases}=0$$ So if we tried to integrate their product, the answer would clearly be zero, showing that we cannot replace one with the other if they are multiplied together. Similar reasoning follows for division.
This is not a specific rule. It is the property of definite integral: $$\int_a^bf(x)dx=\int_a^bf(a+b-x)dx$$ i.e. substitute $$x=a+b-x$$ everywhere in the integrand as follows $$\int\limits_{0}^{\frac{\pi}{2}} \frac{\sin^3x-\cos x}{\cos^3x-\sin x} dx$$ $$=\int\limits_{0}^{\frac{\pi}{2}} \frac{\sin^3\left(\frac{\pi}{2}-x\right)-\cos\left(\frac{\pi}{2}-x\right)}{\cos^3\left(\frac{\pi}{2}-x\right)-\sin \left(\frac{\pi}{2}-x\right)} dx$$ $$=\int\limits_{0}^{\frac{\pi}{2}} \frac{\cos^3x-\sin x}{\sin^3x-\cos x} dx$$
Using above property in this case is not useful because it's an improper integral. |
# Magnetism and Centripetal force question
1. Jun 12, 2007
### jumpfreak
1. The problem statement, all variables and given/known data
One method for determining masses of heavy ions involves timing their orbital period in a known magnetic field. What is the mass of a singly charged ion that makes 7.0 revolutions in 1.3 x 10^-3 seconds in a 4.5 x 10^-2 T field?
a) 2.1x10^-25 kg
b) 1.3x10^-24 kg
c) 6.5x10^-23 kg
d) 5.0x10^-20 kg
2. Relevant equations
I'm not sure of which equation to use. Maybe....
Fc = Fb
m4(pi^2)r/T^2 = QvB
3. The attempt at a solution
m4(pi^2)r/(1.3x10^-3s/7)^2 = Qv(4.5x10^-2)
This might be the wrong formula.
2. Jun 12, 2007
### Andrew Mason
It is the right formula.
If $mv^2/r = qvB$ where $v = 2\pi r/T$ then:
$$qB = m2\pi/T$$
$$m = qBT/2\pi$$
Just plug in the numbers.
AM
3. Jun 13, 2007
### jackiefrost
One caution:
In the previously cited equation $$m = qBT/2\pi$$, T is the orbital period = (1.3 x 10^-3)/7 sec
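Plugging the numbers into $m = qBT/2\pi$ (a quick sketch; the elementary charge is assumed for the singly charged ion):

```python
import math

q = 1.602e-19          # elementary charge, C (singly charged ion)
B = 4.5e-2             # magnetic field, T
T = 1.3e-3 / 7.0       # orbital period: total time divided by 7 revolutions, s

m = q * B * T / (2 * math.pi)
print(f"{m:.2e} kg")   # ~2.1e-25 kg, i.e. answer (a)
```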
Last edited: Jun 13, 2007 |
# What is the number of functions $f : A\rightarrow A$ with $\forall_{x\in{A}}\, f(f(x))=x$, where $A$ has $n$ distinct elements?
What is the number of functions $f : A\rightarrow A$ with $\forall_{x\in{A}}\, f(f(x))=x$, where $A$ has $n$ distinct elements?
For $A=\{a,b,c,d\}$ we could look at the upper-right triangle of the table of all possible assignments, shown below. For $a$ we have four possibilities, but if we choose $a\to b$, for example, then we have to delete the second column. Without deletion we would have $4!$ functions $f$ for $A=\{a,b,c,d\}$, but there are only $1+6+3$ of them.
$a\to a,\; a\to b,\; a\to c,\; a\to d \\ \qquad\;\; b\to b,\; b\to c,\; b\to d \\ \qquad\qquad\;\; c\to c,\; c\to d \\ \qquad\qquad\qquad\;\; d\to d$
I think that if $n=2k$ then we could choose first $k$ elements so we have $k$ sets with single element then for each elements with number $k+1 : 2k$ we assign $0$ or $1$. $0$ if $a_{i}->a_{i}$ and $1$ if $a_{i}->a_{1 : n}$, for $i\in k+1:2k$, so we have $2^k$ possibilities. If $j$, $1's$ have been assigned then we have $\binom{k}{j}$ possibilities for two element sets and ... I'm out of ideas.
You are welcome to edit my post.
In other words, the number of permutations $\sigma\in S_n$ such that $\sigma^2=e$. If you're familiar with the terms, consider the cycle decomposition of these $\sigma$'s. (Apparently, the answer isn't terribly simple, though.) – anon May 17 '12 at 18:30
yes, it's the essence of my question – Qbik May 17 '12 at 18:33
I don't get the tagging. This is neither set theory nor category theory related. – Asaf Karagila May 17 '12 at 18:33
– anon May 17 '12 at 18:38
Let $B$ denote a set of size $n+1$ with $B=A\cup\{o\}$, $A$ of size $n$, and $o$ not in $A$. Let $f$ denote an involution of $B$. Then:
• Either $f(o)=o$, then $f$ is characterized by an involution of $A$.
• Or $f(o)\ne o$, then there are $n$ possible choices of $f(o)=a$ with $a$ in $A$. Once $a$ is fixed, one knows that $f(a)=o$ hence $f$ is characterized by an involution of $A\setminus\{a\}$.
Let $i_n$ denote the number of involutions of a set of size $n$. One sees that $i_1=1$, $i_2=2$, and that the recursion $i_{n+1}=i_n+ni_{n-1}$ holds, for every $n\geqslant2$. This characterizes the sequence $(i_n)_{n\geqslant1}$. |
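The recursion $i_{n+1}=i_n+ni_{n-1}$ can be sanity-checked with a short sketch (the function name is mine):

```python
def involutions(n):
    # i_1 = 1, i_2 = 2, i_{n+1} = i_n + n * i_{n-1}
    if n == 1:
        return 1
    i_prev, i_cur = 1, 2   # i_1, i_2
    for k in range(2, n):
        i_prev, i_cur = i_cur, i_cur + k * i_prev
    return i_cur

print(involutions(4))  # 10 = 1 + 6 + 3, matching the n = 4 count in the question
```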
# 700 Days Later
Since my last update we hit another set of COVID milestones here in Portugal, so I guess it’s time for another post. We’re now two years into the pandemic and hitting record numbers of cases every day, but we also have more evidence that both exponentials and vaccination ought to be better understood…
This series began after the start of the pandemic and has had irregular updates at various milestones along the way.
On a personal level, things are “fine”. Yes, like the GIF.
Everyone in the family has the requisite amount of shots (I had my booster a couple of weeks ago, which was pretty uneventful other than feeling a little tired in the evening) and we haven’t been infected yet, but right now Omicron is pretty much everywhere.
And like I pointed out last month, things have gone exponential very quickly indeed as COVID evolves towards becoming endemic, but not harmless.
## So Where Are We At Then?
Here’s my usual normalized chart, which I prefer for long-term comparisons:
As you can see, the number of cases skyrocketed even beyond my most pessimistic expectations (we’re now at three times the maximum previously recorded in Portugal last year), but (fortunately) deaths have barely had an uptick.
## On Testing and Age Breakdowns
But there’s also something that isn’t on the chart, and for which there is no real data, just a lot of complaints: We’ve apparently hit a plateau of sorts in testing capacity, since it’s been pretty much impossible to schedule a test overnight (PCRs sometimes take days to schedule).
That has an impact in terms of when cases are accounted for officially (which means figures may be lagging up to a week behind…), and since we can only measure things that are recorded, well… We might be looking at a lot more unaccounted for cases here.
Either way, schools are a bit of a mess right now. Even though my kids are both vaccinated we have reports of cases in their classes nearly every day, and the numbers clearly show it:
It makes sense because young kids haven’t been vaccinated and it is impossible to fully segregate age groups in large schools, but it is still a concern when your kids’ grandparents are involved.
So we crack open a new set of test kits every few days, but these are becoming quite thin on the ground, of worse quality, and higher in cost as scalpers cotton on to making easy money by buying them in bulk and marking them up for 50-70% profit.
The upside here (for now) is that Omicron’s mildness and extensive vaccination means its rampant infectiousness hasn’t translated into a massive amount of hospital admissions (or deaths):
…but I’m still worried about “long COVID” and the likelihood of elderly people developing immediate (or belated) serious symptoms, and that we’re not taking things seriously enough in schools.
Besides the natural pressure for restrictions to be relaxed permanently due to people having spent two years cooped up, another worry I could do without is that we’re a week away from elections–and a lot of fuss was made about infected people having to vote, with dismal handling of the issue all around.
So I don’t expect case numbers to go down just yet, even if they’re not the key figure anymore (well, provided there are no new variants…).
# Going to Go
Although I haven’t coded for work for a long time and have inexorably gravitated towards technology practice management, I’ve been thinking about what to use for rebuilding a set of personal projects (including this site) and tackling a few new ones.
# Gemini
I stumbled upon the gemini:// protocol the other day, and went down that particular rabbit hole over the weekend so you wouldn’t have to (although I might actually recommend it).
# Notable Media of 2021
The Expanse’s final episode aired yesterday and it was a great end to a decent week, so I decided to do a reprise of an earlier post’s take on the stuff that struck my fancy throughout the intervening year, even if there’s a little less to write about this time.
# Early January Checkpoint
Been back at work for a week and am actually not terribly excited about it. In fact, the running joke bouncing listlessly from neuron to neuron is that it took me most of the week to wade through enough notes and e-mails to remember what my job actually is these days.
# 2021 In Review
We’re now two years into a (still) evolving pandemic with entirely too many plot twists, but life goes on and I’ve been trying to push that into the background. And given the season, I think I should put together another list of noteworthy things that came to pass, in much the same vein as before.
# 670 Days Later
Now that Christmas has come and gone (with varying results as people scrambled to test centers to figure out whether it was safe), I think another short update on the COVID situation in Portugal is warranted as the Omicron variant takes a firm hold.
# The Christmas Halo Effect
It’s looking like we’re going to be in for a rainy Christmas this year, so couch surfing seems like a given, and some light gaming is in order. I have a bazillion things on my personal backlog, but… I need to relax a little, so traipsing around a Forerunner ring again is much more appealing than any of them right now.
# Unreal Thoughts
I’ve been watching the discussions and interviews about the Matrix Awakens Unreal 5 demo (which, sadly, is not available on Game Pass), and I have thoughts about it. And they’re ambivalent, but likely not in the way you would expect.
# The Bored Programmer's Ambilight
Although I abhor RGB lighting in PC builds, I’ve always been fascinated with Philips TVs and the Ambilight feature, of which there are dozens of hacky clones. |
# Loeleaa ba Eetttis 41 pueld ZaWeteml btha statamarz Iqua &7 Fbhatmachamaleal MAAqpltexplaln YourLo that I6 I74ri uJandcuan concinteilyrelectly ~ oteha weltht % ...
## Question
#### Similar Solved Questions
##### 1_ Let V be a vector space and S = {V1, Vz, Vn} be a basis of V. Prove that every vector v in V can be written in one and only one way a8 a linear combination of vectors in S.
##### Procedure Iot Onyrn !U pnudmuElu I goinnt0 Usirg red *or axxjenstedl e/ Droll 4mdand tur ccior F7 Guttg {not&uuetAu 0444ut Fue 9 + IndiJcinotte unttnm- [eun orfromtke Jo"4uleuneneltTtht~an EEitt 0 4a Int pumaray Kallatnunbt? luags RM ELTI"Luh comiryGrdr +TKfU ; Mtn lung . eetndnauonoATTFigute %,2; Anem Mp cmne humantearlRChT ATAIAamleng _ AKauATaMprt fumonary vet ebo' Inin + slug Enntat GdrvonWanacuFiqure 9 33 Poxcor Mttol tne numzm heart. @uculatory Suslem Eraicinn4VLRQE
##### The graph above is & transformation of the function y f(c) = 2Write an equation for the function graphcd aboveHint: Thcrc is a vertical stretch/compression in addition tO the shiftsg(tPrevicw
##### Nitrogen dioxide dimerizes according to the reaction: 2 NO2(g) = N2O4(g), K = 5.5 at 298 K. Part A: A container contains 0.496 bar of NO2 and 1.47 bar of N2O4 at 298 K. Calculate the value of Q for the reaction under these current conditions.
##### Whon OvC1 , during Ihe inlerval doesbody changeUC0 In ary enswur poiat nltun yOui cpice The body changes direcbion at [' (ypa an iniegor _ simpliilad IracUon ) The boxdy doos rl change direclion duririg Ine interval
##### Point)10 sin € dcthe definite integral with the Trapezoid Rule and =4 Approximate 18.9612Approximate the definite integral with Simpson's Rule and n =4 b)c) Find the exact value of the integral. 20
##### At 25 °C the vapor pressure of pure water is 23.76 mmHg and that of sea water is 22.98 mmHg. Assuming that sea water contains only NaCl, calculate the molal concentration of NaCl in sea water. (H2O: 18.02 g/mol) At 25 °C, the equilibrium partial pressures of NO2 and N2O4 are 0.15 atm and 0.20 atm respectively. If the volume is doubled at constant temperature (Vfinal = 2 Vinitial), calculate the partial pressures of the gases when a new equilibrium is established: N2O4(g) = 2 NO2(g)
##### What is the probability of not rolling a five on one throw of one die? What is the probability of rolling a five on your first throw and another five on the second throw of that die? 4. If you roll two dice at one time, what are the chances that both dice will come up twos? 5. If you roll two dice at one time, what are the chances that one or the other (or both) of the dice will come up a two? 6. If you roll two dice at once, what are the chances that at most one of the dice will come up a two? (HINT:
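For the two-dice parts of the question above, a brute-force enumeration of all 36 ordered outcomes confirms the answers (a sketch):

```python
from itertools import product

outcomes = list(product(range(1, 7), repeat=2))   # all 36 ordered two-dice rolls

p_not_five = 5 / 6                                # not rolling a five in one throw
p_five_then_five = (1 / 6) ** 2                   # independent throws multiply

p_both_twos = sum(1 for a, b in outcomes if a == 2 and b == 2) / 36
p_any_two = sum(1 for a, b in outcomes if a == 2 or b == 2) / 36
p_at_most_one_two = sum(1 for a, b in outcomes if not (a == 2 and b == 2)) / 36

print(p_both_twos, p_any_two, p_at_most_one_two)
# 1/36, 11/36 and 35/36 respectively
```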
##### The average human diet contains about 2000 kcal per day. If all this food energy is released rather than stored as fat, what's the approximate average power output of the human body?
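The arithmetic for the question above is a straight unit conversion (a sketch; 1 kcal = 4184 J is assumed):

```python
kcal_per_day = 2000
joules_per_day = kcal_per_day * 4184   # 1 kcal = 4184 J
seconds_per_day = 24 * 60 * 60

power_watts = joules_per_day / seconds_per_day
print(round(power_watts, 1))  # 96.9, i.e. roughly 100 W
```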
##### Use integration by parts to derive the given formula. $$\int e^{\alpha z} \sin \beta z d z=\frac{e^{\alpha z}(\alpha \sin \beta z-\beta \cos \beta z)}{\alpha^{2}+\beta^{2}}+C$$
##### In methane hydrate the methane molecule is trapped in a cage of water molecules. Describe the structure: (a) how many water molecules make up the cage, (b) how many hydrogen bonds are involved, and (c) how many faces does the cage have? (Figure $20.16 .)$
##### Explain how to simplify a rational expression.
##### Find the area vector of the oriented flat surface_The triangle with vertices (0, 0,0) , (0,14,0) , (0, 0, 3) oriented in the negative x direction.The area vector is A =Numberi+ Numberj+ Numberk
##### Suppose the line tangent to the graph of f at x= 4 iS y = 3x + and suppose y = Sx - 3 is the line tangent to the graph of g at x=4_ Find the line tangent to the following curves atx= 4.y = f(x)g(x)b. y =The equation of the line tangent to the curve y = f(x)g(x) at x= 4 is y =The equation of the line tangent to the curve yatx= 4 is y =
##### Which of the following is involved in pre-transcriptional gene regulation?histonesmicroRNADNA methylationalternative splicingallof the aboveQuestion 61ptsWhich of these mutation locations alters the product of a mutation?coding region mutationscis-regulatory mutationstrans-regulatory mutationsallof the abovenone of the above
##### Val bes 90:Al12.=(Section 12.3) 12 Solids Metallic _ densities . of the elements K, Ca, Sc, and Ti The _ 4.5 g/cm' , respectively. One of- are 0.86, and these 1.5_ 3.2, body-centered cubic structure; the elements crys tallizes_ in & other in a face-centered cubic structure. three crystallize in the body-centered cubic Which one crystallizes structure? Justify JOULE answer; Foreach of these solids, state whether you would 12,32 metallic properties: (a) expect it to possess _ TiCl4, (b) N
## Stream: new members
### Topic: deep recursion with append
#### Jason Orendorff (Jun 17 2020 at 16:26):
never mind why I want to know the result of gluing all these strings together, but
```lean
#reduce "~" ++ ("∃" ++ "a" ++ ":" ++ ("S" ++ "a" ++ "=" ++ "0"))
```
this gives deep recursion was detected at 'replace' (potential solution: increase stack space in your system) locally (and in the browser it just seems to run forever)
Even just #reduce "~" ++ ("∃") doesn't finish in the browser for me.
#### Reid Barton (Jun 17 2020 at 16:28):
I think #reduce with characters is super slow. Would #eval work for you?
#### Reid Barton (Jun 17 2020 at 16:29):
A character is something like a natural number plus a proof that it lies in the range of valid Unicode characters
#### ROCKY KAMEN-RUBIO (Jun 17 2020 at 17:47):
I've run into surprisingly different behavior with #eval and #reduce. @Jalex Stark I remember you saying you got a noticeable speedup when you allocated more processing power to Lean on your drive. Could this be a useful simple way to test that?
#### Bryan Gin-ge Chen (Jun 17 2020 at 17:56):
#reduce and #eval are completely different. See the discussion here: https://leanprover.github.io/reference/expressions.html#computation
#### Jalex Stark (Jun 17 2020 at 18:04):
i don't see a reason why #reduce should be memory-limited. things that spawn a bunch of parallel processes might be faster with more memory, if memory was limiting the number of processes running in parallel
#### Jason Orendorff (Jun 17 2020 at 20:37):
#eval is great, but now what I want is to establish that "~" ++ ("∃" ++ "a" ++ ":" ++ ("S" ++ "a" ++ "=" ++ "0")) = "~∃a:Sa=0". Apparently refl is like reduce, not like eval
#### Mario Carneiro (Jun 17 2020 at 20:38):
simp
#### Bryan Gin-ge Chen (Jun 17 2020 at 20:41):
simp doesn't work most likely because we don't have many lemmas about string.
#### Jason Orendorff (Jun 17 2020 at 20:49):
I'm trying to write those lemmas myself now ... I don't know that I can do it without access to string_imp.
#### Bryan Gin-ge Chen (Jun 17 2020 at 20:49):
Oh, actually this works:
```lean
import data.string.basic

example : "~" ++ ("∃" ++ "a" ++ ":" ++ ("S" ++ "a" ++ "=" ++ "0")) = "~∃a:Sa=0" :=
begin
  apply string.to_list_inj.1,
  change ['~','∃','a',':','S','a','=','0'] = _,
  refl
end
```
#### Jason Orendorff (Jun 17 2020 at 20:51):
I don't have string.to_list_inj
#### Bryan Gin-ge Chen (Jun 17 2020 at 20:57):
What version of Lean / mathlib are you using? The lemma was added about 2 years ago: https://github.com/leanprover-community/mathlib/commit/a30b7c773db17cf7d1b551ba0f24645079296628#diff-6ab0314160d20014ac5f2e53531740b1R57
#### Reid Barton (Jun 17 2020 at 20:58):
Clearly we need a norm_str tactic
#### Jason Orendorff (Jun 17 2020 at 20:58):
facepalm I was working in a scratch .lean file with no mathlib installed, silly mistake
#### Bryan Gin-ge Chen (Jun 17 2020 at 20:59):
If you don't want all of mathlib, you can also just copy and paste the theorem into your file:
```lean
namespace string

theorem to_list_inj : ∀ {s₁ s₂}, to_list s₁ = to_list s₂ ↔ s₁ = s₂
| ⟨s₁⟩ ⟨s₂⟩ := ⟨congr_arg _, congr_arg _⟩

end string
```
#### Jason Orendorff (Jun 17 2020 at 21:20):
Why doesn't unfold string.to_list manage to unfold "~".to_list?
#### Mario Carneiro (Jun 17 2020 at 21:40):
string has a private implementation, which is somewhat unusual for a lean data structure
Last updated: May 08 2021 at 18:17 UTC |
## Khan Academy: "Deriving Demand Curve from Tweaking Marginal Utility per Dollar"
Watch this video about deriving the demand curve from tweaking marginal utility per dollar. |
# ordinary differential equations – How to solve $dy/dx = f(g(x,y))$
# AnnularMesh
For rmin > 0: creates an annular mesh of QUAD4 elements. For rmin = 0: creates a disc mesh of QUAD4 and TRI3 elements. Boundary sidesets are created at rmax and rmin, and given these names. If tmin ≠ 0 and tmax ≠ 2π, a sector of an annulus or disc is created. In this case boundary sidesets are also created at tmin and tmax, and given these names.
## Description
The AnnularMesh mesh generator builds simple 2D annular and disc meshes. They are created by drawing radial lines and concentric circles, and the mesh consists of the quadrilaterals thus formed. Therefore, no sophisticated paving is used to construct the mesh.
The inner radius and the outer radius must be specified. If the inner radius is zero a disc mesh is created, while if it is positive an annulus is created. The annulus has just one subdomain (block number = 0), whereas the disc has two subdomains: subdomain zero consists of the outer quadrilaterals, while the other (block number = 1) consists of the triangular elements that eminate from the origin.
The minimum and maximum angle may also be specified. These default to zero and 2π, respectively. If other values are chosen, a sector of an annulus, or a sector of a disc, will be created. Both angles are measured anticlockwise from the x axis.
The number of elements in the radial direction and the angular direction may be specified. In addition, a growth factor on the element size in the radial direction may be chosen. The element-size (in the radial direction) is multiplied by this factor for each concentric ring of elements, moving from the inner to the outer radius.
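To see how the growth factor spaces the rings, here is a small sketch in Python (the closed-form first-element size follows from the geometric series implied by the description above; it is my own inference, not taken from the MOOSE source):

```python
def ring_radii(rmin, rmax, nr, growth_r):
    # first radial element size d satisfies d * (1 + g + ... + g^(nr-1)) = rmax - rmin
    if growth_r == 1.0:
        d = (rmax - rmin) / nr
    else:
        d = (rmax - rmin) * (growth_r - 1.0) / (growth_r ** nr - 1.0)
    radii = [rmin]
    size = d
    for _ in range(nr):
        radii.append(radii[-1] + size)
        size *= growth_r        # each successive ring is growth_r times thicker
    return radii

print(ring_radii(1.0, 5.0, 4, 2.0))
# radii from rmin = 1 to rmax = 5, each ring twice as thick as the previous one
```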
Sidesets are also created:
* Sideset 0 is called "rmin" and is the set of sides at the minimum radius (which is zero for the disc).
* Sideset 1 is called "rmax" and is the set of sides at the maximum radius.
* Sideset 2 is called "tmin" and is the set of sides at the minimum angle, which is created only in the case of a sector of an annulus (or disc).
* Sideset 3 is called "tmax" and is the set of sides at the maximum angle, which is created only in the case of a sector of an annulus (or disc).
## Example Syntax
A full annulus with minimum radius 1 and maximum radius 5, with smaller elements near the inside of the annulus. (A disc would be created by setting rmin to zero.) !listing test/tests/mesh/mesh_generation/annulus.i block=Mesh
A sector of an annulus, sitting between and . (A sector of a disc would be created by setting rmin to zero.) !listing test/tests/mesh/mesh_generation/annulus_sector.i block=Mesh
An example of using sidesets !listing test/tests/mesh/mesh_generation/annulus_sector.i block=BCs
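A minimal `Mesh` block along the lines of these examples might look like the following (the parameter values are illustrative, not taken from the test files):

```
[Mesh]
  type = AnnularMesh
  nr = 10          # 10 elements in the radial direction
  nt = 12          # 12 elements in the angular direction
  rmin = 1         # set rmin = 0 for a disc instead of an annulus
  rmax = 5
  growth_r = 1.3   # elements grow in radial size towards rmax
[]
```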
## Input Parameters
• rmaxOuter radius
C++ Type:double
Options:
Description:Outer radius
• rminInner radius. If rmin=0 then a disc mesh (with no central hole) will be created.
C++ Type:double
Options:
Description:Inner radius. If rmin=0 then a disc mesh (with no central hole) will be created.
• ntNumber of elements in the angular direction
C++ Type:unsigned int
Options:
Description:Number of elements in the angular direction
### Required Parameters
• allow_renumberingTrueIf allow_renumbering=false, node and element numbers are kept fixed until deletion
Default:True
C++ Type:bool
Options:
Description:If allow_renumbering=false, node and element numbers are kept fixed until deletion
• tmin0Minimum angle, measured anticlockwise from x axis
Default:0
C++ Type:double
Options:
Description:Minimum angle, measured anticlockwise from x axis
• tmax6.28319Maximum angle, measured anticlockwise from x axis. If tmin=0 and tmax=2Pi an annular mesh is created. Otherwise, only a sector of an annulus is created
Default:6.28319
C++ Type:double
Options:
Description:Maximum angle, measured anticlockwise from x axis. If tmin=0 and tmax=2Pi an annular mesh is created. Otherwise, only a sector of an annulus is created
• quad_subdomain_id0The subdomain ID given to the QUAD4 elements
Default:0
C++ Type:unsigned short
Options:
Description:The subdomain ID given to the QUAD4 elements
• tri_subdomain_id1The subdomain ID given to the TRI3 elements (these exist only if rmin=0, and they exist at the center of the disc)
Default:1
C++ Type:unsigned short
Options:
Description:The subdomain ID given to the TRI3 elements (these exist only if rmin=0, and they exist at the center of the disc)
• parallel_typeDEFAULTDISTRIBUTED: Always use libMesh::DistributedMesh REPLICATED: Always use libMesh::ReplicatedMesh DEFAULT: Use libMesh::ReplicatedMesh unless --distributed-mesh is specified on the command line
Default:DEFAULT
C++ Type:MooseEnum
Options:DISTRIBUTED REPLICATED DEFAULT
Description:DISTRIBUTED: Always use libMesh::DistributedMesh REPLICATED: Always use libMesh::ReplicatedMesh DEFAULT: Use libMesh::ReplicatedMesh unless --distributed-mesh is specified on the command line
• ghosting_patch_sizeThe number of nearest neighbors considered for ghosting purposes when 'iteration' patch update strategy is used. Default is 5 * patch_size.
C++ Type:unsigned int
Options:
Description:The number of nearest neighbors considered for ghosting purposes when 'iteration' patch update strategy is used. Default is 5 * patch_size.
• growth_r1The ratio of radial sizes of successive rings of elements
Default:1
C++ Type:double
Options:
Description:The ratio of radial sizes of successive rings of elements
• nr1Number of elements in the radial direction
Default:1
C++ Type:unsigned int
Options:
Description:Number of elements in the radial direction
• max_leaf_size10The maximum number of points in each leaf of the KDTree used in the nearest neighbor search. As the leaf size becomes larger,KDTree construction becomes faster but the nearest neighbor searchbecomes slower.
Default:10
C++ Type:unsigned int
Options:
Description:The maximum number of points in each leaf of the KDTree used in the nearest neighbor search. As the leaf size becomes larger,KDTree construction becomes faster but the nearest neighbor searchbecomes slower.
### Optional Parameters
• dim1This is only required for certain mesh formats where the dimension of the mesh cannot be autodetected. In particular you must supply this for GMSH meshes. Note: This is completely ignored for ExodusII meshes!
Default:1
C++ Type:MooseEnum
Options:1 2 3
Description:This is only required for certain mesh formats where the dimension of the mesh cannot be autodetected. In particular you must supply this for GMSH meshes. Note: This is completely ignored for ExodusII meshes!
• nemesisFalseIf nemesis=true and file=foo.e, actually reads foo.e.N.0, foo.e.N.1, ... foo.e.N.N-1, where N = # CPUs, with NemesisIO.
Default:False
C++ Type:bool
Options:
Description:If nemesis=true and file=foo.e, actually reads foo.e.N.0, foo.e.N.1, ... foo.e.N.N-1, where N = # CPUs, with NemesisIO.
• patch_update_strategyneverHow often to update the geometric search 'patch'. The default is to never update it (which is the most efficient but could be a problem with lots of relative motion). 'always' will update the patch for all slave nodes at the beginning of every timestep which might be time consuming. 'auto' will attempt to determine at the start of which timesteps the patch for all slave nodes needs to be updated automatically.'iteration' updates the patch at every nonlinear iteration for a subset of slave nodes for which penetration is not detected. If there can be substantial relative motion between the master and slave surfaces during the nonlinear iterations within a timestep, it is advisable to use 'iteration' option to ensure accurate contact detection.
Default:never
C++ Type:MooseEnum
Options:never always auto iteration
Description:How often to update the geometric search 'patch'. The default is to never update it (which is the most efficient but could be a problem with lots of relative motion). 'always' will update the patch for all slave nodes at the beginning of every timestep which might be time consuming. 'auto' will attempt to determine at the start of which timesteps the patch for all slave nodes needs to be updated automatically.'iteration' updates the patch at every nonlinear iteration for a subset of slave nodes for which penetration is not detected. If there can be substantial relative motion between the master and slave surfaces during the nonlinear iterations within a timestep, it is advisable to use 'iteration' option to ensure accurate contact detection.
• control_tagsAdds user-defined labels for accessing object parameters via control logic.
C++ Type:std::vector
Options:
Description:Adds user-defined labels for accessing object parameters via control logic.
• enableTrueSet the enabled status of the MooseObject.
Default:True
C++ Type:bool
Options:
Description:Set the enabled status of the MooseObject.
• construct_node_list_from_side_listTrueWhether or not to generate nodesets from the sidesets (usually a good idea).
Default:True
C++ Type:bool
Options:
Description:Whether or not to generate nodesets from the sidesets (usually a good idea).
• patch_size40The number of nodes to consider in the NearestNode neighborhood.
Default:40
C++ Type:unsigned int
Options:
Description:The number of nodes to consider in the NearestNode neighborhood.
• partitionerdefaultSpecifies a mesh partitioner to use when splitting the mesh for a parallel computation.
Default:default
C++ Type:MooseEnum
Options:default metis parmetis linear centroid hilbert_sfc morton_sfc
Description:Specifies a mesh partitioner to use when splitting the mesh for a parallel computation.
• centroid_partitioner_directionSpecifies the sort direction if using the centroid partitioner. Available options: x, y, z, radial
C++ Type:MooseEnum |
# Confusion regarding summation convention
In tensor calculus, I recently came across the formula for the angle between two vectors (non null) in Riemannian Space, which is as follows:
$cos \theta = \frac{g_{ij}A^iB^j}{\sqrt {g_{ij}A^iA^j}\sqrt {g_{ij}B^iB^j}}$; and the distance formula $|A|^2=g_{ij} A^iA^j$. I came across a problem related to this topic, where it says:
If $X^i =\frac{1}{\sqrt {g_{pq}Y^pY^q}}Y^i$ (where $X^i$ and $Y^i$ are vector components and $g_{ij}$ is the fundamental tensor), show that $X^i$ is a unit vector.
My question is whether the dummy indices in the denominators imply this:
$X^i =\frac{1}{\sqrt {\sum_p \sum_q {g_{pq}Y^pY^q}}}Y^i$
Or,
this: $X^i =\sum_p \sum_q {\frac{1}{\sqrt {g_{pq}Y^pY^q}}Y^i}$
If the first one is implied, then $|X|^2=g_{ij}X^iX^j= \frac{g_{ij}Y^iY^j}{\sqrt {g_{pq}Y^pY^q}\sqrt {g_{pq}Y^pY^q}}=\frac{g_{ij}Y^iY^j}{ {g_{pq}Y^pY^q}}= \frac{\sum_i \sum_j g_{ij}Y^iY^j }{\sum_p \sum_q g_{pq}Y^pY^q}=1$.
If my interpretation is wrong, then I don't know how to proceed. Kindly clear my doubts.
It's the first interpretation that is correct: $$X^i =\frac{1}{\sqrt {\sum_p \sum_q {g_{pq}Y^pY^q}}}Y^i$$
• Yes, $$\cos \theta = \frac{g_{ij}A^iB^j}{\sqrt {g_{ij}A^iA^j}\sqrt {g_{ij}B^iB^j}} = \frac{\sum_{ij} g_{ij}A^iB^j}{\sqrt {\sum_{ij} g_{ij}A^iA^j}\sqrt {\sum_{ij} g_{ij}B^iB^j}}$$ – md2perpe Sep 19 '18 at 11:18
The right interpretation is your first one: $$X^i =\frac{1}{\sqrt {\sum_p \sum_q {g_{pq}Y^pY^q}}}Y^i$$ |
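With the first interpretation, the claim is easy to verify numerically. A sketch with an arbitrary symmetric positive-definite $g$ (the specific matrix and vector are my own choices):

```python
import math

g = [[2.0, 0.3], [0.3, 1.0]]     # a symmetric, positive-definite metric
Y = [3.0, -1.0]

# g_{pq} Y^p Y^q with explicit double summation
norm_Y = math.sqrt(sum(g[p][q] * Y[p] * Y[q] for p in range(2) for q in range(2)))
X = [Y[i] / norm_Y for i in range(2)]

norm_X_sq = sum(g[i][j] * X[i] * X[j] for i in range(2) for j in range(2))
print(abs(norm_X_sq - 1.0) < 1e-12)  # True: X is a unit vector
```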
# Wrapper for Classification Models¶
Provides classes to wrap existing models from different frameworks so that they provide a unified API to the benchmarks.
* KerasModel: Create a Model instance from a Keras model.
* PyTorchModel: Create a Model instance from a PyTorch module.
class perceptron.models.classification.KerasModel(model, bounds, channel_axis=3, preprocessing=(0, 1), predicts='probabilities')[source]
Create a Model instance from a Keras model.
Parameters:
model : keras.model.Model
    The Keras model to be wrapped.
bounds : tuple
    Tuple of lower and upper bound for the pixel values, usually (0, 1) or (0, 255).
channel_axis : int
    The index of the axis that represents color channels.
preprocessing : 2-element tuple with floats or numpy arrays
    Elementwise preprocessing of input; we first subtract the first element of preprocessing from the input and then divide the input by the second element.
predicts : str
    Specifies whether the Keras model predicts logits or probabilities. Logits are preferred, but probabilities are the default.
backward(self, gradient, image)[source]
Get gradients w.r.t. the original image.
batch_predictions(self, images)[source]
Batch prediction of images.
model_task(self)[source]
Get the task that the model is used for.
num_classes(self)[source]
Return number of classes.
predictions_and_gradient(self, image, label)[source]
Returns both predictions and gradients.
class perceptron.models.classification.PyTorchModel(model, bounds, num_classes, channel_axis=1, device=None, preprocessing=(0, 1))[source]
Creates a Model instance from a PyTorch module.
Parameters:
model : torch.nn.Module
    The PyTorch model to be wrapped.
bounds : tuple
    Tuple of lower and upper bound for the pixel values, usually (0, 1) or (0, 255).
num_classes : int
    Number of classes for which the model will output predictions.
channel_axis : int
    The index of the axis that represents color channels.
device : string
    A string specifying the device to do computation on. If None, will default to "cuda:0" if torch.cuda.is_available() or "cpu" if not.
preprocessing : 2-element tuple with floats or numpy arrays
    Elementwise preprocessing of input; we first subtract the first element of preprocessing from the input and then divide the input by the second element.
backward(self, gradient, image)[source]
Get gradients w.r.t. the original image.
batch_predictions(self, images)[source]
Batch prediction of images.
model_task(self)[source]
Get the task that the model is used for.
num_classes(self)[source]
Return number of classes.
predictions_and_gradient(self, image, label)[source]
Returns both predictions and gradients. |
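Both wrappers share the same preprocessing convention: subtract the first element of the tuple, then divide by the second. A minimal NumPy sketch of just that convention (the helper name and array values are mine, for illustration):

```python
import numpy as np

def apply_preprocessing(x, preprocessing=(0, 1)):
    """Subtract the first element of `preprocessing`, then divide by the second."""
    sub, div = preprocessing
    return (x - sub) / div

image = np.array([0.0, 127.5, 255.0])
# A typical mean/scale-style normalization for pixel values in [0, 255]
out = apply_preprocessing(image, preprocessing=(127.5, 127.5))
print(out)  # [-1.  0.  1.]
```

With the defaults `(0, 1)` the input passes through unchanged, which is why that is the default in both wrappers.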
# Initial Configuration
### Supermassive Star
We consider a SMS model just prior to collapse, when it is described by a uniformly rotating $\Gamma=4/3$ polytrope spinning at the mass-shedding limit. The SMS is of arbitrary mass M and has an angular momentum $J/M^2=0.96$ and a rotational-to-gravitational-binding-energy ratio $T/|W|=0.009$. The equatorial radius of the star is $R_{\rm eq}=626M\approx 9.25\times 10^6\,({M}/{10^6M_{\odot}})$km; the polar radius satisfies $R_{\rm p}/R_{\rm eq}=2/3$. This model is marginally unstable to radial collapse due to GR, which is what triggers the collapse. No magnetic fields are present in this case.
Banach
A mathematician is a person who can find analogies between theorems; a better mathematician is one who can see analogies between proofs and the best mathematician can notice analogies between theories. One can imagine that the ultimate mathematician is one who can see analogies between analogies.
– Stefan Banach
https://pballew.blogspot.com/2019/03/on-this-day-in-math-march-30.html
## Apéry constants of homogeneous varieties
arXiv preprint arXiv:1604.04652 [math], 2016.
For Fano manifolds we define Apéry constants and the Apéry class as particular limits of ratios of coefficients of solutions of the quantum differential equation. We perform numerical computations in the case of homogeneous varieties. These numbers are identified as polynomials in the values of the Riemann zeta function at natural arguments.
## Project Euler Problem 138 Solution
#### Problem Description
Consider the isosceles triangle with base length, b = 16, and legs, L = 17.
By using the Pythagorean theorem it can be seen that the height of the triangle, h = √(17² − 8²) = 15, which is one less than the base length.
With b = 272 and L = 305, we get h = 273, which is one more than the base length, and this is the second smallest isosceles triangle with the property that h = b ± 1.
Find ∑ L for the twelve smallest isosceles triangles for which h = b ± 1 and b, L are positive integers.
#### Analysis
There are two ways to solve this problem. The first is to generate some triangles that match the requirements and search for an integer sequence; which we found. It turns out that one-half of every 6th iteration of the Fibonacci sequence, starting at the 9th, yields a solution for L. Namely: 34, 610, 10946, 196418, etc. or F(6n+3), n=1..12.
The second way, and actually more intuitive, is to solve for the Diophantine quadratic equation as:
(assuming x for b/2 to achieve integer coefficient for the resulting equation)
$2x\pm1 = \sqrt{L^2-x^2} , \\ (2x\pm1)^2 = L^2-x^2 , \\ 4x^2\pm4x+1 = L^2-x^2 , \\ 5x^2\pm4x+1-L^2 = 0$
Take a run over to: http://www.alpertron.com.ar/QUAD.HTM, and plug in the coefficients to calculate:
5x² − y² + 4x + 1 = 0
by Dario Alejandro Alpern
X0 = 0
Y0 = -1
and also:
X0 = 0
Y0 = 1
X(n+1) = P X(n) + Q Y(n) + K
Y(n+1) = R X(n) + S Y(n) + L
P = -9
Q = -4
K = -4
R = -20
S = -9
L = -8
We verified this solution, but never had the need to implement it.
#### Solution
Runs < 1 second in Python.
from Euler import Fibonacci

nt = 12
f = Fibonacci()
print "Answer to PE138 =", sum(f[6*n + 3] / 2 for n in range(1, nt + 1))
See Sloane’s A007805.
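The one-liner above relies on a project-specific `Euler` module. A self-contained Python 3 sketch of the same observation, L = F(6n+3)/2, using a plain iterative Fibonacci helper of my own:

```python
def fib(k):
    """Iterative Fibonacci with F(0) = 0, F(1) = 1."""
    a, b = 0, 1
    for _ in range(k):
        a, b = b, a + b
    return a

# L = F(6n+3) / 2 for n = 1..12 gives the twelve smallest leg lengths
legs = [fib(6 * n + 3) // 2 for n in range(1, 13)]
print(legs[:2])   # [17, 305], matching the two triangles stated in the problem
print(sum(legs))  # the required sum of L
```

The first two values reproduce the L = 17 and L = 305 triangles from the problem statement, which is a quick sanity check on the index pattern 6n+3.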
## Discussion
### 5 Responses to “Project Euler Problem 138 Solution”
1. Clearly the constant parameter should be 2, not 1.
For 2 Dario’s website says there are no answers, care to explain how exactly you’ve verified that solution?
Posted by David | August 23, 2012, 2:46 PM
2. I used this method. Yet, my answer is still incorrect. Can you help me figure out why?
Posted by Jeffrey Robles | October 11, 2012, 1:11 AM
• The input to the solver is correct but the comment read 4x² ± 4x + 2 and I had to correct it to 4x² ± 4x + 1. Sorry, that was probably throwing you off. It’s correct now.
Posted by Mike | October 29, 2012, 2:13 AM |
# Roots of the XXZ Bethe Ansatz equation
The XXZ spin chain Bethe Ansatz equations are a complicated system of rational-function equations:
$\left(\frac{\lambda_j + i/2}{\lambda_j - i/2} \right)^N = \prod_{l=1, l \neq j}^M \frac{\lambda_j - \lambda_l + i}{\lambda_j - \lambda_l - i}$
Generically, since there are M equations and M unknowns, the solutions in $\lambda$ form a discrete set of points.
Is there any other logic to their roots besides the fact that they solve these equations?
Spin chains have to do with the raising and lowering operators in the 2-dimensional representation of $SU(2)$ : $\left.\begin{array}{cccc} S^+ : & |\uparrow \rangle & \mapsto & 0 \\ & |\downarrow \rangle & \mapsto & |\uparrow \rangle \\ \hline S^- : & |\uparrow \rangle & \mapsto & |\downarrow \rangle \\ & |\downarrow \rangle & \mapsto & 0 \\ \hline \\ S^z : & |\uparrow \rangle & \mapsto & \tfrac{1}{2}|\uparrow \rangle \\ & |\downarrow \rangle & \mapsto & \tfrac{1}{2}|\downarrow \rangle \end{array}\right.$
The Hamiltonian acts on a tensor product of $SU(2)$ representations $V^{\otimes M}$.
$\mathcal{H} = - \frac{1}{2} \sum_{n=1}^M \bigg[ \sigma_n^x \sigma_{n+1}^x + \sigma_n^y \sigma_{n+1}^y + \Delta \sigma_n^z \sigma_{n+1}^z\bigg]$
Various values of $\Delta$ have different interpretations. Since $[\mathcal{H}, S^z] = 0$ we can diagonalize in terms of the $\uparrow, \downarrow$ states. The Bethe Ansatz equations are the eigenvalue equations for $\mathcal{H}$ in this basis.
None of this tells you how to solve the eigenvalue equations, here they are.
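The claim $[\mathcal{H}, S^z] = 0$ can be checked numerically by building the Hamiltonian explicitly for a small chain (the chain length M = 4 and anisotropy Δ = 0.7 below are arbitrary choices of mine):

```python
import numpy as np

# Pauli matrices and the 2x2 identity
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]])
sz = np.array([[1, 0], [0, -1]], dtype=complex)
I2 = np.eye(2)

def site_op(op, n, M):
    """Embed a single-site operator at site n of an M-site chain via Kronecker products."""
    ops = [I2] * M
    ops[n] = op
    out = ops[0]
    for o in ops[1:]:
        out = np.kron(out, o)
    return out

def xxz_hamiltonian(M, delta):
    H = np.zeros((2**M, 2**M), dtype=complex)
    for n in range(M):           # periodic boundary: site M+1 is site 1
        m = (n + 1) % M
        H -= 0.5 * (site_op(sx, n, M) @ site_op(sx, m, M)
                    + site_op(sy, n, M) @ site_op(sy, m, M)
                    + delta * site_op(sz, n, M) @ site_op(sz, m, M))
    return H

M, delta = 4, 0.7
H = xxz_hamiltonian(M, delta)
Sz = sum(site_op(0.5 * sz, n, M) for n in range(M))  # total S^z
comm = H @ Sz - Sz @ H
print(np.max(np.abs(comm)))  # ~0: H and total S^z are simultaneously diagonalizable
```

Because the commutator vanishes, H can be diagonalized block by block in the sectors of fixed magnon number M, which is exactly the setting of the Bethe Ansatz.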
Sorry, are the S's sigmas? Also, the Bethe equations are not exactly for the eigenvalues; the eigenvalues can be expressed via them... – Alexander Chervov Jul 13 '13 at 15:47
If I read correctly $S^\pm = \sigma^x \pm i \sigma^y, S^z = \sigma^z$. The raising-lowering operators, being slightly different from Pauli spin matrices. – john mangual Jul 13 '13 at 16:00
The Bethe equations and Bethe ansatz are the central tools to study quantum integrable systems and there are thousands papers devoted to them. I cannot pretend to know substantial part of this research, so let me write some remarks which I am aware of.
You ask "... is there logic to their roots ... "
Yes, there is some logic - the keyword is "string hypothesis" (it is NOT related to string theory).
Let me quote from String hypothesis for gl(n|m) spin chains: a particle/hole democracy Section 3.1 page 10.
Suppose that N is large and some Bethe root $\lambda_n$ has a positive imaginary part. Then the l.h.s of (26) is exponentially large with N. To achieve this large value on the r.h.s. there should be another Bethe root $\lambda_n′ ∼ \lambda_n − i$, with the help of which the pole in the r.h.s. is created. Repeating the same arguments for $\lambda_n′$ and using the reality of solution of the Bethe Ansatz [36], we conclude that the Bethe roots are organized in the complexes of the type:
$\lambda_k = \lambda_0 + ik, ~~ k= -(s-1)/2, -(s-3)/2, ..., +(s-1)/2.$
where s is an integer. These complexes are called s-strings.
String hypothesis in its strong form states that all solutions of the Bethe Ansatz equations can be represented as a collection of strings, and that $\lambda_k$ are approximated by $\lambda_0 + ik$ values with exponential in N precision. In its strong form the string hypothesis is wrong. However there is an evidence that its weaker version is correct if the proper thermodynamic limit is taken. The weaker version states that most of the Bethe roots are organized into strings with exponential in N precision, and that the fraction of solutions which significantly differ from (27) decreases to 0 when N → ∞. We discuss in more details applicability of the string hypothesis in appendix A.
# Array of 4x7 Segment displays using demultiplexer. Need PNP transistors!
My first post here! I'm that most dangerous of EE-wannabes—a software guy with theoretical knowledge: V=IR, and transistors come in two flavours: "NPN" (1=On) and "PNP" (1=Off)
## Introduction
@stevenvh almost answered my question in this beautiful answer, except that I want to gang up four 4x7 segment displays - and use two demultiplexers rather than discrete selection pins. I've repeated his diagram here:
### My basic values
• 5V supply
• Common cathode displays
• Blue LEDs with 3.0V drop @ 20mA
## My problem
My problem is that I can't get hold of 74LS238 (3-to-8 demultiplexers, decodes High), I can only get 74LS138 (3-to-8 demultiplexers, decodes Low). That is, all outputs except the decoded one are High.
That means that I need to use PNP transistors rather than NPN in his diagram, and put them in "upside down" (collector to ground).
### My questions
1. Is that all I need to change?
2. Which PNP transistor would you recommend?
• Would it not be easier to use a single 74ls154 (4 to 16 demux)? – JIm Dearden Jun 21 '16 at 14:42
• You could also run the output through a set of 8 inverters to change its polarity. Or build an inverting driver with a pair of NPN transistors. – pjc50 Jun 21 '16 at 15:48
• @JImDearden Wow! I didn't know they existed - thanks for that! (Just goes to show my age...) But it's still "decode-low", requiring PNP. Is it true that PNPs don't require base resistors? I've seen conflicting reports on the 'Net. – John Burger Jun 23 '16 at 9:54
• As regards using a (series) base resistor, it all depends on the circuit driving them (some limit the current available, others don't). As a general rule I'd go for the default position of using a resistor unless you know for certain the base current will be limited to a safe value. BTW this has nothing to do with just PNP - it also applies equally to NPN BJTs. – JIm Dearden Jun 23 '16 at 11:51
• You may also be interested in this little chip as well , the max7219, capable of driving 8 * 7 seg common cathode LED displays using only 3 control lines! sparkfun.com/datasheets/Components/General/… - saves I/O and a lot of soldering. You can even buy them as ready built modules - just google on popular shopping sites. – JIm Dearden Jun 23 '16 at 12:21
The top half of the diagram can go basically unchanged: except the resistors need to be lower to accommodate the Blue LED's different voltage.
Given a transistor drop of 0.7V, and wanting a 20mA current through the LED, figure:
5V − 3V − 0.7V = 1.3V across the resistor; 1.3V = 0.02A × R, so R = 65Ω.
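That arithmetic as a tiny sketch (the 0.7 V transistor drop is this answer's assumption; see the comments questioning it):

```python
# Hypothetical values from the answer above: 5 V supply, 3.0 V blue-LED drop,
# an assumed 0.7 V transistor drop, and a 20 mA target LED current.
v_supply, v_led, v_transistor, i_led = 5.0, 3.0, 0.7, 0.020

r = (v_supply - v_led - v_transistor) / i_led
print(r)  # ~65 ohms; in practice round to the nearest standard value, e.g. 68 ohms
```

If the drop across the transistor is closer to a saturation voltage of 0.2 V, the same formula gives a larger resistor, so the choice of that figure matters.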
The bottom left of the diagram would replace the bank of four resistors with both the demuxers (outputs 0-2 to A-C on both demuxers, and using output 3 as a chip-enable with ~G2A on one and G1 on the other), then put resistors on the sixteen outputs to the transistors.
The resistors could probably stay at 10kΩ. Then again, @SperoPefhany in this answer suggests that no resistor is required at all...
The bottom right of the diagram would replace the NPN transistors with PNP, with their collectors connected to ground - and there'd be sixteen of them.
Given @stevenvh recommended BC337, the PNP complement is BC327.
• Please combine it into the question – Eugene Sh. Jun 21 '16 at 14:09
• @EugeneSh. Umm, I thought I was supposed to put my answer as a candidate, and let others improve it or rubbish it? Are things different here? – John Burger Jun 21 '16 at 14:11
• A definite answer will do. A potential one is questionable.. – Eugene Sh. Jun 21 '16 at 14:13
• 'Transistor drop of 0.7V' - Are you confusing Vbe (0.6 - 0.7V) with Vce sat (0.05 - 0.2V)? – JIm Dearden Jun 21 '16 at 14:41
• I know you want to share your hard-earned wisdom, but there's hardly a need to do so. Most users here are able to drive some LEDs without your post, and the ones who can't are usually unable to apply concepts from your answer to their circuit. I'll still upvote the question though, just to reward good intentions. – Dmitry Grigoryev Jun 21 '16 at 15:26 |
# Tag: Clearly Explained
## Skewness in Wolfram Alpha: Clearly Explained!
The positional measure known as the skewness allows you to assess the symmetry of a distribution. When the skewness is equal to zero, the distribution is symmetric. You…
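The symmetric case can be checked directly with the third standardized moment (a minimal NumPy sketch, independent of Wolfram Alpha; the sample data are mine):

```python
import numpy as np

def skewness(x):
    """Sample skewness: the third standardized moment."""
    x = np.asarray(x, dtype=float)
    m = x.mean()
    s = x.std()  # population standard deviation
    return np.mean(((x - m) / s) ** 3)

symmetric = [-2, -1, 0, 1, 2]    # mirror-symmetric around 0
right_skewed = [0, 0, 1, 1, 10]  # long right tail
print(skewness(symmetric), skewness(right_skewed))  # 0.0 and a positive value
```

A perfectly symmetric sample gives skewness zero, while the long right tail pulls the third moment positive.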
## Why is Big-O of any constant is always equal to O(1)? Clearly explained!
The Big-O notation may seem quite obscure when you see it for the first time. A good way to intuitively understand this notation is to consider the case…
## L’inférence statistique : clairement expliquée
Dans cette vidéo à but pédagogique, je tente d’expliquer de manière intuitive le concept d’inférence statistique. Il s’agit d’un concept très intuitif que l’on peut appréhender sans formule… |
# Does finding this conditional expectation boil down to finding a conditional probability?
For a random variable $T \geq 0$ with distribution function $F$, a real number $t > 0$ and $\mathcal{G}= \{ \emptyset, \{ T > t \}, \, \{ T \leq t \}, \, \Omega \}$, I need to evaluate the conditional expectation $E[\,|T-t| \mid \mathcal{G}\,]$, where $\mathcal{G}$ is my sub-$\sigma$-algebra.
Now, I asked somebody how to do this, and they told me that because the $\sigma$-algebra is so small, if I want to find this conditional expectation, all I have to do is find $P(T\leq x \mid T>t)$, which seems very strange to me. What is the reasoning behind this?
This person then said to apply this formula and then express everything in terms of $F$, but I don't know what the distribution is in this case, do I? So then, how would I go about doing this?
I am extremely confused...
I thank you in advance for your time and patience in helping me to become un-confused!
• Can you give what $F$ is? Or the answer should be given in terms of $F$ without specifying what $F$ is? – Shashi Dec 21 '17 at 18:19
• @Shashi the answer should be given in terms of $F$ wihtout specifying what $F$ is. .Can you do that? – ALannister Dec 21 '17 at 18:24
• One more question is $T$ a continuous random variable? I don't think it affects the answer much, but it is nice to know – Shashi Dec 21 '17 at 18:29
• @Shashi again, that is unknown. I gave all the information I had in my question above. – ALannister Dec 21 '17 at 18:31
## 1 Answer
It is given that $\mathcal G=\{\emptyset, A,A^c,\Omega\}$, where $A=\{T>t\}$, and to make it a little bit interesting let us assume $\mathbb{P}(A)\in(0,1)$, otherwise this exercise is boring $(\star)$. One knows that $Y:=\mathbb{E}[\,|T-t|\mid \mathcal G]$ is the only (up to a null set) $\mathcal G$-measurable function that satisfies: \begin{align}\tag{1} \int_B |T-t|\,d\mathbb{P} = \int_B Y\,d\mathbb{P} ,\hspace{15pt} \forall_{ B\in\mathcal G} \end{align} I claim the following: \begin{align} Y(\omega)= \begin{cases} \mathbb{E}[\mathbf{1}_A|T-t|](\mathbb{P}(A))^{-1} & \text{ if } & \omega\in A\\ \mathbb{E}[\mathbf{1}_{A^c}|T-t|](\mathbb{P}(A^c))^{-1} & \text{ if } & \omega\in A^c\\ \end{cases} \end{align} You may verify that this indeed satisfies $(1)$.
We have a problem now, because we did not put everything in terms of $F$. Let's do it now. We have: \begin{align} \mathbb{E}[\mathbf{1}_A |T-t|](\mathbb{P}(A))^{-1}&\stackrel{?}{=}\mathbb{E}[\mathbf{1}_AT](\mathbb{P}(A))^{-1}-\mathbb{E}[\mathbf{1}_At](\mathbb{P}(A))^{-1}\\ &=\mathbb{E}[\mathbf{1}_AT](\mathbb{P}(A))^{-1}-t\\ &=\frac{\mathbb{E}[\mathbf{1}_AT]}{1-F(t)}-t \end{align} And you think about the question mark. Similarly we get: \begin{align} \mathbb{E}[\mathbf{1}_{A^c}| T-t|](\mathbb{P}(A^c))^{-1}=t-\frac{\mathbb{E}[\mathbf{1}_{A^c}T]}{F(t)} \end{align} Putting these things together yields: \begin{align} \mathbb{E}[|T-t|\mid \mathcal G] = \mathbf{1}_{A^c}\left(t-\frac{\mathbb{E}[\mathbf{1}_{A^c}T]}{F(t)}\right)+\mathbf{1}_{A}\left(\frac{\mathbb{E}[\mathbf{1}_AT]}{1-F(t)}-t \right) \end{align} Do you now understand why we assumed $\mathbb{P}(A)\in(0,1)$ at ($\star$)? What would change?
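The formula can be sanity-checked by Monte Carlo for a concrete choice of $T$ (an Exponential(1) variable with $t = 1$, both my choices; by memorylessness the constant on $A=\{T>t\}$ should be exactly 1):

```python
import numpy as np

rng = np.random.default_rng(0)
T = rng.exponential(1.0, size=1_000_000)
t = 1.0

A = T > t
# E[|T-t| | G] is constant on A and on its complement:
on_A  = np.abs(T[A] - t).mean()    # estimates E[1_A |T-t|] / P(A)
on_Ac = np.abs(T[~A] - t).mean()   # estimates E[1_{A^c} |T-t|] / P(A^c)

# Closed forms from F(x) = 1 - exp(-x): 1 on A, and t - E[1_{A^c} T] / F(t) on A^c
exact_Ac = t - (1 - 2 / np.e) / (1 - 1 / np.e)
print(on_A, on_Ac, exact_Ac)
```

Averaging $|T-t|$ over the subsample where $A$ occurred is exactly the ratio $\mathbb{E}[\mathbf{1}_A|T-t|]/\mathbb{P}(A)$ from the answer, which is why the simulation matches the closed forms.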
The shortest distance between the line
Question:
The shortest distance between the line $x-y=1$ and the curve $x^{2}=2 y$ is :
1. $\frac{1}{2}$
2. $\frac{1}{2 \sqrt{2}}$
3. $\frac{1}{\sqrt{2}}$
4. 0
Correct Option: , 2
Solution:
Shortest distance between curves is always along common normal.
For the curve $x^{2}=2y$ we have $\frac{dy}{dx}=x$, and at the nearest point $P$ the tangent must be parallel to the line: $\left.\frac{d y}{d x}\right|_{P}=$ slope of line $=1$
$\Rightarrow x_{0}=1$ $\therefore \mathrm{y}_{0}=\frac{x_{0}^{2}}{2}=\frac{1}{2}$
$\Rightarrow \mathrm{P}\left(1, \frac{1}{2}\right)$
$\therefore$ Shortest distance $=\left|\frac{1-\frac{1}{2}-1}{\sqrt{1^{2}+1^{2}}}\right|=\frac{1}{2 \sqrt{2}}$ |
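The result can be confirmed numerically by sampling points on the parabola and measuring their distance to the line (a brute-force sketch; the grid is my choice):

```python
import numpy as np

x = np.linspace(-3.0, 3.0, 600001)
y = x**2 / 2                          # points on the curve x^2 = 2y
d = np.abs(x - y - 1) / np.sqrt(2)    # distance from (x, y) to the line x - y = 1
i = d.argmin()
print(d[i], x[i])  # ~ 1/(2*sqrt(2)) ≈ 0.3536, attained near x = 1
```

The minimum lands at x = 1, the point where the tangent of the parabola is parallel to the line, in agreement with the common-normal argument.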
# Energy profile of Nigeria
May 10, 2013, 6:54 pm
Topics:
Nigeria is the largest oil producer in Africa and has been a member of the Organization of Petroleum Exporting Countries (OPEC) since 1971. In 2011, Nigeria produced about 2.53 million barrels per day (bbl/d) of total liquids, well below its oil production capacity of over 3 million bbl/d, due to production disruptions that have compromised portions of the country's oil output for years.
Nigeria's hydrocarbon resources are the mainstay of the country's economy, but development of the oil and natural gas sectors is often constrained by instability in the Niger Delta.
The Nigerian economy is heavily dependent on its hydrocarbon sector, which accounted for more than 95 percent of export earnings and more than 75 percent of federal government revenue in 2011, according to the International Monetary Fund (IMF).
The oil industry is primarily located in the Niger Delta where it has been a source of conflict. Local groups seeking a share of the oil wealth often attack the oil infrastructure and staff, forcing companies to declare force majeure on oil shipments. At the same time, oil theft, commonly referred to as "bunkering," leads to pipeline damage that is often severe, causing loss of production, pollution, and forcing companies to shut in production. Protests from local groups over environmental damage from oil spills and flaring have undermined relations between local communities and international oil companies (IOCs). The industry has been blamed for pollution that has damaged air, soil, and water, leading to losses in arable land and decreasing fish stocks.
In addition to oil, Nigeria holds the largest natural gas reserves in Africa, but has limited infrastructure in place to develop the sector. Natural gas that is associated with oil production is mostly flared, but the development of regional pipelines, the expansion of liquefied natural gas (LNG) infrastructure, and policies to ban gas flaring are expected to accelerate growth in the sector, both for export and domestic use in electricity generation. Uncertainties in Nigeria's investment policies and regulatory framework have caused a slowdown in oil and gas exploration activity, and delays in project development, including LNG projects. However, the long-awaited and delayed Petroleum Industry Bill (PIB) could potentially iron out investment uncertainties and set a regulatory framework for the country's oil and gas industry.
The first draft of the PIB was initially introduced in 2008, with the purpose of restructuring the hydrocarbon sector, clarifying regulatory and operational roles of Nigerian energy institutions, and increasing government take and local content requirements. Passage of the PIB has been stalled by a lack of support, notably by IOCs, and also ongoing debate within the Nigerian government. Nonetheless, after several rounds of revisions, there are indications that IOCs have a more positive perception of the bill, although concerns have been expressed, most recently by Shell. The PIB has been sent to Nigeria's National Assembly.
EIA estimates that in 2010 total energy consumption was about 4.4 Quadrillion Btu (111,000 kilotons of oil equivalent). Of this, traditional biomass and waste accounted for 82 percent of total energy consumption. This high percent share represents the use of biomass to meet off-grid heating and cooking needs, mainly in rural areas. IEA data for 2009 indicate that electrification rates for Nigeria were 50 percent for the country as a whole – leaving approximately 76 million people without access to electricity in Nigeria. Other estimates place the countrywide electrification rate as low as 45 percent.
Nigeria has vast natural gas, coal, and renewable energy resources that could be used for domestic electricity generation. However, the country lacks policies to harness resources and develop new (and improve current) electricity infrastructure. The Nigerian government has had several plans to address the need for power, including a recent announcement to create 40 gigawatts (GW) of capacity by 2020 (compared to 2009 installed capacity of 6 GW). Achieving this goal will mainly depend on the ability of the Nigerian government to utilize currently flared natural gas.
# Oil
For the last nine years, the U.S. has imported between 9-11 percent of its crude oil from Nigeria; however, U.S. import data for the first half of 2012 show that Nigerian crude is down to a 5 percent share of total U.S. crude imports.
According to Oil and Gas Journal (OGJ), Nigeria has an estimated 37.2 billion barrels of proven oil reserves as of the end of 2011. The majority of reserves are found along the country's Niger River Delta and offshore in the Bight of Benin, the Gulf of Guinea, and the Bight of Bonny. Current exploration activities are mostly focused in the deep and ultra-deep offshore with some activities in the Chad basin, located in the northeast of the country.
The government hopes to increase proven oil reserves to 40 billion barrels in the next few years; however, exploration activity levels are at their lowest in a decade and only three exploratory wells were drilled in 2011, compared to over 20 in 2005. Rising security problems related to oil theft, pipeline sabotage, and piracy in the Gulf of Guinea, coupled with investment uncertainties surrounding the long-delayed PIB, have curtailed oil exploration projects and impeded the country from reaching its ongoing target to increase production to 4 million bbl/d. Instead, crude oil production averaged 2.13 million bbl/d in 2011, roughly the same as it was a decade ago, and total liquids production averaged 2.53 million that same year, which is still below the peak production of 2.63 million bbl/d reached in 2005.
## Production
In 2011, crude oil production averaged close to 2.13 million bbl/d, up from 2.05 million bbl/d in the previous year. EIA's recent estimates show that crude output rose slightly again in 2012 and averaged almost 2.15 million bbl/d for the first half of this year. The recent increase in production is due to the expansion of existing fields and new production from deepwater fields. The latest major deepwater field to come onstream was Total's Usan field, which began producing over 100,000 bbl/d in July 2012 and is expected to reach 180,000 bbl/d by the end of this year.
Oil production in Nigeria reached its peak of 2.63 million bbl/d in 2005, but began to decline significantly as violence from militant groups surged, forcing many companies to withdraw staff and shut in production. The lack of transparency of oil revenues, tensions over revenue distribution, and environmental damages from oil spills, coupled with local ethnic and religious tensions, have created a fragile situation in the oil-rich Niger Delta basin. As a result, crude oil production plummeted by more than 25 percent by 2009, four years after reaching its peak.
Towards the end of 2009, an amnesty was declared and the militants came to an agreement with the government whereby they handed over weapons in exchange for cash payments and training opportunities. The rise in oil production after 2009 was partially due to the reduction in attacks on oil facilities following the implementation of the amnesty program, which allowed companies to repair some damaged infrastructure and bring some supplies back online. Another major factor that contributed to the upward trend in output was the continued increase in new deepwater offshore production. The government began taking measures to attract investment in deepwater acreage in the 1990s in order to boost production capacity and diversify the country's oil fields, as security issues in the Niger Delta escalated. In order to incentivize investments in deepwater areas, which involve higher capital and operating costs, the government offered production-sharing contracts (PSC) in which IOCs received a greater share of revenue as the depth increased.
Although terms within the PSCs have been revised over time to provide the government with larger shares in revenue, the policy did facilitate greater investment and production in deepwater fields. The first deepwater field began production in 2003, and since then output from deepwater fields has added over 800,000 bbl/d to the country's production capacity.
As an OPEC member, Nigeria has agreed to a crude oil production quota of 1.704 million bbl/d. However, the country still plans on bringing online several projects in the next few years. Planned upstream developments, particularly deepwater projects, should increase Nigerian oil production in the medium term, but the timing of these startups will depend heavily on the passing of the PIB and the fiscal/regulatory terms it requires of the oil industry. Many of the planned projects described below have already been delayed.
Upcoming oil projects in Nigeria

| Project | Capacity ('000 bbl/d) | Est. Startup | Sector | Operator |
|---|---|---|---|---|
| Agbami¹ | 100 | 2011-2014 | Deepwater | Chevron |
| Ebok (phase 2) | 35 | 2012 | Offshore | Afren |
| Gbaran Ubie² | 70 | 2012+ | Onshore | Shell |
| Ehra North (phase 2) | 50 | 2013+ | Deepwater | ExxonMobil |
| Oberan | tbd | 2013+ | Deepwater | Eni (Agip) |
| Ofon (phase 2)³ | 90 | 2014 | Offshore | Total |
| Aje | tbd | 2014 | Deepwater | Yinka Folawyo Petroleum |
| Bonga North, Northwest | 50-150 | 2014+ | Deepwater | Shell |
| Bonga Southwest and Aparo | 140 | 2014+ | Deepwater | Shell |
| Egina | 150-200 | 2014+ | Deepwater | Total |
| Bosi | 135 | 2015 | Deepwater | ExxonMobil |
| Nsiko | 100 | 2015+ | Deepwater | Chevron |
| Uge | 110 | 2016 | Deepwater | ExxonMobil |
| Nkarika | tbd | 2019 | Offshore | Total |
| Etan/Zabazaba | 110 | tbd | Deepwater | Eni (Agip) |
| Okan | 35 | tbd | Offshore | Chevron |

¹Expansion of existing Agbami field; drilling activities expected to continue through 2014 (Chevron).
²Production began in 2010 and is expected to ramp up to 70,000 bbl/d once all wells are drilled.
³Ofon (phase 1) is currently producing around 30,000 bbl/d and phase 2 is expected to increase capacity to 90,000 bbl/d.
Note: Deepwater projects have a water depth greater than 200 meters.
Sources: Oil and Gas Journal; IEA Medium Term Oil Market Report; IHS CERA; Wood Mackenzie; Total; Chevron; Rigzone; Business Week; OPEC Secretariat
#### Security risks
Since December 2005, Nigeria has experienced increased pipeline vandalism, kidnappings, and militant takeovers of oil facilities in the Niger Delta. The Movement for the Emancipation of the Niger Delta (MEND) is the main group attacking oil infrastructure for political objectives, claiming to seek a redistribution of oil wealth and greater local control of the sector. Additionally, kidnappings of oil workers for ransom are common and security concerns have led some oil services firms to pull out of the country and oil workers unions to threaten strikes over security issues. The instability in the Niger Delta has also caused significant amounts of shut-in production at onshore and shallow offshore fields, and forced several companies to declare force majeure on oil shipments.
The amnesty program implemented in 2009 led to decreased attacks in 2009-2010 and some companies were able to repair damaged oil infrastructure. However, the lack of progress in job creation and economic development has led to increased bunkering and other attacks in 2011.
Bunkering, which in the context of Nigeria's oil industry refers to the theft and trade of stolen oil, has recently surged, and according to NNPC data, pipeline vandalism increased by 224 percent in 2011 over the previous year. Estimates from Nigeria's Ministry of Finance show that about 400,000 bbl/d of oil was stolen in April 2012, which led to a fall of about 17 percent in official oil sales. Royal Dutch Shell, Nigeria's largest producer, recently estimated that 150,000-180,000 bbl/d, or 6 percent of the country's total production, on average is lost to oil bunkering and spills.
According to information disseminated by an investigative task force in Nigeria, there are three main ways oil is bunkered: by small cargo canoes that navigate the swampy, shallow waters of the Niger Delta where culprits puncture pipelines to siphon crude into small tanks; stealing crude directly from the wellhead; or filling tankers at export terminals, which is referred to as "white collar" bunkering. Some stolen oil is taken to illegal refineries along the Niger Delta's swampy bush areas and sold domestically and regionally, while other portions make their way to the international market. Some analysts believe that white collar bunkering is not included in Shell's oil theft estimate and that the average amount stolen is actually closer to the Finance Ministry's estimate, although there is no official number.
In addition to losses in official oil sales, oil theft and illegal refineries are causing environmental damage and costing the country $7 billion a year, according to the Nigerian government. According to the Nigerian National Oil Spill Detection and Response Agency (NOSDRA), approximately 2,400 oil spills were reported between 2006 and 2010 that resulted from sabotage, bunkering, and poor infrastructure. The amount of oil spilled in Nigeria has been estimated to be around 260,000 barrels per year for the past 50 years, according to a report cited in the New York Times. The oil spills have caused land, air, and water pollution and severely affected surrounding villages by decreasing fish stocks and contaminating water supplies and arable land. The United Nations Environment Program (UNEP) released a study on Ogoniland and the extent of environmental damage from over 50 years of oil production in the region. The study confirmed community concerns regarding oil contamination across land and water resources, stating that the damage is ongoing and estimating that it could take 25 to 30 years to repair.

Incidents of piracy in the Gulf of Guinea have posed a risk to deepwater offshore operations. According to the International Maritime Organization (IMO), there were 53 piracy-related attacks in the Gulf of Guinea in 2011, up from 47 in 2010. The attacks typically involve stolen cargo, especially crude oil, and violence against crew members. The country's deepwater offshore production has been relatively unharmed by the country's instability, as oil platforms are miles out from the coast, but piracy does pose a security risk to deepwater operations. Although incidences of Nigerian piracy are still less frequent than off Somalia, Nigerian piracy is becoming more frequent and happening at greater distances from the coast, according to the IMO.

## Sector organization

In 1977, Nigeria created the Nigerian National Petroleum Company (NNPC).
At that time, NNPC's primary function was to oversee the regulation of the Nigerian oil industry, with secondary responsibilities for upstream and downstream developments. In 1988, the Nigerian government divided the NNPC into 12 subsidiary companies in order to better manage the country's oil industry. The majority of Nigeria's major oil and natural gas projects are managed through JVs with the NNPC. #### Recent developments The government has been planning to transform NNPC into a more profit-driven company that can seek out private financing. While these discussions have been underway for many years, a Petroleum Industry Bill (PIB) is currently being debated by the National Assembly to reform the entire hydrocarbon sector. Parts of the bill have recently been approved as standalone laws, while the different agencies and roles of the new national oil company and the NNPC have yet to be fully defined. Differing versions of the PIB are currently under debate, especially around more contentious points such as the renegotiation of contracts with international oil companies, the changes in tax and royalty structures, and clauses to ensure that companies use or lose their assets. The ongoing debate has delayed investments in both the oil and natural gas sectors. As part of the energy sector reform, in April 2010, then acting president (now president) Goodluck Jonathan signed the Nigerian Content Development Bill (NCD) into law. The bill aims to increase the role of Nigerian companies in all aspects of the oil and gas industry. The new law requires that Nigerian companies obtain contracts and win bids so long as the local company is capable, the Nigerian content is higher, and the bid is not more than 10 percent higher than the competing bid. According to the African Oil and Gas Monitor (Afroil), the NCD applies to all contracts worth over US$1 million and also applies to insurance, banking, and other sectors tied into the oil industry.
The distribution of oil revenue has been a very contentious issue in the country, since all revenue, including production proceeds, corporate tax, customs duties, and value-added tax, goes directly to the federal government account. The lack of transparency and mismanagement of oil revenue has sparked mistrust among the federal government, states, and local councils. The 1999 constitution carved out a revenue-sharing arrangement in which 13 percent of oil revenue from onshore production goes directly to the nine oil producing states in the Niger Delta, with the remaining revenue allocated to the federal government (47.2 percent), states (31.1 percent), local councils (15.2 percent), and National Priorities Services Fund (6.5 percent).
Disagreement over the current revenue-sharing arrangement is one of the main issues driving the political tension, theft, and sabotage in the Niger Delta, and groups have demanded to extend their revenue-share to offshore production and increase their onshore revenue-share to 50 percent. However, the effect that the PIB will have on the current revenue-sharing arrangement is unclear.
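As a rough numerical illustration of the 1999 constitutional formula described above, the sketch below applies the 13 percent derivation off the top of onshore revenue and then splits the remainder by the stated percentages. The input figure and function name are hypothetical, for illustration only:

```python
def split_onshore_revenue(total):
    """Illustrative sketch of the 1999-constitution sharing formula:
    13% of onshore oil revenue goes to the nine producing states,
    and the remainder is allocated by the stated percentages."""
    derivation = 0.13 * total          # nine Niger Delta producing states
    remainder = total - derivation
    return {
        "derivation_13pct": derivation,
        "federal_47.2pct": 0.472 * remainder,
        "states_31.1pct": 0.311 * remainder,
        "local_councils_15.2pct": 0.152 * remainder,
        "priorities_fund_6.5pct": 0.065 * remainder,
    }

# Example: 100 units of onshore revenue
shares = split_onshore_revenue(100.0)
```

Because the four allocation percentages sum to 100, the shares always reconstruct the original total.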
Petroleum Industry Bill (PIB), draft 2012: key points
• Increase exploration activities and expand reserves
• Monetize natural gas reserves and reduce flaring
• Separate regulators for the upstream, midstream, and downstream sectors
• Deregulate the downstream sector
• Offer acreage through bid rounds
• Increase government take (higher royalties, lower production taxes)
• Increase local participation through employment, related industries, and local oil & gas companies
• Establish a Petroleum Host Communities Fund (PHCF)
Note: These measures may not appear in the final version of the PIB.
Source: Energy Intelligence, Reuters, Financial Times, and IHS Cera.
## International oil companies
Foreign companies operating in joint ventures (JVs) or production sharing contracts (PSCs) with the NNPC include ExxonMobil, Chevron, Total, Eni, Addax Petroleum (recently acquired by Sinopec of China), ConocoPhillips, Petrobras, StatoilHydro, and others.
Shell has been working in Nigeria since 1936 and currently operates the largest nameplate crude oil production capacity, estimated at 1.2-1.3 million bbl/d. However, the company has been hardest hit by the instability, as much of its production is in shallow water and onshore in the Niger Delta. Much of Shell's crude oil production capacity is shut-in, some since as far back as early 2006. According to Shell, total oil produced from Shell-run operations averaged 974,000 bbl/d in 2011.
Shell operates in Nigeria through the Shell Petroleum Development Company of Nigeria Limited (SPDC) and the Shell Nigeria Exploration and Production Company Limited (SNEPCo). SPDC is the largest oil and gas company in Nigeria and is a joint venture between NNPC (55%), Shell (30%), Elf Petroleum Nigeria Limited — a subsidiary of Total — (10%), and Agip (5%). SPDC's operations include a network of pipelines, nine gas plants, and two export terminals. Shell owns 100 percent of SNEPCo, which was formed in 1993 to develop Nigeria's deepwater oil and gas resources offshore. Under a PSC with NNPC, it operates the Bonga deepwater oil and gas project and is a venture partner in the Erha deepwater oil and gas project with ExxonMobil.
ExxonMobil, the second largest IOC, operates fields producing approximately 800,000 bbl/d (700,000 bbl/d of crude) in partnership with NNPC. Chevron is the third largest oil producer in Nigeria and produced an average of 516,000 bbl/d of crude oil in 2011. The company operates under its subsidiary, Chevron Nigeria Limited, and holds 40 percent interest in 13 concessions under a joint venture arrangement with NNPC. Most of its oil projects are in shallow water and onshore in the Niger Delta. Chevron also has interests in deepwater projects, particularly its largest deepwater discovery Agbami.
Total and Eni are the fourth and fifth largest oil producers in the country, producing 179,000 bbl/d and 96,000 bbl/d in 2011, respectively. Total operates several offshore projects and one onshore, and is the operator of the Usan deepwater field that came online in July 2012. Total's smaller share of production has been unaffected in recent years, whereas Eni/Agip has experienced incidents, particularly at the Brass River terminal, that have shut in varying volumes of production since December 2006.
## Exports
In 2011, Nigeria exported approximately 2.2-2.3 million bbl/d of crude oil, according to an analysis of data from the Global Trade Atlas (GTA), APEX Tanker Data (Lloyd's Maritime Intelligence Unit), and OPEC. Crude production estimates for Nigeria are sometimes less than crude export estimates due to oil theft, particularly from the wellhead, which reduces the amount of official oil sales and makes it difficult to estimate production.
Nigeria is an important oil supplier to the United States. In 2011, 767,000 bbl/d (33 percent) of Nigeria's crude exports were sent to the United States, making Nigeria the fourth largest foreign oil supplier to the United States. Although Nigeria's high-quality light, sweet crude is a preferred gasoline feedstock, U.S. imports of Nigerian crude decreased both in volume and as a share of total imports in 2011, a trend that continued in 2012. According to an EIA article published earlier this year, although total crude imports into the United States are falling, imports from Nigeria have declined at a steeper rate. The main reasons underlying this trend are that some Gulf Coast refiners have reduced Nigerian imports in favor of domestically produced crude, and that two refineries on the U.S. East Coast, which were significant buyers of Nigerian crude, were idled in late 2011. As a result, Nigerian crude as a share of total U.S. imports fell to 5 percent in the first half of 2012, down from 10 and 11 percent in the first half of 2011 and 2010, respectively, according to EIA.
Despite shut-in production, Nigerian oil trade patterns have remained broadly stable over the past several years, largely owing to capacity additions and shifting world demand. Other major importers of Nigerian crude oil include Europe (28 percent), India (12 percent), Brazil (8 percent), Canada (5 percent), and South Africa (3 percent). According to the Energy Intelligence Group's International Crude Oil Market Handbook, Nigeria exports about 20 crude streams, most of them light, sweet grades with gravities ranging from 29 to 47 degrees API and low sulfur contents of 0.05 to 0.3 percent.
## Downstream
#### Refining
In 2011, Nigeria consumed approximately 286,000 bbl/d of petroleum, according to EIA estimates, of which about 180,000 bbl/d was gasoline, according to estimates from OPEC's Annual Statistical Bulletin. The country has four refineries (Port Harcourt I and II, Warri, and Kaduna) with a combined capacity of around 445,000 bbl/d, according to OGJ. None of these refineries has ever been fully operational, due to poor maintenance, theft, fire, and sabotage, mainly on the crude pipelines feeding the refineries. In 2009 and part of 2010, particularly low refinery runs forced the country to import about 85 percent of its fuel needs. In 2011, operational capacity at the refineries averaged 24 percent, slightly higher than the 22 percent of the previous year. Refinery utilization rates may improve in 2013 if planned turn-around maintenance is performed.
For several years, the government has planned the construction of new refineries, but the lack of financing has caused several delays. As part of the PIB energy sector reforms, the government also plans to liberalize domestic fuel prices and privatize the refining sector. In the meantime, according to Business Monitor International, NNPC has signed contracts to swap crude for products under yearly contracts with Trafigura, an oil trading company, and Ivory Coast's national refiner SIR.
#### Oil infrastructure
Nigeria has over a dozen domestic crude oil pipelines that funnel crude to export terminals and domestic refineries. The pipelines range in length from 31 miles to as long as 383 miles and run through mostly rural or swampy areas, making them difficult to police. Most of the pipeline systems are jointly owned by the major IOCs and NNPC, while the export terminals are operated by Shell (Forcados and Bonny terminals), ExxonMobil (Qua Iboe terminal), Chevron (Escravos and Pennington terminals), and Eni (Brass terminal). There are also several floating production, storage, and offloading (FPSO) vessels that facilitate exports from deepwater offshore fields.
#### Domestic fuel prices & subsidies
On January 1, 2012, the Nigerian government removed the federal government fuel subsidy on the grounds that it caused market distortions, encumbered investment in the downstream sector, supported economic inequalities (as rich fuel-importing companies were the main beneficiaries), and created a nebulous channel for fraud. However, the government quickly reversed course about two weeks later and reinstated a partial subsidy as public outcry and massive strikes organized by oil and non-oil unions threatened to shut down oil production completely. Many Nigerians consider the fuel subsidy a key benefit of living in the oil-rich country.
Prior to the subsidy removal, the pump price of fuel was 65 naira ($0.40) per liter, compared to an actual cost of around 139 naira per liter. According to the United Nations, the fuel subsidy costs the Nigerian government 1,200 billion naira ($7.6 billion) annually, or 2.6 percent of the country's GDP. Following the removal, the government restored a partial subsidy, requiring consumers at the pump to pay 97 naira per liter ($0.60), as opposed to the unsubsidized price of 141 naira per liter.
Controversy over Nigeria's fuel subsidy resurfaced in August 2012, as Nigerian oil marketing associations launched an open-ended strike over unpaid subsidies. The associations accused the government of stopping payments around March/April, though the government denies this claim. Tensions between Nigerian fuel importers and the government have been high since the government launched an investigation of the industry to mitigate subsidy mismanagement. The investigation has led to the arrest of about a dozen marketers and the suspension of seven companies accused of siphoning funds and inflating prices, according to IHS Cera.
Debate over the fuel subsidy will continue among government officials, oil marketing associations, unions, and citizens, especially since the most recent version of the PIB attempts to deregulate the downstream sector; however, it is unclear how the PIB will affect fuel subsidies. Meanwhile, under the 2012 budget, the government is expected to subsidize at least 104,000 bbl/d of fuel imports for the remainder of 2012, about 160,000-170,000 bbl/d short of what is needed. According to PFC Energy, the government overestimated fuel subsidy savings and underestimated subsidy arrears' claims in 2012, and may have to access the Excess Crude Account to avoid strong public discontent.
Map: Niger Delta Oil Infrastructure (Source: U.S. government)
# Natural gas
Nigerian LNG exports to the U.S. substantially declined in 2011, while the country's LNG exports to Japan more than tripled.
Nigeria had an estimated 180 trillion cubic feet (Tcf) of proven natural gas reserves as of the end of 2011, according to the OGJ, making Nigeria the ninth largest natural gas reserve holder in the world and the largest in Africa. Despite holding a top-10 position for proven natural gas reserves, Nigeria produced about 1 Tcf of dry natural gas in 2011 and ranked as the world's 25th largest natural gas producer. The majority of the natural gas reserves are located in the Niger Delta, and the sector is therefore affected by the same security and regulatory issues as the oil industry.
Most of Nigeria's marketed natural gas is exported as liquefied natural gas (LNG), with the remainder consumed domestically or exported regionally via the West African Gas Pipeline. Shell Nigeria Gas Limited (SNG), a Shell-owned gas sales and distribution company, also delivers compressed natural gas (CNG) to industries as far as 62 miles from existing pipelines.
Dry natural gas production grew for most of the last decade until Shell declared force majeure on gas supplies to the Soku gas-gathering and condensate plant in November 2008. Shell shut down the plant to repair damage to a connected pipeline that had been sabotaged by local groups siphoning condensate. The plant reopened nearly five months later but was shut down again for most of 2009 for operational reasons. The Soku plant provides nearly half of the feed gas to Nigeria's sole LNG facility; its closure therefore led to a reduction in Nigeria's natural gas production, particularly from Shell's fields in the Niger Delta, and a 33 percent decline in LNG exports in 2009. Gas production partially recovered in 2010 after the plant reopened.
## Sector organization
For the most part, the same national regulatory bodies and international oil companies (IOCs) involved in Nigeria's oil industry are also the actors involved in the gas industry. The Nigerian Gas Company Limited (NGC), a subsidiary of NNPC, is tasked with the marketing, transmission, and distribution of gas and oversees pipeline projects. The PIB proposes to divide the NGC into two organizations: the midstream National Gas Transportation Company and a downstream gas marketing company. As in the oil industry, NNPC holds interests in gas projects alongside international oil companies.
#### International oil companies
Shell dominates gas production in the country, as the Niger Delta, which contains most of Nigeria's gas resources, also houses most of Shell's hydrocarbon assets. Shell produced 707 MMcf/d of gas in 2011, and its latest gas project, the Gbaran-Ubie integrated oil and gas project, achieved peak gas production of 1 Bcf/d in early 2011. Gbaran-Ubie's gas is delivered to domestic power plants and to NLNG for export. The second largest gas producer, Total, produced 534 MMcf/d in 2011. Total, along with Eni, is developing the Brass LNG facility, which will comprise two trains with the capacity to process 5 million metric tons of LNG per year sometime after 2014. Eni was the third largest natural gas producer in Nigeria in 2011; the company's natural gas output grew by almost 40 percent over the last three years and reached 354 MMcf/d in 2011. Chevron produced 343 MMcf/d in 2011 and is majority owner of major gas projects in the country, such as the WAGP and the Escravos GTL plant.
## Gas flaring
Since Nigeria's oil fields lack the infrastructure to produce and market associated natural gas, much of it is flared. According to the National Oceanic and Atmospheric Administration (NOAA), Nigeria flared 536 Bcf of natural gas in 2010, or about a third of gross natural gas produced that year, according to NNPC.
In 2011, the NNPC claimed that flaring cost Nigeria US$2.5 billion per year in lost revenue.
The Nigerian government has been working to end natural gas flaring for several years, but the deadline to implement the policies and fine oil companies has been repeatedly postponed with the most recent deadline being December 2012, which appears unlikely to be enforced. In 2009, the Nigerian government developed a Gas Master Plan that promotes investment in pipeline infrastructure and new gas-fired power plants to help reduce gas flaring and provide much-needed electricity generation. However, progress is still limited as security risks in the Niger Delta have made it difficult for IOCs to construct infrastructure that would support gas monetization.
## Gas to liquids (GTL)
A Chevron-operated Escravos Gas to Liquids (GTL) project is currently underway. The project is a joint venture with NNPC and South Africa's Sasol and began in 2008. Escravos GTL has faced multiple delays and cost overruns, but is currently scheduled to be operational by 2013. The project will convert 325 million cubic feet of natural gas per day into 33,000 barrels of liquids, principally synthetic diesel, to supply clean-burning, low-sulfur diesel fuel for cars and trucks, according to Chevron.
## Exports
#### Liquefied natural gas (LNG)
A significant portion of Nigeria's marketed natural gas is processed into LNG. In 2010, Nigeria exported 17.97 million metric tons (875 Bcf) of LNG, making it the fifth largest LNG exporter in the world and the largest in the Atlantic Basin. Nigeria's LNG accounted for 8 percent of total world LNG supply and 30 percent of LNG coming from the Atlantic Basin in 2010. Although Nigeria's share of LNG trade within the Atlantic Basin has been increasing, mainly because of decreased LNG exports from Algeria, its share of the world market has decreased from the 10 percent it once held to 7 percent, as reported by Nigeria Liquefied Natural Gas (NLNG) Limited in 2012. Nigerian LNG exports rose to 18.86 million metric tons (918 Bcf) in 2011, but with no recent capacity increases and rising production from Qatar and Australia, Nigeria's world market share of LNG is slipping. Nigeria's LNG production capacity is currently 22 million metric tons per year, and no major increase is expected to come online before 2015.
In 2010, most of Nigeria's LNG was exported to Europe (67 percent), mainly Spain (31 percent), France (16 percent) and Portugal (12 percent), with smaller amounts to Turkey, United Kingdom, and Belgium. Other export destinations include Asia (15 percent) and North America (14 percent). The U.S. imported 0.86 million metric tons (42 Bcf) of Nigerian LNG in 2010, providing 1 percent of total U.S. LNG imports.
In 2011, U.S. imports of Nigerian LNG significantly decreased to 0.05 million metric tons (2.5 Bcf), according to EIA data, which is the lowest level recorded since Nigerian LNG exports began. In 2011, more of Nigeria's LNG exports were sent to Japan and other Asian countries due to higher demand for LNG imports in those countries. Most notably, Nigerian exports to Japan more than tripled in 2011, as Japan's LNG demand increased due to the Fukushima nuclear accident.
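The tonnage and volume figures quoted above imply a conversion factor of roughly 48.7 Bcf of gas per million metric tons of LNG. That factor is an approximation (the exact value depends on gas composition), but it reproduces the export figures cited in the text:

```python
# Approximate energy-industry conversion factor: the exact value
# varies with gas composition, but ~48.7 Bcf per million metric
# tons of LNG matches the figures quoted in this article.
BCF_PER_MMT = 48.7

def lng_mmt_to_bcf(million_tonnes):
    """Convert LNG mass (million metric tons) to gas volume (Bcf)."""
    return million_tonnes * BCF_PER_MMT

print(round(lng_mmt_to_bcf(17.97)))  # 2010 exports: ~875 Bcf
print(round(lng_mmt_to_bcf(18.86)))  # 2011 exports: ~918 Bcf
```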
The Nigeria Liquefied Natural Gas (NLNG) facility on Bonny Island is Nigeria's only LNG complex. NLNG partners, including NNPC (49 percent), Shell (25.6 percent), Total (15 percent), and Eni (10.4 percent), completed the first phase of the facility in September 1999. NLNG currently has six trains and a production capacity of 22 million metric tons per year (1.1 Tcf). A seventh train is under construction to increase the facility's capacity by 8 million metric tons per year. However, regulatory and political issues, particularly regarding the long-delayed PIB, have delayed the project's start date to beyond 2014.
Three additional LNG plants with a total of seven trains were expected to come online after 2012, but their expected start dates have been postponed beyond 2016. Plans include OK LNG (4 trains), Brass LNG (2 trains), and Progress LNG (1 train). These are in varying stages of development, and investment decisions will depend heavily on security, world LNG markets, and the final outcome of the PIB. Availability of natural gas for export will also depend on Nigerian efforts to expand the use of natural gas for domestic electricity generation – efforts that are included in both the Gas Master Plan and the PIB.
#### International pipelines
Nigeria began exporting some of its natural gas via the West African Gas Pipeline (WAGP) in 2011. The pipeline is operated by the West African Gas Pipeline Company Limited (WAPCo), which is owned by Chevron West African Gas Pipeline Limited (36.7%), Nigerian National Petroleum Corporation (25%), Shell Overseas Holdings Limited (18%), Takoradi Power Company Limited (16.3%), Societe Togolaise de Gaz (2%), and Societe BenGaz S.A. (2%).
The 420-mile pipeline carries natural gas from Nigeria's Escravos region to Togo, Benin, and Ghana. WAGP links into the existing Escravos-Lagos pipeline and moves offshore at an average water depth of 35 meters. According to WAPCo, roughly 85 percent of the gas is used for power generation and the remainder for industrial applications. Current recipients are the Volta River Authority's Takoradi Thermal Power Plant in Ghana and the Electricity Community of Benin (CEB), a company co-owned by Benin and Togo. Exports should eventually reach the initial capacity of 170 million cubic feet per day (MMcf/d), and plans are underway to expand capacity to as much as 460 MMcf/d and possibly extend the pipeline further west to Cote d'Ivoire.
As of early October 2012, the pipeline is shut down due to a loss of pressure around the Lome segment that occurred at the end of August 2012. WAPCo has said that repairs to the damaged pipeline are planned for completion at the end of December.
Nigeria and Algeria continue to discuss the possibility of constructing the Trans-Saharan Gas Pipeline (TSGP). The 2,500-mile pipeline would carry natural gas from oil fields in Nigeria's Delta region to Algeria's Beni Saf export terminal on the Mediterranean and is designed to supply gas to Europe. In 2009, the NNPC signed a memorandum of understanding (MoU) with Sonatrach, the Algerian national oil company, to proceed with plans to develop the pipeline. Several national and international companies have shown interest in the project, including Total and Gazprom. Security concerns along the entire pipeline route, increasing costs, and ongoing regulatory and political uncertainty in Nigeria have continued to delay this project.
# Electricity
Nigeria's electricity sector is relatively small. Brazil and Pakistan, two countries with similar population sizes, generate 24 times and 5 times more power than Nigeria, respectively. Bangladesh, a country slightly smaller in population and with a smaller gross domestic product (GDP) than Nigeria, produces nearly twice as much electricity as Nigeria. The latest EIA estimates show that Nigeria's net generation was 18.8 billion kilowatthours (kWh) in 2009. Installed electricity capacity has remained relatively stable over the last decade at 5.9 GW, although net generation has slightly decreased from its peak of 23 billion kWh in 2004, mainly due to a decline in hydroelectric power.
The majority of electricity generation comes from thermal power plants (77 percent), with about two-thirds of thermal power derived from natural gas and the rest from oil. Hydroelectricity (23 percent), the only other source of power generation, has decreased gradually from its peak of 8.2 billion kWh in 2002 to 4.5 billion kWh in 2009. Nigeria's net electricity consumption was 17.7 billion kWh in 2009, slightly less than generation; most of the remainder was exported to Niger through an agreement under the West African Power Pool.
According to a World Bank report, Nigeria experienced power outages on average 46 days per year in 2007-2008, with outages lasting almost 6 hours on average. Population growth coupled with underinvestment in the electricity sector has led to rising power demand without significant increases in capacity, a problem compounded by inadequate maintenance, insufficient feedstock, and an inadequate transmission network. Businesses often purchase costly generators for back-up during outages, and the majority of Nigerians use traditional biomass, such as wood, charcoal, and waste, to fulfill household energy needs such as cooking and heating.
Nigeria's electricity sector is divided into three sub-sectors: existing Federal Government of Nigeria (FGN) Power Generation facilities, Independent Power Projects (IPPs), and National Integrated Power Projects. The majority of power stations, both thermal and hydro, are FGN facilities funded by the government, while IPPs are backed by the private sector. The largest IPP and power plant in Nigeria is the 650 megawatt (MW) Afam VI Power Generating Plant owned by Shell. According to Shell, between 14-24 percent of overall generation contributes to the national grid.
The National Integrated Power Project (NIPP) is a plan launched by the Nigerian government to construct multiple new power plants and reduce gas flaring by using the gas as feedstock. However, plans to bring NIPPs online have repeatedly been delayed, as government efforts to privatize electricity generation and distribution companies have been slow. Two major challenges of privatization are unbundling the state-owned Power Holding Company of Nigeria, which was established to regulate pricing and competition, and publicly managing the expected 88 percent rise in electricity tariffs once privatization is underway. The Nigeria Electricity Regulatory Commission has also said that new tariffs will be imposed on selected cities.
Currently, some IPPs are under construction, such as ExxonMobil's 388-MW gas-fired plant in Bonny and ABB's 450-MW gas and steam turbine plant in Abuja. Nonetheless, according to the government, the country needs $10 billion of investment a year for at least a decade to meet its power sector needs. In addition to funds, Nigeria's ability to meet power demand relies heavily on its ability to reduce gas flaring, increase gas distribution infrastructure, and diversify power generation sources.
# Sources
• Afroil: Africa Oil and Gas Monitor (Newsbase)
• BBC News
• BP Statistical Review of World Energy, 2011
• Business Monitor International
• CIA World Factbook
• Chevron
• Economist
• Energy Intelligence Group
• Eni
• Eurasia Group
• ExxonMobil
• FACTS Global Energy
• Financial Times
• Global Trade Atlas
• IHS Global Insight
• International Energy Agency (IEA)
• International Maritime Organization
• International Monetary Fund
• Lloyd's APEX Database (Analysis of Petroleum Exports)
• New York Times
• Oil and Gas Journal
• OPEC Annual Statistical Bulletin
• Petroleum Africa
• Petroleum Economist
• Petroleum Intelligence Weekly
• Reuters
• Rigzone
• Shell
• Total
• World Bank
Disclaimer: This article is taken wholly from, or contains information that was originally published by, the Energy Information Administration. Topic editors and authors for the Encyclopedia of Earth may have edited its content or added new information. The use of information from the Energy Information Administration should not be construed as support for or endorsement by that organization for any new information added by EoE personnel, or for any editing of the original content.
### Citation
Energy Information Administration (2013). Energy profile of Nigeria. Retrieved from http://www.eoearth.org/view/article/152513
# Volcano Watch — Glowworm glows when Earth quakes
Release Date:
Seismometers located across the island detect earthquakes and radio the electronic signals back to HVO. The arrival of the signals at HVO might seem like the end of the story, but actually it only begins the procedure of acquiring and processing earthquake data, which is an exceptionally complicated process.
For the past five years, HVO has been running a seismic data acquisition and processing system called Earthworm. First conceived in 1993, Earthworm was developed by USGS scientists and continues to be improved upon by the USGS and many other cooperating institutions.
Recently, a suite of computer programs written primarily by scientists at the Alaska and Cascades Volcano Observatories has been installed at HVO. These programs run in conjunction with Earthworm, creating an integrated volcano monitoring tool called Glowworm.
Glowworm systems have been installed and currently monitor active volcanoes in Mexico, El Salvador, Costa Rica, Nicaragua, Colombia, Ecuador, Montserrat, Papua New Guinea, Saipan (for the Anatahan Volcano), the Cascade Range, Alaska, and now Hawai'i. The majority of these installations were accomplished by members of the USGS working with the Volcano Disaster Assistance Program (VDAP). VDAP was formed almost 20 years ago to mitigate the risk to increasing numbers of people living on or near active volcanoes, mainly in developing countries.
The power of Glowworm lies in its extensive use of graphical user interfaces (GUI), enabling many tasks to be monitored at once. In many VDAP crisis responses, only one Glowworm computer has been needed to acquire, process, and archive data from a small network of seismic stations. The GUI provides displays for monitoring the computer system and program performance, allows amplitude and spectral data to be displayed in real-time, sounds alarms when significant events occur, and provides some ability to archive the data.
In addition to its seismic data applications, Glowworm systems are being used for mudflow monitoring on the slopes of Mount Rainier in the Cascades. Plans are underway to use Glowworm to alert scientists about abnormal changes in ocean-water levels in Papua New Guinea that might indicate a local tsunami.
How does Glowworm actually help monitor Hawai'i's active volcanoes? Signals from roughly 60 seismic stations on Hawaii and Maui are radioed to HVO in real time. These signals are digitized and fed to an Earthworm module that continuously calculates RSAM, or Real-time Seismic Amplitude Measurement, for each station. RSAM is the calculated average amplitude of the ground motion over a given time interval. The higher the RSAM, the stronger the ground motion. During a volcanic crisis, RSAM data processed in Glowworm can instantly display changes in volcanic tremor and frequency of earthquakes.
Volcanic tremor often lasts a relatively long time but has low amplitude, so RSAM is calculated over a 10-minute window for tremor detection. To detect earthquakes, a second RSAM is calculated over a 2.56-second window and is tuned to see larger average amplitudes. A station trigger is declared if pre-assigned thresholds for amplitude and duration are exceeded; these thresholds differ for each station. If enough stations trigger, an alarm is automatically declared, and e-mail and pager notifications are sent to those required to respond.
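The dual-window RSAM alarm logic described above can be sketched in a few lines. The window lengths come from the text; the threshold handling and the station quorum below are simplified illustrative assumptions (HVO's actual system also checks event duration and uses a different threshold for each station):

```python
def rsam(signal, sample_rate, window_s):
    """Real-time Seismic Amplitude Measurement: the average absolute
    amplitude of the ground-motion signal over the given time window."""
    n = int(window_s * sample_rate)
    window = signal[-n:]  # most recent n samples
    return sum(abs(x) for x in window) / len(window)

def tremor_rsam(signal, sample_rate):
    """Tremor detection uses the long 10-minute (600-second) window."""
    return rsam(signal, sample_rate, 600)

def station_triggered(signal, sample_rate, threshold):
    """Earthquake detection uses the short 2.56-second window,
    tuned to see larger average amplitudes."""
    return rsam(signal, sample_rate, 2.56) > threshold

def network_alarm(signals, sample_rate, threshold, quorum):
    """Declare an alarm only when enough stations exceed the threshold."""
    hits = sum(station_triggered(s, sample_rate, threshold) for s in signals)
    return hits >= quorum
```

In practice `network_alarm` would be evaluated continuously as new samples arrive; requiring a multi-station quorum suppresses false alarms caused by local noise at a single station.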
Glowworm can also display SSAM data (Spectral Seismic Amplitude Measurement), which measures the distribution of seismic energy being produced over a broad frequency spectrum. This can be useful in determining the type of process that is causing a specific seismic event. SSAM also provides data that will be particularly useful in future research.
Glowworm is now operational and monitoring the Kīlauea summit network. Additional tuning and implementation are required to optimize the Glowworm software to meet HVO's specific needs. We will accomplish this in the coming months.
Hawaii's volcanoes are arguably the most active in the world, with a growing number of people living in their presence. New monitoring tools, such as Glowworm, bolster our monitoring capability and provide opportunities to better understand these active volcanoes.
### Volcano Activity Update
Eruptive activity at Puu Oo continues. Most lava flows have been at the lower end of the rootless shield complex along the Mother's Day lava tube south of Puu Oo. Such flows have been small and short-lived but are gradually advancing toward the top of Pulama pali. Vents within the crater of Puu Oo are incandescent and sometimes visible from Mountain View and Glenwood. No lava is visible from the Chain of Craters Road.
An earthquake of magnitude 3.3 was felt in Glenwood, Hilo, and Papaikou at 6:51 p.m. April 1. It was located 6 km (4 miles) south-southwest of Puu Oo at a depth of 9 km (6 miles). That was the only earthquake reported felt in the week ending on April 7.
Mauna Loa is not erupting. The summit region continues to inflate slowly. Seismic activity remains very low, with 2 earthquakes located in the summit area during the past week. |
Volume 301 - 35th International Cosmic Ray Conference (ICRC2017) - Session Cosmic-Ray Direct. CRD- theory
Bayesian inference on the galactic magnetic field toward the south galactic pole using UHECRs
J. Kim,* H.B. Kim, D. Ryu
*corresponding author
Pre-published on: 2017-08-16 15:26:07
Abstract
Thanks to the Pierre Auger Observatory (PAO), we have an unprecedented amount of ultra-high-energy cosmic ray (UHECR) data. Using these data, we study the influence of the galactic magnetic field (GMF) on the trajectories of UHECRs with energy above $6\times10^{19}$ eV. The GMF is not uniform, and its configuration is still uncertain. Moreover, most studies on the GMF have concerned the disk field. In this study, we focus on the GMF toward the south galactic pole (SGP), which lies entirely within the field of view of the PAO. We examine the effects of the GMF on the arrival direction of UHECRs by statistical tests of correlation with the large-scale structure of the universe. The deflection angle of UHECRs caused by the GMF is inferred through Bayesian inference with Monte Carlo simulations. We present the estimated strength of the GMF toward the SGP based on the deflection angle and discuss the implications of our results.
# Example: How to track a specific mode profile for variable structure dimensions
#1
Objective:
We have a silicon-on-insulator waveguide with a ridge width of 500 nm. We are interested in following a specific mode profile, found at the 500 nm ridge width, as we slowly sweep the width from 500 nm to 1 um.
The challenge is that the effective index and the mode number in the list of found modes will likely change as we change the structure dimensions. Therefore, to complete this objective, we will use mode overlap analysis to find out which mode from the current structure width has the best overlap with the mode from the previous run.
If we keep the sweep step small enough, this should be a fairly reliable strategy for following the mode of interest.
You can try this yourself using this simulation file and script:
Track_mode_profile_variable_structure_dimensions.lsf (1.4 KB)
Simple SOI waveguide.lms (277.6 KB)
In the example, we will follow mode #2 in the list:
Visual check of the mode profile development between 0.5um and 1um with 6 steps:
The recorded neff evolution as function of ridge width:
The script file content for quick review:
clear;
#dimension sweep
minWidth=0.5e-6;
maxWidth=1e-6;
stepNum=6;
dimSweep=linspace(minWidth,maxWidth,stepNum);
#mode number to start with in the first rounds at minimum width
firstMode=2;
#Do you want to visually check the selected modes after the sweep 1=yes 0=no
check=1;
#Initialize the neff vector where all effective indices from the sweep will be saved
neff=linspace(1,stepNum,stepNum);
for (i=1:stepNum) {
switchtolayout;
select("structure group");
set("ridgeWidth",dimSweep(i));
findmodes;
if (i==1) {
#get the e field profile of the original mode
copydcard("mode"+num2str(firstMode),"testMode");
neff(i)=getdata("mode"+num2str(firstMode),"neff");
if (check==1) {
E=getresult("FDE::data::mode"+num2str(firstMode),"E");
visualize(E);
}
} else {
#find the mode with best overlap and replace the previous dcard mode with it
bestMode=bestoverlap("testMode");
neff(i)=getdata(bestMode,"neff");
cleardcard("testMode");
copydcard(bestMode,"testMode");
if (check==1) {
E=getresult(bestMode,"E");
visualize(E);
}
} #if statement end
} #for loop end
#Purge deck
cleardcard("testMode");
#Plot the found indices (complex numbers)
plot(dimSweep*1e6,neff,"Ridge Width[um]","Effective Index"); |
## WeBWorK Main Forum
### Memory Usage & Problem Connecting with mySQL server
by Thomas Hagedorn -
Number of replies: 6
We've been getting the following error occasionally in some of our courses late at night. The server restarts at 5am and the errors then seem to be gone. We have about 500 users on a 4-processor, 2 GB system. We haven't had this problem before this semester, but we upgraded from 2.1 to 2.4 (latest stable version as of Jan. 1) in January. We ran a similar version (2.4.(x-1)) last semester on another server with more memory, but never saw this problem (again, we rebooted every morning). In the forum, I didn't see anyone having this problem with mySQL and the table "set_user". I looked at the answers to similar-sounding problems involving mySQL and the table "problem_user" but don't think they apply here (the values in global.conf are all set using single quotes, the values for max_user and wait_timeout seem reasonable, and the errors don't show up after a reboot).
Could this error simply be Webwork running out of memory?
-Tom
## WeBWorK error
An error occured while processing your request. For help, please send mail to this site's webmaster ([email protected]), including all of the following information as well as what what you were doing when the error occured.
Mon Feb 18 02:35:30 2008
### Error messages
error instantiating DB driver WeBWorK::DB::Driver::SQL for table set_user: DBI connect('webwork','webworkWrite',...) failed: Can't connect to local MySQL server through socket '/var/lib/mysql/mysql.sock' (2) at /local/webwork2/lib/WeBWorK/DB/Driver/SQL.pm line 65
at /local/webwork2/lib/WeBWorK.pm line 286
### Call stack
The information below can help locate the source of the problem.
• in Carp::croak called at line 208 of /local/webwork2/lib/WeBWorK/DB.pm
• in WeBWorK::DB::init_table called at line 200 of /local/webwork2/lib/WeBWorK/DB.pm
• in WeBWorK::DB::init_table called at line 169 of /local/webwork2/lib/WeBWorK/DB.pm
• in WeBWorK::DB::new called at line 286 of /local/webwork2/lib/WeBWorK.pm
### Request information
Method
GET
URI
/webwork2/Linear_Algebra_Conjura_S08/
User-Agent: Mozilla/4.0 (compatible; MSIE 7.0; Windows NT 6.0; MathPlayer 2.10b; Mozilla/4.0 (compatible; MSIE 6.0; Windows NT 5.1; SV1) ; SLCC1; .NET CLR 2.0.50727; Media Center PC 5.0; .NET CLR 3.0.04506)
Accept: */*
Connection: Keep-Alive
UA-CPU: x86
Referer:
Accept-Encoding: gzip, deflate
Cookie: __utma=258717071.118389005.1179681984.1201199102.1201290702.26; __utmz=258717071.1195560589.18.1.utmccn=(direct)|utmcsr=(direct)|utmcmd=(none)
Accept-Language: en-us
Host: webwork.tcnj.edu
### Re: Memory Usage & Problem Connecting with mySQL server
by Sam Hathaway -
Have you checked the MySQL error logs? Can you make sure MySQL is even running when this error occurs? Also check to see if it's accepting connections at all -- use `mysql` to connect and see if that works. I think one gets a different error if MySQL has reached its connection limit, and this message is pretty much just libmysql saying "I tried to open this socket and no one was listening".
### Re: Memory Usage & Problem Connecting with mySQL server
by Thomas Hagedorn -
Dear Sam,
Thanks for the reply. We had another crash last night and I was
able to inspect the system live. The error message was indeed
caused by mySQL not running. But it appears that memory is still
the problem as the system had run out of memory, and as part of
its automatic recovery, the OS starts killing off user processes
to try to recover. One of those processes it killed off was mySQL.
As soon as mySQL was restarted, the system was fine. We are putting
more memory in the system today and we'll see if that helps.
Thanks,
Tom
(Edited by Michael Gage - original submission Tuesday, 19 February 2008, 11:39 AM) -- to break long line
### Re: Memory Usage & Problem Connecting with mySQL server
by Michael Gage -
Hi Tom,
How many simultaneous clients do you allow? We keep the number
pretty low to keep too many WeBWorK processes from starting up. Figure
150Meg or more for each process. (I've seen large processes as well -- due to
some memory leaks in which case you want the processes to start more often.)
Here is part of our httpd.conf file for the apache server for reference:
Timeout 1200
KeepAlive On
#MaxKeepAliveRequests 100
MaxKeepAliveRequests 25
KeepAliveTimeout 10
StartServers 5
MinSpareServers 5
MaxSpareServers 10
#sam# Default is 150, but was previously set to 10. Trying 50 for now to
#sam# see if there'll be any problems.
#mike# changed back to 10 clients then inched back up to 12.
MaxClients 12
MaxRequestsPerChild 100
Hope this helps.
-- Mike
### Re: Memory Usage & Problem Connecting with mySQL server
by Thomas Hagedorn -
Hi Mike,
We have MaxClients = 150 and MaxRequestsPerChild = 0. I'm not sure what you mean by "simultaneous clients". From the 'ps aux' command data below, it seems we had 10 copies of apache running, each using about 9% of system memory. Does each of these count as a client? How few clients can we get away with?
-Tom
Here's the information from the 'ps aux' command:
wwserver 24950 0.4 8.1 160440 83532 ? S 18:00 1:08 /local/apache/bin/httpd
wwserver 24951 0.3 7.5 145856 77860 ? S 18:00 0:55 /local/apache/bin/httpd
wwserver 24952 0.4 7.7 169824 79052 ? S 18:00 1:02 /local/apache/bin/httpd
wwserver 24953 0.4 8.6 159736 88196 ? S 18:00 1:05 /local/apache/bin/httpd
wwserver 24954 0.4 9.5 170316 97520 ? S 18:00 1:03 /local/apache/bin/httpd
wwserver 24957 0.3 7.6 157696 78140 ? S 18:00 0:58 /local/apache/bin/httpd
wwserver 24960 0.3 9.4 165616 97188 ? S 18:00 0:58 /local/apache/bin/httpd
wwserver 24961 0.4 9.8 183220 100496 ? S 18:00 1:10 /local/apache/bin/httpd
wwserver 24962 0.3 8.6 164016 88884 ? S 18:00 0:57 /local/apache/bin/httpd
wwserver 24963 0.3 8.1 144380 83248 ? S 18:00 0:59 /local/apache/bin/httpd
### Re: Memory Usage & Problem Connecting with mySQL server
by Michael Gage -
10 processes -- each using 10% of memory is probably risky. Try setting
MaxClients to 8 and MaxRequestsPerChild to 100 (don't forget to restart the server)
See if that stops the memory problems. Because each WeBWorK process is pretty large you don't want to start too many of them or the system will spend a lot of time swapping them in and out of memory. Keeping the number of clients low keeps the number of processes low.
If you get more memory for the machine you can inch the number of MaxClients upward for faster service. |
nLab monoid axiom in a monoidal model category
Contents
Idea
The monoid axiom is an extra condition on a monoidal model category that helps to ensure that the model structure on monoids in it exists and is well behaved.
Definition
Definition
We say a monoidal model category satisfies the monoid axiom if every morphism that is obtained as a transfinite composition of pushouts of tensor products of acyclic cofibrations with any object is a weak equivalence.
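In symbols (a paraphrase of the definition just stated; here $W$ denotes the class of weak equivalences and $J_{ac}$ the class of acyclic cofibrations, notation chosen for this gloss):

$$ \big( \{ j \otimes \mathrm{id}_X \mid j \in J_{ac},\; X \in C \} \big)\text{-}cell \;\subseteq\; W \,, $$

where $(-)\text{-}cell$ denotes the class of transfinite compositions of pushouts of the given morphisms.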
Remark
In particular, the axiom in the definition above says that for every object $X$ the functor $X \otimes (-)$ sends acyclic cofibrations to weak equivalences.
Properties
Lemma
Let $C$ be a cofibrantly generated monoidal model category.
Then if the monoid axiom holds for the set of generating acyclic cofibrations, it holds for all acyclic cofibrations.
Theorem
If a monoidal model category satisfies the monoid axiom, is cofibrantly generated, and every object in it is small,
then the transferred model structure along the free-forgetful adjunction $(F \dashv U) : Mon(C) \stackrel{\overset{F}{\leftarrow}}{\underset{U}{\to}} C$ exists on its category of monoids and hence provides a model structure on monoids.
Examples
Proposition
Monoidal model categories that satisfy the monoid axiom (as well as the other conditions sufficient for the above theorem on the existence of transferred model structures on categories of monoids) include
the classical model structure on simplicial sets, with respect to Cartesian product;
the projective model structure on chain complexes, with respect to tensor product of chain complexes;
and model structures on structured spectra, with respect to a symmetric monoidal smash product of spectra.
Last revised on April 4, 2016 at 09:34:14.
Open access peer-reviewed chapter
# Potential of Cellulosic Ethanol to Overcome Energy Crisis in Pakistan
By Saima Mirza, Habib ur Rehman, Waqar Mahmood and Javed Iqbal Qazi
Submitted: May 26th 2016. Reviewed: October 25th 2016. Published: January 25th 2017
DOI: 10.5772/66534
## Abstract
The liquid biofuel industry in Pakistan may become a promising source for saving foreign exchange and the environment. Currently, bioethanol production depends on cane molasses, a product of the sugar industry. Harnessing more bioethanol from lignocellulosic waste crop residues has the potential to respond to the fuel scarcity. Lignocellulose exists in nature as a polymer, serves as the largest sink for fixed global carbon, and could be used both as a carbon source for microbial growth-assisted bioethanol production and as a substrate for producing enzymes, enabling more efficient simultaneous saccharification and fermentation and representing an important segment of the renewable energy sector. An exciting aspect of this research is the development of new biorefining techniques that facilitate the extraction of sugar-derived biofuel by processing waste crop residues with novel nature-inspired ligninolytic enzymes. Further research will explore more avenues for stabilizing the process parameters for optimum bioethanol yield from enzymatically hydrolyzed lignin waste streams. The chapter can be considered an anticipatory work and an exploration of new dimensions for the promotion of a nature-inspired, enzyme-assisted, lignocellulose-based bioethanol production industry, which maximizes sustainable development opportunities, especially in the energy sector.
### Keywords
• crop residue conversion into biofuel
• agriculture waste bioethanol
• enzymatic ethanol
• lignin biofuel
• sustainable ethanol
## 1. Introduction
There has been a universal consensus that greenhouse gases (GHG) such as methane (CH4), carbon dioxide (CO2) and nitrous oxide (N2O) are the main cause of global warming. This extreme apprehension forced many nations of the world to reach agreement on the Kyoto Protocol. Pakistan signed the Kyoto Protocol on Climate Change in 2004 and accredited the potential of scientific inventions as a possible way to control the emissions of GHG [1]. Transportation consumes approximately 27% of primary energy [2]. In EU25 countries, transportation consumes about 28% of the energy, and more than 80% of that is for road transport [3]. The current oil requirement is around 12 million tons a day, and forecasts indicate it may increase to about 16 million tons per day in 2030. About 30% of world oil production is used to fulfill the fuel requirements of transport. In the current energy mix in Pakistan, the share of petroleum products is about 40%. Its use has grown suddenly, mainly through fuel oil and gasoline [4]. The immediate focus of this chapter is to review the current developments in waste-crop-residue-based bioethanol production and the applications of nature-inspired enzymes for efficient production of this liquid fuel in Pakistan.
Pakistan is facing a severe energy crisis these days, which is resulting in many social and economic problems. A solution to this crisis might come from alternatives addressing cheap and eco-friendly fuel sources. This urgent requirement is likely to be met by biomass resources that have been largely ignored previously, while they are accessible in quantities sufficient to solve the energy crisis in the country. Agriculture has remained the basis of Pakistan's economy, as it provides employment to 45% of the population and provides feedstocks to agro-based industry. Clean energy supply is critical in agricultural areas in Pakistan, where biofuels are currently not an option because of the lack of cost-effective and efficient biofuel production technologies, although villagers depend on conventional diesel for powering agricultural machinery and gasoline for transporting agricultural goods from farm or market to end consumers. Fuel shortage has also led to a cut in electricity production. It is thus clear that the major limiting factor is energy, which creates barriers for developing economies. Careful estimates show that by 2050, Pakistan's energy needs will increase three times without a concurrent increase in supply. Pakistan plans to cut natural gas supply by around 4,247,527 m3/d to fertilizer plants and compressed natural gas (CNG) pumps to increase electricity supply to cities facing daily rolling blackouts. In 2012, Pakistan's natural gas supply had a deficit of about 15 billion m3/year, with an increasing tendency. Large biofuel production plants that can contribute significant amounts of sustainable fuel are the only solution to supplement the power shortage in the country. The rise in conventional fuel prices and their continuous depletion naturally create great demand for innovative biofuel production technologies as a clean energy solution in Pakistan.
The valuable progress starts with the beneficial use of the waste material and crop residues as feedstocks which otherwise represent environmental liabilities. The development in this sector will further provide opportunities to create multiple symbiotic partnerships among the farmers and the business community.
Ethanol is an important energy source that has huge potential to lessen the sole dependency on fossil-derived fuels and alleviate hazards to our environment. Additionally, it is an ideal precursor molecule because of its promising potential as fuel, beverage, feedstock, antiseptic and industrial solvent. Currently, it is replacing around 3% of the fossil-derived gasoline throughout the world; it is compatible with petroleum and recommended for transportation in both blended and pure forms. The consumption of ethanol is around 1.6 million tons, and consumption of fuel ethanol can be increased up to 160,000 tons by 10% blending of bioethanol in petrol. According to Chris Somerville, director of the Energy Biosciences Institute, USA, annual production of ethanol from corn is around 13–14 billion gallons in the United States, which is equal to 10% of gasoline use. In Brazil, 40% of liquid transportation fuel is bioethanol, and 15% of the nation's electricity is derived from it. Therefore, it is assumed that current technology is healthy enough to produce ethanol as an alternative fuel, at least for immediate partial replacement of oil. However, corn ethanol, which is already in use, has several drawbacks, as corn is a food crop. Furthermore, when the cost-to-benefit ratio regarding the equipment and processing involved in planting, harvesting and transporting corn ethanol is considered, it compares poorly with gasoline. Therefore, hardy, fast-growing plants like switchgrass, elephant grass and miscanthus are more favorable feedstock options. These grasses can grow up to 10 feet in height, thrive on marginal land and can survive even with little or no fertilizer [5]. Moreover, cellulose-rich waste material of the paper and other industries and waste crop residues can also be considered attractive options for cost-effective, or even zero-cost, bioethanol production using nature-inspired enzymatic processes.
## 2. Biomass: a cheap and sustainable biofuel resource
Pakistan is an agricultural country, and a huge capacity of biomass in the form of waste crop residues such as rice straw, wheat straw, cotton stalks, maize stalks and sugarcane tops is available for bioethanol production. Pakistan annually generates around 69 million tons of field-based crop residues. Field-based crop residues are generally considered useless, and it has been estimated that 50 million tons of residue/waste is produced every year from major crops (including 6.88 million tons of sugarcane bagasse). These are either burned in the fields and homes or buried in the land. Direct burning of this field-based leftover emits carbon dioxide and smoke, which are hazardous for health and a source of ozone, with risks to the atmosphere. Excluding domestic consumption and commercial usage, the net available resource potential of four crops (wheat, cotton, rice and corn) for biomass power generation is estimated to be about 10.942 million tons [6]. These estimations show that Pakistan is endowed with abundant biomass resources, which can be economically exploited for developing a sustainable biomass energy system, because the country has been perennially facing a power demand-supply gap, currently estimated at 4500–5500 MW [7]. The system is being maintained by resorting to load shedding, often extending up to 12–16 h. Pakistan has strategies to add 9700 MW of electricity generation capacity by 2030 as per the Medium Term Development Framework [8], which would partially take care of current fuel scarcities. In this context, power generation through biomass can also play an important role in bridging the overall demand-supply gap. It would be essential to expand and diversify the resource base, particularly in the context of continuous access to electricity in all regions of the country. A large number of industries in Pakistan are currently dependent on liquid fuels for meeting their captive demand for electricity and heat.
The situation is therefore ideally suited for promoting biomass-based liquid biofuel production as a sustainable and renewable alternative for the industrial sector as well. If only the field leftover crop residues are used for the production of bioethanol, even then a sufficient amount of bioethanol can be produced to cut oil imports and improve the profitability of the farming sector.
It is known that some developed countries like the United States and Brazil are the largest bioethanol producers, and ethanol production in these countries is achieved by fermentation of corn glucose [9]. The production of ethanol from molasses is not new, but some areas need to be researched for enhanced yield. Until now, in Pakistan, sugar mill distilleries have been operational for ethanol production using molasses; but in order to utilize molasses fully and get maximum benefit, it is important to increase the number of distillery units on one hand and ensure the involvement of efficient enzymes and engineered microbes on the other. Furthermore, mathematical simulations can be applied to explore efficient ways toward optimized yield without the intervention of any pilot plant [10, 11]. The major steps for large-scale microbial production of ethanol are fermentation of sugars, distillation and dehydration.
## 3. Microbial fermentation
Different ethanologenic microbes are known to have promising qualities such as limited growth requirements, genetic amenability, and high sugar and ethanol tolerance. Bioethanol production by two strains (mutant and wild) of the yeast Kluyveromyces marxianus has been documented [12]. The wild strain showed its maximum specific growth rate at 40°C, while the mutant showed maximum specific growth and ethanol formation rates at 45°C in fermentation of diluted molasses. The results of this study anticipated that large-scale production may also be economically feasible by employing these microbes. The yeast-assisted bioethanol production process is the more common and commercially applicable method in Pakistan [13]. Zymomonas mobilis is also attracting more attention for bioethanol production due to fewer process limitations [14, 15]. Different experimental studies in this regard revealed that optimum ethanol production of up to 55.8 g L−1 can be achieved by Zymomonas mobilis at 30°C after 48 h of retention time [16, 17].
Sugar beet, molasses and sugarcane juice are among the most vital and easily accessible raw materials for the fermentative production of alcohol. The increased cost of molasses has triggered many distilleries to search for alternative feedstocks for the production of bioethanol in Pakistan. In the starch industry, a by-product called enzose hydrol contains 5, 12, 56 and 5% of oligosaccharides, maltose, glucose and maltotriose, respectively, and is a cheap and good source of fermentable sugars. Most oligosaccharides and maltotriose are not completely consumed by ethanologenic microbes and therefore need pretreatment [18]. Similarly, bioconversion of cellulose into ethanol can be accomplished by various microbes as well as by some filamentous fungi, including Neurospora crassa [19, 20], Zymomonas [21], Trichoderma viride [22], Paecilomyces sp. [23], Zygosaccharomyces rouxii [24] and Aspergillus sp. [25], termites' gut enzymes, genetically engineered bacteria such as Escherichia coli [26] and thermophilic, anaerobic bacteria such as Clostridium thermocellum [27]. Thus, certain possible methods need to be designed for economical production of ethanol from agricultural farm residues by employing the most effective microbes [28]. Among such agro-based wastes, wheat straw is one of the most plentiful crop residues; it has been broadly studied and is abundantly available too [29].
Current investigations are focusing on pretreatment of hard biomass, that is, lignocellulosic sugarcane bagasse, rice straw and wheat straw, and subsequent production of ethanol from the pretreated biomass using ethanologenic microorganisms. Different fungal species have promising potential for breaking down lignin and may therefore be applied for efficient ethanol fermentation. The theoretical yield of ethanol is 0.511 g per gram of glucose consumed. Practically, this yield cannot be achieved because part of the fermentable glucose is consumed for cell maintenance and for synthesis of by-products like glycerol and lactic acid, and therefore it is not completely converted into ethanol. Nevertheless, at the manufacturing level, under ideal conditions, the yield remains 90–95% of the theoretical value [30]. Ethanol formation represents a specific loop of the general cellular metabolism; its general production route is shown in Figure 1 [31].
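The 0.511 g/g figure follows from the fermentation stoichiometry C6H12O6 → 2 C2H5OH + 2 CO2; a quick arithmetic check (molar masses computed from standard atomic weights):

```python
# Theoretical ethanol yield per gram of glucose from
#   C6H12O6 -> 2 C2H5OH + 2 CO2
M_glucose = 6 * 12.011 + 12 * 1.008 + 6 * 15.999   # ~180.16 g/mol
M_ethanol = 2 * 12.011 + 6 * 1.008 + 15.999        # ~46.07 g/mol

theoretical_yield = 2 * M_ethanol / M_glucose       # g ethanol per g glucose
print(round(theoretical_yield, 3))                  # 0.511

# At 90-95% of theoretical (the industrial range cited above):
print(round(0.90 * theoretical_yield, 3), round(0.95 * theoretical_yield, 3))  # 0.46 0.486
```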
## 4. Bioethanol production potential of industrial sector in Pakistan
Few industries in Pakistan are already involved in bioethanol production from by-products or industrial effluents, but it is necessary to develop nature-inspired bioethanol production on a large scale that may not only provide a solution to Pakistan's power shortages but can also be profitable enough to be viable in local conditions. Biomass like rice straw, sugarcane molasses, bagasse and wheat stubble are the chief lignocellulose feedstocks worldwide [32]. One of the largest available biomass sources is rice straw, of which about 7.31 × 10^14 g is produced per year in the world, with 90% of its annual global production coming from Asia [33]. Another abundantly available biomass, a by-product of sugarcane processing, is sugarcane bagasse, which represents an important source for fuel generation systems and ethanol production due to its easily accessible sugar content for fermentation [34].
The sugar industry is the biggest agro-industry in Pakistan after textiles and has been playing a key role in the production of ethanol. There are about 76 sugar mills in Pakistan which are already producing seasonal ethanol from around 2.5 million metric tons (MMT) of molasses. However, for an agricultural country, the best option is second-generation ethanol. For this, the complex lignin-cellulose-hemicellulose matrix of the biomass has to be broken, and the carbohydrate polymers need to undergo hydrolysis to yield fermentable sugars. The sugar industry is an important source of livelihood for farmers, 70% of whose population depends on it. The yield of sugar in Pakistan is about 85.95 kg per 100 kg of sugarcane. Molasses production from sugarcane is approximately 40 kg per ton of cane, from which ethanol production is approximately 10 L. There is a current production capacity of 270,000 tonnes per annum for fuel-grade ethanol in our country, which can readily be increased up to 400,000 tonnes per annum through additional units and feedstocks like waste crop residues [18, 35]. This molasses-to-bioethanol conversion is conducted in distilleries. Most of the distilleries are located on-site at sugar mills, which makes the production cycle an integrated one. The mills, after processing sugarcane, store the molasses in storage tanks on-site and then pass it on to the distilleries for bioethanol production. Simple molecular sieve technology is used for bioethanol production in most of these mills, which requires 1.5 million USD capital expenditure and can be completed in 5–6 months.
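Using the per-ton figures above (roughly 40 kg of molasses per ton of cane, and about 10 L of ethanol per 40 kg of molasses), a hypothetical mill's seasonal ethanol output can be estimated. The crush tonnage and season length below are assumptions for illustration, not figures for any specific mill:

```python
# Back-of-envelope ethanol output for a hypothetical sugarcane mill.
molasses_per_ton_cane_kg = 40      # ~40 kg molasses per ton of cane (from the text)
ethanol_per_40kg_molasses_l = 10   # ~10 L ethanol per 40 kg molasses (from the text)

# Assumption: 7,500 t/day crushed over a 120-day crushing season.
cane_crushed_tons = 7500 * 120

molasses_kg = cane_crushed_tons * molasses_per_ton_cane_kg
ethanol_l = molasses_kg / 40 * ethanol_per_40kg_molasses_l
print(ethanol_l)  # 9,000,000 L for the season under these assumptions
```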
The AL-Abbas sugar mill production plant is situated in the center of one of the huge sugarcane-growing areas of Sindh province, at Mirwah Gorchani. This area is also known to be the most fruitful for sugarcane cultivation in Pakistan, assuring a supply of a good crop of sugarcane throughout the entire season for the plant. The plant is linked to the national highway by a mile of metalled road and is also accessible by a web of other roads from different directions, which facilitates the transport of sugarcane from the plantation sites to the plant. The total crushing potential of this sugarcane crushing plant is about 7,500 metric tons per day. AL-Abbas sugar mill established the largest ethanol distillery plant in 1999. The plant design is equipped with highly advanced French technology using multieffect vacuum distillation. The ethanol production capacity of unit-I is approximately 87,500 L per day. The growing demand for ethanol has urged the management to set up unit-II with the same ethanol production capacity. The bioethanol-based power plant of AL-Abbas sugar mill has a 15 MW electricity production capacity.
Shakarganj sugar mill is located in Jhang, Pakistan. They are producing three different types of ethanol, that is, concentrated ethyl alcohol, denatured spirit and methylated spirit for industrial and alternate source of energy usage. The mill is exporting approximately 90% of its total ethyl alcohol and is a four-time award winner for the highest export of ethyl alcohol. The unit produces anhydrous alcohol employing eco-friendly dry dehydration technology. The denatured and methylated spirit is in high demand in local wood product and paint industries.
Another bioethanol-producing sugar mill is located in Nankana, Sheikhupura, Pakistan. The ethanol production capacity of this distillery is 125,000 L/day, and it also produces fuel-grade ethanol of 99.8% purity from the mill's molasses. A state-of-the-art distributed control system (DCS) is used, which promises not only increased steadiness but also better responsiveness of the plant. The distillery is built with fewer devices and less wiring. It can cut in half the costs of applying and sustaining control loops by incorporating the transmitter controllers into the process and by opting not to tie any critical loops back into the DCS. The distillery is equipped with ultramodern machinery and follows international standard operating procedures to produce high-quality products and meet the demands of end users.
Crystalline Chemical Industries (Pvt.) Ltd (CCI), located in Sargodha, Pakistan, also produces ethanol by fermentation of sugarcane molasses. This distillation unit exports about 90% of its ethanol output.
Habib sugar mills Ltd. has an industrial alcohol production capacity of up to 142,500 L/day. Pinnacle distilleries (Pvt.) Ltd. produces rectified spirit for potable applications, technical alcohol, and anhydrous ethanol (99.7% minimum) for manufacturing use. Fuel-grade alcohol is produced at up to 30,000 tons per year.
Almost all sugar industries in Pakistan produce bioethanol mainly from molasses containing a feasible level of fermentable sugars. Presently, the biomass fractions that can be economically converted into ethanol are sugar (sugarcane) and starch (e.g. corn). In the future there will be substantial industrial-scale progress in lignoethanol, where the hard part of the plant (cellulose) is converted into fermentable sugars and consequently into bioethanol. After microbial fermentation, the product is subjected to distillation and dehydration and is then condensed to improve quality and remove water and other impurities. However, because of the high energy input, this traditional process is being replaced with energy-saving processes (molecular sieves), mainly to avoid distillation completely during dehydration. In this process, pressurized ethanol vapors are passed through a bed of molecular sieve beads. The energy saving of this dehydration technology amounts to 3000 BTU/gallon (840 kJ/L) compared with azeotropic distillation.
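The quoted saving can be sanity-checked with a direct unit conversion (a small sketch; the BTU and US-gallon conversion factors are standard values, not taken from the source):

```python
# Convert the quoted dehydration energy saving of 3000 BTU/gallon into kJ/L.
JOULES_PER_BTU = 1055.06       # standard conversion factor
LITRES_PER_US_GALLON = 3.78541

saving_kj_per_litre = 3000 * JOULES_PER_BTU / LITRES_PER_US_GALLON / 1000
print(round(saving_kj_per_litre))  # 836 -- consistent with the ~840 kJ/L quoted
```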
If all raw sugarcane molasses were converted to bioethanol, it would have the potential to substitute 5–7% of gasoline consumption. This would be a very important contribution toward lessening the burdens on Pakistan's economy. The government of Pakistan should establish a policy to endorse the blending of ethanol into transportation fuels as soon as it becomes practicable [18]. With the production of bioethanol from Pakistan's own raw molasses, about 600 million in precious foreign exchange can be saved [36]. Besides this, other advantages of ethanol usage are good engine performance and better yields: it burns more efficiently, is more easily biodegradable, keeps the environment cleaner, and is consistent with the global focus on biofuel. This is no doubt an effective way to produce bioethanol from raw or waste material; however, involving a wider variety of waste biomass and crop residues would be even more promising for solving energy issues. The main factor in ethanol production is the lignocellulose content of the substrates, which is hydrolyzed by different hydrolyzing agents to provide fermentable glucose [37, 38]. Nature-inspired enzymes from wood fungus and termite may be used as an extra bonus alongside existing bioethanol production technologies, as they can convert the long chains of polysaccharides into monosaccharides. Different industries such as forestry, pulp and paper, agriculture and food processing, together with municipal solid waste (MSW) and animal wastes, are major producers of lignocellulosic waste materials [39, 40].
## 5. Present challenges for bioethanol production from lignocellulosic feedstocks
Currently, lignocellulolytic enzymes are derived from fungi, the gut of termites and certain bacteria [41]. Established bioethanol technology in Pakistan is a relatively low-tech approach that meets some needs by employing molasses and selected biomass. These feedstock limitations constrain both the process and its profitability. At the same time, farmers and agribusinesses cannot access recent technologies that could greatly expand the use of bioethanol to meet power demand in many applications.
The current energy scenario warrants research and development of biomass-based biofuel production systems. Biomass, due to its renewable nature and abundance, is becoming an increasingly attractive fuel source. Lignin, the second most abundant biopolymer on Earth and the main aromatic constituent of biomass, is highly recalcitrant to depolymerization. Lignin binds the hemicellulose and cellulose and creates an obstacle to penetration of any solution or enzyme into the lignocellulosic structure, which is the major structural component of all plants and can be depolymerized to fermentable sugars. Microbes enhance the conversion of lignin into fermentable sugars, but some hurdles must first be removed. The recalcitrant nature of lignin can be tackled with various biocatalysts owing to their non-hazardous and eco-friendly nature. Lignocellulolytic microorganisms such as fungi and some bacteria are therefore considered promising biomass degraders, especially for large-scale applications, because they secrete high yields of extracellular, synergistically acting enzymes into the environment. These enzymes can contribute significantly to the degradation of lignocellulosic material by converting long-chain polysaccharides into their 5- and 6-carbon sugar components [42, 43]. Although lignin resists attack by most microorganisms, basidiomycete white-rot fungi are able to degrade it efficiently [44, 45]. Fungi producing lignocellulolytic enzymes are widespread and include species from the ascomycetes (e.g. Trichoderma reesei) and basidiomycetes, such as white-rot (e.g. Phanerochaete chrysosporium) and brown-rot fungi (e.g. Fomitopsis palustris) [46, 47]. A few basidiomycetes, for example P. eryngii, P. chrysosporium and T. versicolor, can act as biocatalysts for ethanol production thanks to their potential for lignin degradation/depolymerization.
Ethanol fermentation requires highly concentrated sugar solutions; biocatalytic conversion of lignocellulosic material into a hydrolysate with a high sugar concentration is therefore an incentive for decreasing production cost. A variety of lignocellulosic materials (wheat straw, rice straw and rice husk) could thus be degraded by basidiomycetes and subjected to ethanologenic fermentation cost-effectively. Some strains of white-rot fungi have promising degradation potential through simultaneous attack on lignin, hemicellulose and cellulose, whereas a few can work selectively on lignin alone. It is pertinent to note that the synergistic biocatalytic ability of white-rot fungi would be the basis of an efficient depolymerization method and would help show that the heteropolymer lignin represents an untapped resource of renewable aromatic chemicals [48, 49]. Lignocellulosic biofuel production is not yet economically competitive with fossil fuels; successful utilization of all sugars is therefore important for improving the overall economy, especially in terms of the maximum theoretical yield. Xylose is one of the most abundant sugars in lignocellulosic hydrolysate. Overexpression of xylose isomerase will facilitate complete utilization of the xylose present in the hydrolysate, which otherwise remains to varying extents in the spent culture [18, 50]. Another concern regarding lignin depolymerization and its conversion into biofuels/bioethanol is the repolymerization of lignin-derived low-molecular-weight species into high-molecular-weight molecules that are not easily degraded by microbes. Repolymerization is observed to occur within a few hours after the onset of lignin valorization. For this purpose, organizing an effective microbial sink for the immediate utilization of low-molecular-weight species for bioethanol production is the most appealing option [51, 52].
To overcome this bottleneck, a microbial sink/consortium of different microbes with xylose-isomerase overexpression is a proposed strategy. Preventing repolymerization of low-molecular-weight lignin species into high-molecular-weight lignin compounds, and ensuring complete utilization and conversion of the available sugars into bioethanol, can make the bioprocess cost-effective. The description in this chapter should lead to technologies for efficient depolymerization of lignin and its simultaneous conversion into high-value, microbially assisted advanced biofuel, and it highlights the need for a road map for advanced biofuel production from waste crop residues. The involvement of nature-inspired enzymes is an effective way to enhance bioethanol production from biomass: the enzymes convert the long chains of polysaccharides into monosaccharides. Currently, the lignocellulolytic enzymes used for ethanol production from cellulosic biomass are obtained from fungi, the gut of termites and certain bacteria [40]. Present restrictions on the enzymatic breakdown of lignocellulose-based biomass are mostly due to concerns about enzyme stability and vulnerability to inhibitors or by-products [53, 54]. Continuous bioengineering efforts and prospecting should provide novel enzymes with lower susceptibility to inhibitors and relatively higher specific activity [55]. A few insects such as termites have very efficient approaches to breaking down lignocellulose-based substrates as a potential source of bioenergy [56]. In lower termites, cellulolytic activity normally depends on enzymes produced by endosymbiotic flagellated protists [57], while in higher termites the gut contains lignocellulolytic enzymes that act in combination with cellulases secreted by certain endosymbiotic gut bacteria [58, 59].
Hence, establishing a large-scale bioethanol production plant that treats waste crop residues with such novel enzymes will enhance production and can successfully support the deteriorating economy of Pakistan (Figure 2). The resulting cleaner environment is another benefit with monetary value that the government may be financially and ethically interested in. Such multiple benefits will attract different interested parties to replicate the process, each with resources and incentives to sustain and multiply it. Additionally, greenhouse gas emissions will be reduced by burning bioethanol, since the net CO2 emission is zero: the amount of CO2 emitted on burning equals the amount absorbed from the atmosphere through photosynthesis by the plants used for bioethanol production [60]. In Pakistan, domestic production of crude oil currently satisfies only about 25% of the country's consumption, and the remaining demand is met by importing fuels from abroad. This makes Pakistan's economy vulnerable to various social and economic issues; incorporation of biofuel/bioethanol would significantly reduce this burden on the country's economy.
The Economic Coordination Committee (ECC) of Pakistan's Federal Cabinet has decided to permit the marketing of E10 as a motor vehicle fuel on a trial basis. Anhydrous ethanol (less than 1% water content) can be blended with gasoline in different proportions, and many motor vehicles with gasoline engines operate well with a 10% ethanol blend (E10) in their fuel. The Government of Pakistan enacted a 15% duty on the export of molasses to favor its use for ethanol production rather than export [61]. The government should establish a policy to enforce the blending of ethanol into transportation fuels as soon as it becomes practicable [18]. Successful implementation of large-scale, waste-crop-residue-based bioethanol production will attract private-sector investment and company-farm partnerships to accelerate the development and commercialization of new bioenergy solutions, improving emerging economies and transforming the lives of at least small farmers. The concept is readily adoptable by different agricultural regions, as the essential feedstock supply is available in the form of agricultural residues that are sustainable and typically abundant locally.
## 6. Concluding remarks
To import conventional energy resources, Pakistan spends around US$7 billion, equivalent to 40% of total imports. Careful estimates show that by 2050 Pakistan's energy needs will have tripled, while the supply outlook is not very inspiring. In 2012, Pakistan's natural gas supply had a deficit of about 15 billion m³/year, with an increasing tendency. The rise in the natural gas price (0.51 $/m³) brings great potential to promote biomass-based biofuel production in Pakistan.
Pakistan, being an agricultural country, produces wheat, sugarcane and potatoes as some of its biggest crops [62-64]. Consequently, large amounts of wheat straw and sugarcane bagasse are obtained as waste by-products. Due to the high ambient temperature during most of the year and poor post-harvest processing and storage, thousands of tons of biomass are wasted each year. The country is endowed with suitable biomass and temperature optima for the successful cultivation of ethanologenic microbes, but it lacks a promising process for initiating sustainable strategies for waste-crop-residue-based bioethanol production that would also consume the starch, cellulosic and lignin loads of the corresponding effluents; such a process is expected to provide sustainable supplies of biofuels. Additionally, more than three billion acres worldwide that are unsuitable for agriculture because of dryness could be used for growing drought-hardy plants for biofuel production. The only disadvantage of these crops is that they contain lignocellulose, a hard plant material that needs more treatment than either corn or sugarcane to be converted into alcohol. The search for ways to make the overall process more efficient — by reusing materials, changing the fermenting agent and finding better, nature-inspired enzymes — will therefore be a milestone in this regard. Process development for the nature-inspired, enzyme-assisted conversion of agricultural and food waste into bioethanol usable as a clean biofuel is the demand of the time. Adoption of these new directions for bioethanol production will definitely reduce pressure on the energy and transportation sectors, reduce dependence on conventional fuels and help win the fight against climate change.
## How to cite and reference
### Cite this chapter
Saima Mirza, Habib ur Rehman, Waqar Mahmood and Javed Iqbal Qazi (January 25th 2017). Potential of Cellulosic Ethanol to Overcome Energy Crisis in Pakistan, Frontiers in Bioenergy and Biofuels, Eduardo Jacob-Lopes and Leila Queiroz Zepka, IntechOpen, DOI: 10.5772/66534. Available from:
# Math Help - Integration: Error Function
1. ## Integration: Error Function
Hello,
I don't have the strongest math foundation, so please excuse me if this is a dumb question.
I was integrating the following in Maple:
$\int_0^1 e^{-x^2} dx$
$\frac{1}{2} \ erf(1) \ \sqrt{\pi}$
I've read that the erf is the error function but I don't really understand what it is. How can I obtain a 'Real' value for this?
2. Originally Posted by JavaJunkie
Hello,
I don't have the strongest math foundation, so please excuse me if this is a dumb question.
I was integrating the following in Maple:
$\int_0^1 e^{-x^2} dx$
$\frac{1}{2} \ erf(1) \ \sqrt{\pi}$
I've read that the erf is the error function but I don't really understand what it is. How can I obtain a 'Real' value for this?
By a little snake-oil (i.e. you don't care about the proof of why it's true, right? If you do, let me know) we have that $\int_0^1e^{-x^2}dx=\sum_{n=0}^{\infty}\int_0^1 \frac{(-1)^nx^{2n}}{n!}dx=\sum_{n=0}^{\infty}\frac{(-1)^n}{(2n+1)n!}$
3. Well, I would like to know why its true even though I don't have to.
$\int_0^1e^{-x^2}dx=\sum_{n=0}^{\infty}\int_0^1 \frac{(-1)^nx^{2n}}{n!}dx=\sum_{n=0}^{\infty}\frac{(-1)^n}{(2n+1)n!}$ $\ \ \ \Longrightarrow$ that's a Fourier series, isn't it?
4. Originally Posted by JavaJunkie
Well, I would like to know why its true even though I don't have to.
$\int_0^1e^{-x^2}dx=\sum_{n=0}^{\infty}\int_0^1 \frac{(-1)^nx^{2n}}{n!}dx=\sum_{n=0}^{\infty}\frac{(-1)^n}{(2n+1)n!}$ $\ \ \ \Longrightarrow$ that's a Fourier series, isn't it?
Not quite! But good guess. It comes from a Maclaurin series. The difficult part is justifying the step $\int_0^1\sum_{n=0}^{\infty}\frac{(-1)^nx^{2n}}{n!}dx=\sum_{n=0}^{\infty}\int_0^1\frac{(-1)^n x^{2n}}{n!}dx$. The idea is that $\left|\frac{(-1)^nx^{2n}}{n!}\right|\leqslant\frac{1}{n!}$ on $[0,1]$, so by the Weierstrass M-test the series converges uniformly, and the interchange of the series and the integral is therefore justified.
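To see the series actually produce a "real" value, here is a quick sketch in Python (the helper name `erf_via_series` is mine; the standard library's `math.erf` is used only as a cross-check):

```python
import math

def erf_via_series(x, terms=20):
    """Maclaurin series: erf(x) = (2/sqrt(pi)) * sum_{n>=0} (-1)^n x^(2n+1) / ((2n+1) n!)."""
    s = sum((-1) ** n * x ** (2 * n + 1) / ((2 * n + 1) * math.factorial(n))
            for n in range(terms))
    return 2.0 / math.sqrt(math.pi) * s

# The integral from the thread, via the series derived above:
integral = sum((-1) ** n / ((2 * n + 1) * math.factorial(n)) for n in range(20))
print(integral)                                   # ~ 0.746824
print(0.5 * math.erf(1.0) * math.sqrt(math.pi))   # the same number from math.erf
```

Twenty terms is already far more than needed: the terms shrink like $1/((2n+1)n!)$, so the partial sums agree with `math.erf` to machine precision.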
## Print version ISSN 0100-4042On-line version ISSN 1678-7064
### Quím. Nova vol.40 no.5 São Paulo June 2017
#### https://doi.org/10.21577/0100-4042.20170029
Article
THE SYMMETRY BREAKING PHENOMENON IN 1,2,3-TRIOXOLENE AND C2Y3Z2 (Y = O, S, Se, Te; Z = H, F) COMPOUNDS: A PSEUDO JAHN-TELLER ORIGIN STUDY
1Department of Chemistry, Yazd Branch, Islamic Azad University, 8916871967 Yazd, Iran
ABSTRACT
1,2,3-Trioxolene (C2O3H2) is an intermediate in the acetylene ozonolysis reaction, called the primary ozonide. The symmetry breaking phenomenon was studied in C2O3H2 and six of its derivatives, in which the oxygen atoms of the molecule are substituted by sulphur, selenium or tellurium (C2Y3H2) and the hydrogen ligands are replaced with fluorine atoms (C2Y3F2). According to the calculation results, all seven C2Y3Z2 compounds considered in the series pucker from the unstable planar configuration with C2v symmetry to a stable geometry of Cs symmetry. The vibronic coupling interaction between the 1A1 ground state and the first excited state 1B1 via the (1A1+1B1)⊗b1 pseudo Jahn-Teller effect problem is the reason for the symmetry breaking phenomenon and the non-planarity of the C2Y3 ring in the C2Y3Z2 series.
Keywords: symmetry breaking in five-member rings; PJTE; 1,2,3-trioxolene derivatives; non-planarity in rings; vibronic coupling constant
INTRODUCTION
The reaction with ozone is well known in organic chemistry, and many unsaturated compounds participate in ozonolysis reactions.1-5 The ozonation mechanism was first suggested by Criegee,6 and based on his mechanism, two intermediates with five-membered ring structures were proposed for the ozonolysis of unsaturated compounds. These intermediate structures were confirmed by 17O-NMR spectroscopy7 (see Figure 1).
Several computational studies of the ozonolysis of unsaturated compounds have been carried out to rationalize the reaction mechanism.8-10 Additionally, the activation enthalpy of the cycloaddition reaction between ozone and acetylene was investigated with different calculation methods such as CCSD(T), CASPT2 and B3LYP-DFT with the 6-311+G(2d,2p) basis set.11 Moreover, thiozone adducts on single-walled carbon nanotubes, fullerene (C60) and graphene sheets have been investigated, including geometry optimization of minima and transition structures.12
Quantum-chemical simulations allow us to reveal the electronic states of heterocyclic systems — the ground and excited states — and their coupling. In all of the above experimental and theoretical studies, some important features of the structure and properties of the acetylene ozonolysis reaction and its intermediates have been analyzed, but less attention has been paid to the origin of their common features, which should be explained through the pseudo Jahn-Teller effect (PJTE). The PJTE includes excited states in the vibronic coupling interactions and is the only possible source of instability of planarity of cyclic systems in nondegenerate states. It is also a powerful tool to rationalize the symmetry breaking phenomenon in compounds with a symmetrical structure.13,14 Symmetry breaking through folding of rings in heterocyclic compounds has been reported in many studies, and the instability of the ground state in the planar configuration of those molecules and its coupling with excited states have been explained via the PJTE.15-27 Restoring planarity in systems puckered from their planar configurations due to symmetry breaking has also been investigated through the PJTE: planarity is restored either by coordinating two anions, cations, or rings above and below the nonplanar system28-30 or by influencing the parameters of the PJTE instability through the removal or addition of electrons;31 these studies show that the symmetry breaking phenomenon is suppressed in the folded systems. In other applications of the PJTE, the origin of puckering in tricyclic compounds has been traced to a pseudo Jahn-Teller (PJT) problem.32,33
Recently, buckling distortion in hexa-germabenzene and triazine-based graphitic carbon nitride sheets has been rationalized on the basis of PJT distortion.34,35 Further applications of the PJT theorem in chemistry include the structural transition from non-planar silabenzene structures to planar benzene-like structures,36 the investigation of chair-like puckering, binding energies, HOMO-LUMO gaps and polarizabilities in silicene clusters in the search for hydrogen storage materials,37 and the analysis of the origin of the instability of the cylindrical configuration of [6]cycloparaphenylene through the PJTE.38
COMPUTATION DETAILS
Optimization and frequency calculations of the seven C2Y3Z2 (Y = O, S, Se, Te; Z = H, F) derivatives in the planar configuration gave an imaginary frequency along the b1 normal coordinate, confirming that all C2Y3 five-membered rings in the C2Y3Z2 series are unstable in their planar configuration. The geometry optimizations and vibrational frequency calculations were carried out with the Molpro 2010 package,39 and state-averaged complete active space self-consistent field (SA-CASSCF) wavefunctions40-42 were employed to calculate the APES along the Qb1 puckering normal coordinates. The B3LYP method of density functional theory43 with the cc-pVTZ basis set44-46 was employed in all optimization, vibrational frequency and SA-CASSCF steps (except for C2Te3H2, for which the cc-pVTZ-pp basis set was used).
SYMMETRY BREAKING PHENOMENON IN THE C2Y3Z2 SERIES
The optimization and follow-up frequency calculations of the C2Y3Z2 series showed that the C2Y3 ring folds along the b1 nuclear displacement in all seven C2Y3Z2 compounds under consideration and that they are unstable in their C2v high-symmetry planar configuration. The symmetry breaking phenomenon therefore occurs throughout the C2Y3Z2 series, and all systems pucker to the lower Cs symmetry with fewer symmetry elements. Two side views of the unstable planar configuration with C2v symmetry and of the Cs-symmetry equilibrium geometry of the C2Y3Z2 series are illustrated in Figure 2.
The geometrical parameters (bond lengths, angles and dihedral angles for corresponding atom displacements in the planar and equilibrium configurations), the imaginary frequencies, and the normal mode displacements of non-planarity in the Cartesian X coordinate are presented in Table 1.
Table 1. Calculated structural parameters of the C2Y3Z2 (Y = O, S, Se, Te; Z = H, F) series in the planar and equilibrium configurations, normal modes of planar instability in Cartesian coordinate displacements X, and imaginary frequency values ("—" marks entries not applicable to the equilibrium geometries).

**C2O3Z2**

| Parameter | Planar (C2v), Z=H | Planar (C2v), Z=F | Equilibrium (Cs), Z=H | Equilibrium (Cs), Z=F |
|---|---|---|---|---|
| Y-Y (Å) | 1.43 | 1.43 | 1.44 | 1.44 |
| Y-C (Å) | 1.39 | 1.40 | 1.38 | 1.37 |
| C=C (Å) | 1.34 | 1.35 | 1.36 | 1.32 |
| C-Z (Å) | 1.08 | 1.25 | 1.07 | 1.29 |
| Y-Y-Y (deg) | 109.9 | 111.8 | 107.0 | 107.8 |
| Y-Y-C (deg) | 103.7 | 102.8 | 103.1 | 101.1 |
| Y-C=C (deg) | 111.2 | 111.3 | 111.6 | 112.2 |
| Z-C=C (deg) | 129.5 | 128.3 | 131.7 | 132.3 |
| Y-Y-Y-C (deg) | 0.0 | 0.0 | ±24.5 | ±24.0 |
| Y-Y-C=C (deg) | 0.0 | 0.0 | ±16.0 | ±15.5 |
| Y-Y-C-Z (deg) | 180 | 180 | ±168 | ±170 |
| Y-C=C-Y (deg) | 0.0 | 0.0 | 0.0 | 0.0 |
| Z-C=C-Z (deg) | 0.0 | 0.0 | 0.0 | 0.0 |
| Y-C-C-Z (deg) | 180 | 180 | ±175 | ±173 |
| Imaginary freq. b1 (cm-1) | 135.3 | 201.2 | — | — |
| Normal mode XY-C | +0.1551 | +0.1704 | — | — |
| Normal mode XC | -0.1269 | -0.1259 | — | — |
| Normal mode XZ | +0.0571 | +0.0244 | — | — |
| Normal mode XY | +0.1032 | +0.0188 | — | — |

**C2S3Z2**

| Parameter | Planar (C2v), Z=H | Planar (C2v), Z=F | Equilibrium (Cs), Z=H | Equilibrium (Cs), Z=F |
|---|---|---|---|---|
| Y-Y (Å) | 1.72 | 1.68 | 2.06 | 2.15 |
| Y-C (Å) | 1.52 | 1.52 | 1.71 | 1.78 |
| C=C (Å) | 1.35 | 1.36 | 1.39 | 1.33 |
| C-Z (Å) | 1.07 | 1.25 | 1.08 | 1.32 |
| Y-Y-Y (deg) | 101.4 | 103.2 | 98.9 | 99.6 |
| Y-Y-C (deg) | 107.0 | 104.4 | 99.5 | 94.6 |
| Y-C=C (deg) | 112.3 | 114.0 | 119.6 | 123.8 |
| Z-C=C (deg) | 126.8 | 127.3 | 127.1 | 122.1 |
| Y-Y-Y-C (deg) | 0.0 | 0.0 | ±13.8 | ±11.2 |
| Y-Y-C=C (deg) | 0.0 | 0.0 | ±9.7 | ±8.7 |
| Y-Y-C-Z (deg) | 180 | 180 | ±177 | ±176 |
| Y-C=C-Y (deg) | 0.0 | 0.0 | 0.0 | 0.0 |
| Z-C=C-Z (deg) | 0.0 | 0.0 | 0.0 | 0.0 |
| Y-C-C-Z (deg) | 180 | 180 | ±172 | ±174 |
| Imaginary freq. b1 (cm-1) | 304.7 | 150.1 | — | — |
| Normal mode XY-C | +0.0942 | +0.1096 | — | — |
| Normal mode XC | -0.0861 | -0.0907 | — | — |
| Normal mode XZ | +0.0966 | +0.0426 | — | — |
| Normal mode XY | +0.0885 | +0.0337 | — | — |

**C2Se3Z2**

| Parameter | Planar (C2v), Z=H | Planar (C2v), Z=F | Equilibrium (Cs), Z=H | Equilibrium (Cs), Z=F |
|---|---|---|---|---|
| Y-Y (Å) | 1.80 | 1.80 | 2.30 | 2.40 |
| Y-C (Å) | 1.74 | 1.75 | 1.84 | 1.90 |
| C=C (Å) | 1.36 | 1.37 | 1.40 | 1.39 |
| C-Z (Å) | 1.08 | 1.24 | 1.08 | 1.33 |
| Y-Y-Y (deg) | 106.9 | 107.3 | 95.4 | 96.9 |
| Y-Y-C (deg) | 106.9 | 103.7 | 100.2 | 94.6 |
| Y-C=C (deg) | 109.7 | 112.6 | 121.6 | 126.7 |
| Z-C=C (deg) | 126.5 | 127.9 | 122.4 | 120.9 |
| Y-Y-Y-C (deg) | 0.0 | 0.0 | ±8.8 | ±6.4 |
| Y-Y-C=C (deg) | 0.0 | 0.0 | ±6.7 | ±5.3 |
| Y-Y-C-Z (deg) | 180 | 180 | ±176 | ±177 |
| Y-C=C-Y (deg) | 0.0 | 0.0 | 0.0 | 0.0 |
| Z-C=C-Z (deg) | 0.0 | 0.0 | 0.0 | 0.0 |
| Y-C-C-Z (deg) | 180 | 180 | ±177 | ±177 |
| Imaginary freq. b1 (cm-1) | 179.3 | 74.8 | — | — |
| Normal mode XY-C | +0.0483 | +0.0645 | — | — |
| Normal mode XC | -0.0463 | -0.0544 | — | — |
| Normal mode XZ | +0.1304 | +0.0449 | — | — |
| Normal mode XY | +0.1834 | +0.0638 | — | — |

**C2Te3H2** (Z = H only)

| Parameter | Planar (C2v) | Equilibrium (Cs) |
|---|---|---|
| Y-Y (Å) | 2.02 | 2.70 |
| Y-C (Å) | 1.78 | 2.03 |
| C=C (Å) | 1.37 | 1.40 |
| C-Z (Å) | 1.08 | 1.09 |
| Y-Y-Y (deg) | 103.0 | 91.0 |
| Y-Y-C (deg) | 106.1 | 98.4 |
| Y-C=C (deg) | 112.4 | 125.8 |
| Z-C=C (deg) | 125.9 | 119.5 |
| Y-Y-Y-C (deg) | 0.0 | ±6.8 |
| Y-Y-C=C (deg) | 0.0 | ±5.6 |
| Y-Y-C-Z (deg) | 180 | ±175 |
| Y-C=C-Y (deg) | 0.0 | 0.0 |
| Z-C=C-Z (deg) | 0.0 | 0.0 |
| Y-C-C-Z (deg) | 180 | ±179 |
| Imaginary freq. b1 (cm-1) | 148.4 | — |
| Normal mode XY-C | +0.0300 | — |
| Normal mode XC | -0.0308 | — |
| Normal mode XZ | +0.1480 | — |
| Normal mode XY | +0.2399 | — |
Table 1 shows that, although the bond lengths and angles of the C2Y3Z2 (Z = H, F) compounds are almost similar in the planar and equilibrium configurations (except for parameters involving the Y atom), the dihedral angles differ considerably between the two configurations.
The Y-Y-Y-C and Y-Y-C=C dihedral angles are the most important parameters for characterizing the folding of the C2Y3 rings. The absolute value of the Y-Y-Y-C dihedral angle (which is 0.0 in the planar configuration) decreases in the equilibrium configuration from 24.5 degrees in C2O3H2 to 13.8 (C2S3H2), 8.8 (C2Se3H2) and 6.8 (C2Te3H2) degrees across the C2Y3H2 series.
Replacing the H ligands of the C2Y3H2 series with F ligands also decreases the Y-Y-Y-C, Y-Y-C=C, Y-Y-C-Z and Y-C-C-Z dihedral angle deviations of C2Y3F2 in the equilibrium configuration.
ACTIVE SPACE IN SA-CASSCF CALCULATION
Several active spaces were checked in the SA-CASSCF calculations and the results were compared. They revealed that ten electrons in eight active orbitals, composing the CAS(10,8) active space, are appropriate for the C2Y3Z2 series under consideration. Additionally, comparison of the CAS(10,8) results with the smaller CAS(4,3), CAS(6,6) and CAS(8,8) and the larger CAS(12,12) active spaces previously applied in similar PJTE origin studies22,24,27,31 confirmed that the CAS(10,8) active space is sufficiently good for the present study. Table 2 also shows that the eight orbitals 2a1, 2b1, 2b2 and 2a2 contribute to the electron excitations of the C2Y3Z2 series. The main electronic configuration calculation for all C2Y3Z2 compounds considered (see Table 3) showed that the b1'→a1' electron excitation occurs, proving that the (1A1+1B1)⊗b1 PJTE problem is the reason for the symmetry breaking phenomenon and the non-planarity of the C2Y3 rings in the series.
Table 2. Arrangement of HOMO and LUMO energy levels and their symmetries, and the electron excitation contributing to the (1A1+1B1)⊗b1 PJTE in the C2Y3Z2 series, from SA-CASSCF calculations with the (10,8) active space.

| MO | C2O3H2 | C2S3H2 | C2Se3H2 | C2Te3H2 | C2O3F2 | C2S3F2 | C2Se3F2 |
|---|---|---|---|---|---|---|---|
| a1 | HOMO-3 | HOMO-4 | HOMO-3 | HOMO-3 | HOMO-4 | HOMO-4 | HOMO-3 |
| a1' | LUMO+2 | LUMO+1 | LUMO+2 | LUMO+1 | LUMO+2 | LUMO+2 | LUMO+2 |
| a2 | HOMO | HOMO-1 | HOMO-4 | HOMO-4 | HOMO | HOMO-1 | HOMO-4 |
| a2' | LUMO | LUMO+2 | LUMO | LUMO+2 | LUMO+1 | LUMO | LUMO |
| b1 | HOMO-1 | HOMO | HOMO | HOMO-1 | HOMO-1 | HOMO | HOMO |
| b1' | HOMO-4 | HOMO-3 | HOMO-2 | HOMO-2 | HOMO-3 | HOMO-3 | HOMO-2 |
| b2 | HOMO-2 | HOMO-2 | HOMO-1 | HOMO | HOMO-2 | HOMO-2 | HOMO-1 |
| b2' | LUMO+1 | LUMO | LUMO+1 | LUMO | LUMO | LUMO+1 | LUMO+1 |
| Electron excitation b1'→a1' | HOMO-4 → LUMO+2 | HOMO-3 → LUMO+1 | HOMO-2 → LUMO+2 | HOMO-2 → LUMO+1 | HOMO-3 → LUMO+2 | HOMO-3 → LUMO+2 | HOMO-2 → LUMO+2 |
Table 3. Main electronic configurations and their weight coefficients in the wave-functions of the ground state (1A1) and the 1B1 excited state of the C2Y3Z2 series in the planar configuration (superscripts denote orbital occupations).

| State | Main electronic configuration | C2O3H2 | C2S3H2 | C2Se3H2 | C2Te3H2 | C2O3F2 | C2S3F2 | C2Se3F2 |
|---|---|---|---|---|---|---|---|---|
| A1 | a1^2 a2^2 b1^2 b1'^2 b2^2 | 0.9946 | 0.9930 | 0.9917 | 0.9922 | 0.9935 | 0.9944 | 0.9919 |
| | a1^1 a1'^-1 a2^2 b1^2 b2^2 b2'^2 | -0.0836 | -0.0901 | 0.0899 | -0.0850 | 0.0798 | 0.0935 | 0.0832 |
| | a1^-1 a1'^1 a2^2 b1^2 b2^2 b2'^2 | 0.0836 | 0.0901 | -0.0899 | 0.0850 | -0.0798 | -0.0935 | -0.0832 |
| | a1^2 a2^2 b1^2 b1'^2 b2^1 b2'^-1 | 0.0792 | 0.0801 | -0.0820 | 0.0802 | -0.0813 | -0.0788 | -0.0795 |
| | a1^2 a2^2 b1^2 b1'^2 b2^-1 b2'^1 | -0.0792 | -0.0801 | 0.0820 | -0.0802 | 0.0813 | 0.0788 | 0.0795 |
| | a1^2 a2^2 a2'^2 b1^2 b2^2 | -0.1035 | -0.1041 | -0.1080 | -0.1106 | -0.1091 | -0.1033 | -0.1076 |
| B1 | a1^2 a1'^1 a2^2 b1^2 b1'^-1 b2^2 | -0.7013 | -0.6988 | -0.7019 | -0.7023 | -0.7012 | -0.7009 | -0.7007 |
| | a1^2 a1'^-1 a2^2 b1^2 b1'^1 b2^2 | 0.7013 | 0.6988 | 0.7019 | 0.7023 | 0.7012 | 0.7009 | 0.7007 |
| | a1^2 a1'^1 a2^2 b1^2 b1'^-1 b2^-1 b2'^1 | 0.0628 | 0.0631 | 0.0645 | 0.0622 | 0.0625 | 0.0634 | 0.0601 |
| | a1^2 a1'^-1 a2^2 b1^2 b1'^1 b2^1 b2'^-1 | 0.0628 | 0.0631 | 0.0645 | 0.0622 | 0.0625 | 0.0634 | 0.0601 |
| | a1^2 a1'^1 a2^1 a2'^-1 b1^2 b1'^-1 b2^2 | -0.0611 | -0.0613 | -0.0603 | -0.0614 | -0.0596 | -0.0602 | -0.0606 |
| | a1^2 a1'^1 a2^-1 a2'^1 b1^2 b1'^-1 b2^2 | -0.0611 | -0.0613 | -0.0603 | -0.0614 | -0.0596 | -0.0602 | -0.0606 |
THE PJTE DUE TO PUCKERING
Let Γ denote the ground state and Γ' the first non-degenerate excited state, separated from it by an energy gap Δ, with |Γ⟩ and |Γ'⟩ denoting the wave-functions of these mixing states. According to the PJTE theorem,13 the ground state is unstable along the nuclear displacement direction Q if
$\Delta < \dfrac{F_{\Gamma\Gamma'}^{2}}{K_{0}}$ (1)
where K0 is the primary force constant of the ground state and F(ΓΓ') is the vibronic coupling constant. In terms of the Hamiltonian H and the electron-nuclear interaction operator V, the constants K0, K0' (the primary force constant of the excited state) and F(ΓΓ') are given by Equations (2) and (3):
$K_{0}=\left\langle \Gamma \left| \left(\dfrac{\partial^{2} H}{\partial Q^{2}}\right)_{0} \right| \Gamma \right\rangle, \qquad K_{0}'=\left\langle \Gamma' \left| \left(\dfrac{\partial^{2} H}{\partial Q^{2}}\right)_{0} \right| \Gamma' \right\rangle$ (2)
$F_{\Gamma\Gamma'}=\left\langle \Gamma \left| \left(\dfrac{\partial V}{\partial Q}\right)_{0} \right| \Gamma' \right\rangle$ (3)
From these definitions for the 2×2 case of two interacting states, it is clear that the PJTE essentially involves excited states — specifically the energy gap to them and the strength of their influence on the ground state through the vibronic coupling constant F(ΓΓ').
The vibronic coupling between the two states reduces the force constant of the ground state and increases that of the excited state by an amount (F(ΓΓ'))²/Δ, making the total force constants of the ground and excited states:14
K = K0 − F(ΓΓ')²/Δ,  K' = K0' + F(ΓΓ')²/Δ (4)
This leads to the instability condition of Equation (1), at which the curvature K of the adiabatic potential energy surface (APES) for the 1A1 ground state along the Qb1 direction becomes negative, while for an effective PJT interaction the curvature K' of the APES for the 1B1 excited state (which is the first excited state in all C2Y3Z2 compounds of the series) must remain positive.
Since a B1 excited state contributes to the instability of the A1 ground state in the planar configuration of the C2Y3Z2 series, the PJTE emerges directly from the secular equation for the vibronically coupled ground and excited states; the two-level PJTE problem is formulated as the 2×2 secular Equation (5) and the quadratic Equation (6).
| ½KQ² − ε          FQ |
| FQ      ½K'Q² + Δ − ε | = 0 (5)
ε² − [½(K + K')Q² + Δ]ε + ¼KK'Q⁴ + (Δ/2)KQ² − F²Q² = 0 (6)
In the above equations, Δ is the energy gap between the ground and excited states; for simplicity, Qb1 and F(ΓΓ') are abbreviated to Q and F. Solving Equation (6) around Q = 0 for the ground- and excited-state branches of the APES near the planar configuration gives the following solutions, Equation (7):
ε₁ = ½(K − 2F²/Δ)Q² − (F²/2Δ²)(K − K' − 2F²/Δ)Q⁴ + ...
ε₂ = ½(K' + 2F²/Δ)Q² + Δ + (F²/2Δ²)(K − K' − 2F²/Δ)Q⁴ + ... (7)
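The step from Equation (5) to Equation (7) can be made explicit. A brief derivation sketch (same symbols as above; a = ½KQ² and d = ½K'Q² + Δ are shorthands introduced here for the diagonal elements of the secular matrix):

```latex
% Exact roots of the 2x2 secular equation (5):
\varepsilon_{1,2} \;=\; \frac{a+d}{2} \mp \sqrt{\left(\frac{d-a}{2}\right)^{2} + F^{2}Q^{2}},
\qquad a = \tfrac{1}{2}KQ^{2},\quad d = \tfrac{1}{2}K'Q^{2} + \Delta .
% For small Q one has d - a \approx \Delta, so the square root expands as
\sqrt{\left(\tfrac{d-a}{2}\right)^{2} + F^{2}Q^{2}}
  \;\approx\; \frac{d-a}{2} + \frac{F^{2}Q^{2}}{\Delta} + O(Q^{4}),
% which yields the leading (curvature) term of Equation (7):
\varepsilon_{1} \;\approx\; \frac{1}{2}\left(K - \frac{2F^{2}}{\Delta}\right)Q^{2}.
```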
With respect to the ab initio calculations, it was found that the lowest excited state, of 1B1 symmetry, interacts with the ground state (of 1A1 symmetry). For all C2Y3Z2 compounds under study, the condition for PJTE ground-state instability, Eq. (1), is observable from the APES profiles of C2Y3H2 and C2Y3F2 (see Figures 3 and 4): the ground state of every C2Y3Z2 compound considered is unstable at Q = 0 of the APES (the planar configuration) along the b1 puckering direction. Additionally, in Figures 3 and 4 the numerical fits of the energies obtained from Equation (7) for the C2Y3Z2 (Y = O, S, Se, Te; Z = H, F) series are compared with their ab initio energy profiles; the first and second A' states in the puckered stable geometry with Cs symmetry are denoted AI' and AII', respectively.
As Figures 3 and 4 illustrate, instability of the planar configuration occurs in all C2Y3Z2 (Y = O, S, Se, Te; Z = H, F) compounds, and the origin of the PJTE in the series may be explained via the solution of the (1A1+1B1) ⊗ b1 PJTE problem. To explain this origin, the K, K' and F parameters were estimated by numerically fitting Equation (7) to the APES energy profiles along the Qb1 direction; the results are given in Table 4. Since the higher-order Q⁴ parameters are small in comparison with the Q² terms, the numerical fitting was carried out up to the second term of the series in Equation (7).
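As a numerical illustration of this logic (a minimal Python sketch, not part of the original computations; the constants below are illustrative values of the same order as the C2S3H2 row of Table 4), one can diagonalize the 2×2 matrix of Equation (5) and check that the curvature of the lower APES branch at Q = 0 equals K − 2F²/Δ, which here is negative, i.e., the planar configuration is unstable:

```python
import numpy as np

def apes_branches(Q, K, Kp, F, delta):
    # 2x2 vibronic matrix of Eq. (5): diagonal 0.5*K*Q^2 and
    # 0.5*Kp*Q^2 + delta, off-diagonal F*Q; return both eigenvalues.
    a = 0.5 * K * Q**2
    d = 0.5 * Kp * Q**2 + delta
    root = np.sqrt(((d - a) / 2.0)**2 + (F * Q)**2)
    mid = (a + d) / 2.0
    return mid - root, mid + root   # ground (eps1), excited (eps2)

# Illustrative constants of the order of Table 4 (eV, Angstrom units).
K, Kp, F, delta = 3.20, 1.45, 2.70, 2.84

# Numerical curvature of the lower branch at Q = 0 (central difference).
h = 1e-3
e_m, _ = apes_branches(-h, K, Kp, F, delta)
e_0, _ = apes_branches(0.0, K, Kp, F, delta)
e_p, _ = apes_branches(h, K, Kp, F, delta)
curvature = (e_p - 2 * e_0 + e_m) / h**2

print(curvature)               # ~ K - 2*F**2/delta
print(K - 2 * F**2 / delta)    # analytic curvature from Eq. (7)
print(curvature < 0)           # negative curvature: planar form unstable
```

Fitting this same model by least squares to ab initio energy profiles is what produces the K, K' and F estimates reported in Table 4.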
Table 4. Total force constants of the ground and excited states, K and K', PJTE coupling constants, F, and energy gaps, Δ, of the C2Y3Z2 (Y = O, S, Se, Te; Z = H, F) series for the (1A1+1B1) ⊗ b1 problem

Molecule    K (eV/Å²)   K' (eV/Å²)   F (eV/Å)   Δ (eV)
C2O3H2      0.82        1.47         1.03       4.35
C2S3H2      3.20        1.45         2.70       2.84
C2Se3H2     1.92        1.34         2.22       2.28
C2Te3H2     3.33        1.75         2.83       1.85
C2O3F2      1.93        1.43         2.28       3.50
C2S3F2      2.02        1.50         2.34       2.30
C2Se3F2     1.89        1.38         2.43       2.15
Using the estimated parameters and the energy gaps (Δ) of Table 4, together with the PJTE ground-state instability condition of Equation (1), the origin and magnitude of the PJTE instability in the C2Y3Z2 series were evaluated. In the C2Y3H2 series, the highest and lowest PJTE coupling constants belong to C2Te3H2 and C2O3H2, respectively, while in the C2Y3F2 series the F values increase from 2.28 eV/Å in C2O3F2 to 2.43 eV/Å in C2Se3F2.
CONCLUSIONS
An imaginary out-of-plane vibrational frequency was found for the planar configuration of the C2Y3Z2 (Y = O, S, Se, Te; Z = H, F) series through ab initio DFT optimization and subsequent frequency calculations. This provides important information about the symmetry-breaking phenomenon in all compounds under consideration. In other words, the compounds of the C2Y3Z2 series do not retain their planarity: driven by the PJTE, they distort from the unstable planar configuration with C2v symmetry to a stable geometry of Cs symmetry. The APES cross sections of the series reveal that the ground state (1A1) and the excited 1B1 state can interact vibronically along the b1 nuclear displacements, and that the (1A1+1B1) ⊗ b1 problem is the reason for the instability of the planar configuration throughout the C2Y3Z2 series. Estimation of the vibronic coupling constants, F, in the C2Y3H2 series shows that the most unstable planar configuration corresponds to the five-membered ring compound C2Te3H2. It is also clear that the planar stability of the systems rises with decreasing size of the Y atom in the series, except for C2Se3H2. Additionally, the puckering of the molecules in both C2S3F2 and C2Se3F2 was reduced by replacing the H atoms of the corresponding C2Y3H2 compounds with F ligands, although C2O3F2 shows the opposite behavior within the C2Y3F2 series.
REFERENCES
1 Komissarov, V. D.; Komissarova, I. N.; Russ. Chem. Bull. 1973, 22, 656.
2 Privett, O. S.; Nickell, E. C.; J. Lipid Res. 1963, 4, 208.
3 Rebrovic, L.; J. Am. Oil Chem. Soc. 1992, 69, 159.
4 Nickell, E. C.; Albi, M.; Privett, O. S.; Chem. Phys. Lipids 1976, 17, 378.
5 Lee, J. W.; Carrascon, V.; Gallimore, P. J.; Fuller, S. J.; Bjorkeqren, A.; Spring, D. R.; Pope, F. D.; Kalberer, M.; Phys. Chem. Chem. Phys. 2012, 14, 8023.
6 Criegee, R.; Angew. Chem., Int. Ed. 1975, 87, 745.
7 Geletneky, C.; Berger, S.; Eur. J. Org. Chem. 1998, 8, 1625.
8 Sun, Y.; Cao, H.; Han, D.; Li, J.; He, M.; Wang, C.; Chem. Phys. 2012, 402, 6.
9 Kuwata, K. T.; Templeton, K. L.; Hasson, A. S.; J. Phys. Chem. A 2003, 107, 11525.
10 Anglada, J. M.; Crehuet, R.; Bofill, J. M.; Chem. - Eur. J. 1999, 5, 1809.
11 Cremer, D.; Crehuet, R.; Anglada, J.; J. Am. Chem. Soc. 2001, 123, 6127.
12 Castillo, A.; Lee, L.; Greer, A.; J. Phys. Org. Chem. 2012, 25, 42.
13 Bersuker, I. B.; The Jahn−Teller Effect, Cambridge University Press: Cambridge, 2006.
14 Bersuker, I. B.; Chem. Rev. 2013, 113, 1351.
15 Bersuker, I. B.; Chem. Rev. 2001, 101, 1067.
16 Blancafort, L.; Bearpark, M. J.; Robb, M. A.; Mol. Phys. 2006, 104, 2007.
17 Kim, J. H.; Lee, Z.; Appl. Microscopy 2014, 44, 123.
18 Gromov, E. V.; Trofimov, A. B.; Vitkovskaya, N. M.; Schirmer, J.; Koppel, H.; J. Chem. Phys. 2003, 119, 737.
19 Soto, J. R.; Molina, B.; Castro, J. J.; Phys. Chem. Chem. Phys. 2015, 17, 7624.
20 Monajjemi, M.; Theor. Chem. Acc. 2015, 134, 1.
21 Liu, Y.; Bersuker, I. B.; Garcia-Fernandez, P.; Boggs, J. E.; J. Phys. Chem. A 2012, 116, 7564.
22 Hermoso, W.; Ilkhani, A. R.; Bersuker, I. B.; Comput. Theor. Chem. 2014, 1049, 109.
23 Liu, Y.; Bersuker, I. B.; Boggs, J. E.; Chem. Phys. 2013, 417, 26.
24 Ilkhani, A. R.; Hermoso, W.; Bersuker, I. B.; Chem. Phys. 2015, 460, 75.
25 Jose, D.; Datta, A.; Phys. Chem. Chem. Phys. 2011, 13, 7304.
26 Monajjemi, M.; Bagheri, S.; Moosavi, M. S.; Moradiyeh, N.; Zakeri, M.; Attarikhasraghi, N.; Saghayimarouf, N.; Niyatzadeh, G.; Shekarkhand, M.; Khalilimofrad, M. S.; Ahmadin, H.; Ahadi, M.; Molecules 2015, 20, 21636.
27 Ilkhani, A. R.; Monajjemi, M.; Comput. Theor. Chem. 2015, 1074, 19.
28 Jose, D.; Datta, A.; J. Phys. Chem. C 2012, 116, 24639.
29 Ilkhani, A. R.; J. Mol. Struc. 2015, 1098, 21.
30 Ivanov, A. S.; Bozhenko, K. V.; Boldyrev, A. I.; Inorg. Chem. 2012, 57, 8868.
31 Ilkhani, A. R.; Gorinchoy, N. N.; Bersuker, I. B.; Chem. Phys. 2015, 460, 106.
32 Pratik, S. M.; Chowdhury, C.; Bhattacharjee, R.; Jahiruddin, S.; Datta, A.; Chem. Phys. 2015, 460, 101.
33 Ivanov, A. S.; Miller, E.; Boldyrev, A. I.; Kameoka, Y.; Sato, T.; Tanaka, K.; J. Phys. Chem. C 2015, 119, 12008.
34 Ivanov, A. S.; Boldyrev, A. I.; J. Phys. Chem. A 2012, 116, 9591.
35 Nijamudheen, A.; Bhattacharjee, R.; Choudhury, S.; Datta, A.; J. Phys. Chem. C 2015, 119, 3802.
36 Pratik, S. M.; Datta, A.; J. Phys. Chem. C 2015, 119, 15770.
37 Jose, D.; Datta, A.; Phys. Chem. Chem. Phys. 2011, 13, 7304.
38 Kameoka, Y.; Sato, T.; Koyama, T.; Tanaka, K.; Kato, T.; Chem. Phys. Lett. 2014, 598, 69.
39 Werner, H. J.; Knowles, P. J.; Manby, F. R.; Schutz, M.; MOLPRO, version 2010.1, a package of ab initio programs. Available from: http://www.molpro.net. Accessed in February 2017.
40 Werner, H. J.; Meyer, W.; J. Chem. Phys. 1981, 74, 5794.
41 Werner, H. J.; Meyer, W.; J. Chem. Phys. 1980, 73, 2342.
42 Werner, H. J.; Knowles, P. J.; J. Chem. Phys. 1985, 82, 5053.
43 Polly, R.; Werner, H. J.; Manby, F. R.; Knowles, P. J.; Mol. Phys. 2004, 102, 2311.
44 Wilson, A. K.; Woon, D. E.; Peterson, K. A.; J. Chem. Phys. 1999, 110, 7667.
45 Dunning, T. H.; J. Chem. Phys. 1989, 90, 1007.
46 Woon, D. E.; Dunning, T. H.; J. Chem. Phys. 1993, 98, 1358.
Received: July 16, 2016; Accepted: January 23, 2017
This is an Open Access article distributed under the terms of the Creative Commons Attribution Non-Commercial License which permits unrestricted non-commercial use, distribution, and reproduction in any medium provided the original work is properly cited.
# Logarithmic Equation
## Logarithmic Equation
Transcendental equation in which the variable appears only as the argument of a logarithm.
### Example
The equation log₂x = 256 is a logarithmic equation, since the variable x appears as the argument of a logarithm.
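A quick check of this example in Python (the solution x = 2²⁵⁶ follows from applying the inverse of the logarithm):

```python
import math

# log2(x) = 256 is solved by exponentiating: x = 2**256.
x = 2 ** 256

# Substituting back confirms that x satisfies the equation.
print(math.log2(x))  # 256.0
```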
# American Institute of Mathematical Sciences
2015, 2015(special): 455-463. doi: 10.3934/proc.2015.0455
## Well-posedness for a class of nonlinear degenerate parabolic equations
1 Post Doc Istituto Nazionale di Alta Matematica (INdAM) "F. Severi", Dipartimento di Matematica, Università di Roma "Tor Vergata", I-00161 Roma, Italy
Received September 2014 Revised August 2015 Published November 2015
In this paper we obtain well-posedness for a class of semilinear weakly degenerate reaction-diffusion systems with Robin boundary conditions. This result is obtained through a Gagliardo-Nirenberg interpolation inequality and some embedding results for weighted Sobolev spaces.
Citation: Giuseppe Floridia. Well-posedness for a class of nonlinear degenerate parabolic equations. Conference Publications, 2015, 2015 (special) : 455-463. doi: 10.3934/proc.2015.0455
e04us is designed to minimize an arbitrary smooth sum of squares function subject to constraints (which may include simple bounds on the variables, linear constraints and smooth nonlinear constraints) using a sequential quadratic programming (SQP) method. As many first derivatives as possible should be supplied by you; any unspecified derivatives are approximated by finite differences. See the description of the optional parameter Derivative Level, in [Description of the Optional Parameters]. It is not intended for large sparse problems.
e04us may also be used for unconstrained, bound-constrained and linearly constrained optimization.
# Syntax
C#
```
public static void e04us(
int m,
int n,
int nclin,
int ncnln,
double[,] a,
double[] bl,
double[] bu,
double[] y,
E04.E04US_CONFUN confun,
E04.E04US_OBJFUN objfun,
out int iter,
int[] istate,
double[] c,
double[,] cjac,
double[] f,
double[,] fjac,
double[] clamda,
out double objf,
double[,] r,
double[] x,
E04.e04usOptions options,
out int ifail
)
```
Visual Basic
```
Public Shared Sub e04us ( _
m As Integer, _
n As Integer, _
nclin As Integer, _
ncnln As Integer, _
a As Double(,), _
bl As Double(), _
bu As Double(), _
y As Double(), _
confun As E04.E04US_CONFUN, _
objfun As E04.E04US_OBJFUN, _
<OutAttribute> ByRef iter As Integer, _
istate As Integer(), _
c As Double(), _
cjac As Double(,), _
f As Double(), _
fjac As Double(,), _
clamda As Double(), _
<OutAttribute> ByRef objf As Double, _
r As Double(,), _
x As Double(), _
options As E04.e04usOptions, _
<OutAttribute> ByRef ifail As Integer _
)
```
Visual C++
```
public:
static void e04us(
int m,
int n,
int nclin,
int ncnln,
array<double,2>^ a,
array<double>^ bl,
array<double>^ bu,
array<double>^ y,
E04::E04US_CONFUN^ confun,
E04::E04US_OBJFUN^ objfun,
[OutAttribute] int% iter,
array<int>^ istate,
array<double>^ c,
array<double,2>^ cjac,
array<double>^ f,
array<double,2>^ fjac,
array<double>^ clamda,
[OutAttribute] double% objf,
array<double,2>^ r,
array<double>^ x,
E04::e04usOptions^ options,
[OutAttribute] int% ifail
)
```
F#
```
static member e04us :
m : int *
n : int *
nclin : int *
ncnln : int *
a : float[,] *
bl : float[] *
bu : float[] *
y : float[] *
confun : E04.E04US_CONFUN *
objfun : E04.E04US_OBJFUN *
iter : int byref *
istate : int[] *
c : float[] *
cjac : float[,] *
f : float[] *
fjac : float[,] *
clamda : float[] *
objf : float byref *
r : float[,] *
x : float[] *
options : E04.e04usOptions *
ifail : int byref -> unit
```
#### Parameters
m
Type: System.Int32
On entry: $m$, the number of subfunctions associated with $F\left(x\right)$.
Constraint: ${\mathbf{m}}>0$.
n
Type: System.Int32
On entry: $n$, the number of variables.
Constraint: ${\mathbf{n}}>0$.
nclin
Type: System.Int32
On entry: ${n}_{L}$, the number of general linear constraints.
Constraint: ${\mathbf{nclin}}\ge 0$.
ncnln
Type: System.Int32
On entry: ${n}_{N}$, the number of nonlinear constraints.
Constraint: ${\mathbf{ncnln}}\ge 0$.
a
Type: array<System.Double,2>
An array of size [dim1, dim2]
Note: dim1 must satisfy the constraint: $\mathrm{dim1}\ge \mathrm{max}\phantom{\rule{0.125em}{0ex}}\left(1,{\mathbf{nclin}}\right)$
Note: the second dimension of the array a must be at least ${\mathbf{n}}$ if ${\mathbf{nclin}}>0$, and at least $1$ otherwise.
On entry: the $\mathit{i}$th row of a contains the $\mathit{i}$th row of the matrix ${A}_{L}$ of general linear constraints in (1). That is, the $\mathit{i}$th row contains the coefficients of the $\mathit{i}$th general linear constraint, for $\mathit{i}=1,2,\dots ,{\mathbf{nclin}}$.
If ${\mathbf{nclin}}=0$, the array a is not referenced.
bl
Type: array<System.Double>
An array of size [${\mathbf{n}}+{\mathbf{nclin}}+{\mathbf{ncnln}}$]
On entry: must contain the lower bounds and bu the upper bounds, for all the constraints in the following order. The first $n$ elements of each array must contain the bounds on the variables, the next ${n}_{L}$ elements the bounds for the general linear constraints (if any) and the next ${n}_{N}$ elements the bounds for the general nonlinear constraints (if any). To specify a nonexistent lower bound (i.e., ${l}_{j}=-\infty$), set ${\mathbf{bl}}\left[j-1\right]\le -\mathit{bigbnd}$, and to specify a nonexistent upper bound (i.e., ${u}_{j}=+\infty$), set ${\mathbf{bu}}\left[j-1\right]\ge \mathit{bigbnd}$; the default value of $\mathit{bigbnd}$ is ${10}^{20}$, but this may be changed by the optional parameter Infinite Bound Size. To specify the $j$th constraint as an equality, set ${\mathbf{bl}}\left[j-1\right]={\mathbf{bu}}\left[j-1\right]=\beta$, say, where $\left|\beta \right|<\mathit{bigbnd}$.
Constraints:
• ${\mathbf{bl}}\left[\mathit{j}-1\right]\le {\mathbf{bu}}\left[\mathit{j}-1\right]$, for $\mathit{j}=1,2,\dots ,{\mathbf{n}}+{\mathbf{nclin}}+{\mathbf{ncnln}}$;
• if ${\mathbf{bl}}\left[j-1\right]={\mathbf{bu}}\left[j-1\right]=\beta$, $\left|\beta \right|<\mathit{bigbnd}$.
bu
Type: array<System.Double>
An array of size [${\mathbf{n}}+{\mathbf{nclin}}+{\mathbf{ncnln}}$]
On entry: must contain the lower bounds and bu the upper bounds, for all the constraints in the following order. The first $n$ elements of each array must contain the bounds on the variables, the next ${n}_{L}$ elements the bounds for the general linear constraints (if any) and the next ${n}_{N}$ elements the bounds for the general nonlinear constraints (if any). To specify a nonexistent lower bound (i.e., ${l}_{j}=-\infty$), set ${\mathbf{bl}}\left[j-1\right]\le -\mathit{bigbnd}$, and to specify a nonexistent upper bound (i.e., ${u}_{j}=+\infty$), set ${\mathbf{bu}}\left[j-1\right]\ge \mathit{bigbnd}$; the default value of $\mathit{bigbnd}$ is ${10}^{20}$, but this may be changed by the optional parameter Infinite Bound Size. To specify the $j$th constraint as an equality, set ${\mathbf{bl}}\left[j-1\right]={\mathbf{bu}}\left[j-1\right]=\beta$, say, where $\left|\beta \right|<\mathit{bigbnd}$.
Constraints:
• ${\mathbf{bl}}\left[\mathit{j}-1\right]\le {\mathbf{bu}}\left[\mathit{j}-1\right]$, for $\mathit{j}=1,2,\dots ,{\mathbf{n}}+{\mathbf{nclin}}+{\mathbf{ncnln}}$;
• if ${\mathbf{bl}}\left[j-1\right]={\mathbf{bu}}\left[j-1\right]=\beta$, $\left|\beta \right|<\mathit{bigbnd}$.
y
Type: array<System.Double>
An array of size [m]
On entry: the coefficients of the constant vector $y$ of the objective function.
confun
Type: NagLibrary.E04.E04US_CONFUN
confun must calculate the vector $c\left(x\right)$ of nonlinear constraint functions and (optionally) its Jacobian ($\text{}=\frac{\partial c}{\partial x}$) for a specified $n$-element vector $x$. If there are no nonlinear constraints (i.e., ${\mathbf{ncnln}}=0$), confun will never be called by e04us and confun may be the dummy method E04UDM. (E04UDM is included in the NAG Library.) If there are nonlinear constraints, the first call to confun will occur before the first call to objfun.
A delegate of type E04US_CONFUN.
Note: confun should be tested separately before being used in conjunction with e04us. See also the description of the optional parameter Verify.
objfun
Type: NagLibrary.E04.E04US_OBJFUN
objfun must calculate either the $i$th element of the vector $f\left(x\right)={\left({f}_{1}\left(x\right),{f}_{2}\left(x\right),\dots ,{f}_{m}\left(x\right)\right)}^{\mathrm{T}}$ or all $m$ elements of $f\left(x\right)$ and (optionally) its Jacobian ($\text{}=\frac{\partial f}{\partial x}$) for a specified $n$-element vector $x$.
A delegate of type E04US_OBJFUN.
Note: objfun should be tested separately before being used in conjunction with e04us. See also the description of the optional parameter Verify.
iter
Type: System.Int32%
On exit: the number of major iterations performed.
istate
Type: array<System.Int32>
An array of size [${\mathbf{n}}+{\mathbf{nclin}}+{\mathbf{ncnln}}$]
On entry: need not be set if the (default) optional parameter Cold Start is used.
If the optional parameter Warm Start has been chosen, the elements of istate corresponding to the bounds and linear constraints define the initial working set for the procedure that finds a feasible point for the linear constraints and bounds. The active set at the conclusion of this procedure and the elements of istate corresponding to nonlinear constraints then define the initial working set for the first QP subproblem. More precisely, the first $n$ elements of istate refer to the upper and lower bounds on the variables, the next ${n}_{L}$ elements refer to the upper and lower bounds on ${A}_{L}x$, and the next ${n}_{N}$ elements refer to the upper and lower bounds on $c\left(x\right)$. Possible values for ${\mathbf{istate}}\left[j-1\right]$ are as follows:
Possible values of ${\mathbf{istate}}\left[j-1\right]$ and their meaning:
${\mathbf{istate}}\left[j-1\right]=0$: The corresponding constraint is not in the initial QP working set.
${\mathbf{istate}}\left[j-1\right]=1$: This inequality constraint should be in the working set at its lower bound.
${\mathbf{istate}}\left[j-1\right]=2$: This inequality constraint should be in the working set at its upper bound.
${\mathbf{istate}}\left[j-1\right]=3$: This equality constraint should be in the initial working set. This value must not be specified unless ${\mathbf{bl}}\left[j-1\right]={\mathbf{bu}}\left[j-1\right]$.
The values $-2$, $-1$ and $4$ are also acceptable but will be modified by the method. If e04us has been called previously with the same values of n, nclin and ncnln, istate already contains satisfactory information. (See also the description of the optional parameter Warm Start.) The method also adjusts (if necessary) the values supplied in x to be consistent with istate.
Constraint: $-2\le {\mathbf{istate}}\left[\mathit{j}-1\right]\le 4$, for $\mathit{j}=1,2,\dots ,{\mathbf{n}}+{\mathbf{nclin}}+{\mathbf{ncnln}}$.
On exit: the status of the constraints in the QP working set at the point returned in x. The significance of each possible value of ${\mathbf{istate}}\left[j-1\right]$ is as follows:
The significance of each possible value of ${\mathbf{istate}}\left[j-1\right]$ is as follows:
${\mathbf{istate}}\left[j-1\right]=-2$: This constraint violates its lower bound by more than the appropriate feasibility tolerance (see the optional parameters Linear Feasibility Tolerance and Nonlinear Feasibility Tolerance). This value can occur only when no feasible point can be found for a QP subproblem.
${\mathbf{istate}}\left[j-1\right]=-1$: This constraint violates its upper bound by more than the appropriate feasibility tolerance (see the optional parameters Linear Feasibility Tolerance and Nonlinear Feasibility Tolerance). This value can occur only when no feasible point can be found for a QP subproblem.
${\mathbf{istate}}\left[j-1\right]=0$: The constraint is satisfied to within the feasibility tolerance, but is not in the QP working set.
${\mathbf{istate}}\left[j-1\right]=1$: This inequality constraint is included in the QP working set at its lower bound.
${\mathbf{istate}}\left[j-1\right]=2$: This inequality constraint is included in the QP working set at its upper bound.
${\mathbf{istate}}\left[j-1\right]=3$: This constraint is included in the QP working set as an equality. This value of istate can occur only when ${\mathbf{bl}}\left[j-1\right]={\mathbf{bu}}\left[j-1\right]$.
c
Type: array<System.Double>
An array of size [$\mathrm{max}\phantom{\rule{0.125em}{0ex}}\left(1,{\mathbf{ncnln}}\right)$]
On exit: if ${\mathbf{ncnln}}>0$, ${\mathbf{c}}\left[\mathit{i}-1\right]$ contains the value of the $\mathit{i}$th nonlinear constraint function ${c}_{\mathit{i}}$ at the final iterate, for $\mathit{i}=1,2,\dots ,{\mathbf{ncnln}}$.
If ${\mathbf{ncnln}}=0$, the array c is not referenced.
cjac
Type: array<System.Double,2>
An array of size [dim1, dim2]
Note: dim1 must satisfy the constraint: $\mathrm{dim1}\ge \mathrm{max}\phantom{\rule{0.125em}{0ex}}\left(1,{\mathbf{ncnln}}\right)$
Note: the second dimension of the array cjac must be at least ${\mathbf{n}}$ if ${\mathbf{ncnln}}>0$, and at least $1$ otherwise.
On entry: in general, cjac need not be initialized before the call to e04us. However, if ${\mathbf{Derivative Level}}=3$, you may optionally set the constant elements of cjac (see parameter nstate in the description of confun). Such constant elements need not be re-assigned on subsequent calls to confun.
On exit: if ${\mathbf{ncnln}}>0$, cjac contains the Jacobian matrix of the nonlinear constraint functions at the final iterate, i.e., ${\mathbf{cjac}}\left[\mathit{i}-1,\mathit{j}-1\right]$ contains the partial derivative of the $\mathit{i}$th constraint function with respect to the $\mathit{j}$th variable, for $\mathit{i}=1,2,\dots ,{\mathbf{ncnln}}$ and $\mathit{j}=1,2,\dots ,{\mathbf{n}}$. (See the discussion of parameter cjac under confun.)
If ${\mathbf{ncnln}}=0$, the array cjac is not referenced.
f
Type: array<System.Double>
An array of size [m]
On exit: ${\mathbf{f}}\left[\mathit{i}-1\right]$ contains the value of the $\mathit{i}$th function ${f}_{\mathit{i}}$ at the final iterate, for $\mathit{i}=1,2,\dots ,{\mathbf{m}}$.
fjac
Type: array<System.Double,2>
An array of size [dim1, n]
Note: dim1 must satisfy the constraint: $\mathrm{dim1}\ge {\mathbf{m}}$
On entry: in general, fjac need not be initialized before the call to e04us. However, if ${\mathbf{Derivative Level}}=3$, you may optionally set the constant elements of fjac (see parameter nstate in the description of objfun). Such constant elements need not be re-assigned on subsequent calls to objfun.
On exit: the Jacobian matrix of the functions ${f}_{1},{f}_{2},\dots ,{f}_{m}$ at the final iterate, i.e., ${\mathbf{fjac}}\left[\mathit{i}-1,\mathit{j}-1\right]$ contains the partial derivative of the $\mathit{i}$th function with respect to the $\mathit{j}$th variable, for $\mathit{i}=1,2,\dots ,{\mathbf{m}}$ and $\mathit{j}=1,2,\dots ,{\mathbf{n}}$. (See also the discussion of parameter fjac under objfun.)
clamda
Type: array<System.Double>
An array of size [${\mathbf{n}}+{\mathbf{nclin}}+{\mathbf{ncnln}}$]
On entry: need not be set if the (default) optional parameter Cold Start is used.
If the optional parameter Warm Start has been chosen, ${\mathbf{clamda}}\left[\mathit{j}-1\right]$ must contain a multiplier estimate for each nonlinear constraint with a sign that matches the status of the constraint specified by the istate array, for $\mathit{j}={\mathbf{n}}+{\mathbf{nclin}}+1,\dots ,{\mathbf{n}}+{\mathbf{nclin}}+{\mathbf{ncnln}}$. The remaining elements need not be set. Note that if the $j$th constraint is defined as ‘inactive’ by the initial value of the istate array (i.e., ${\mathbf{istate}}\left[j-1\right]=0$), ${\mathbf{clamda}}\left[j-1\right]$ should be zero; if the $j$th constraint is an inequality active at its lower bound (i.e., ${\mathbf{istate}}\left[j-1\right]=1$), ${\mathbf{clamda}}\left[j-1\right]$ should be non-negative; if the $j$th constraint is an inequality active at its upper bound (i.e., ${\mathbf{istate}}\left[j-1\right]=2$), ${\mathbf{clamda}}\left[j-1\right]$ should be non-positive. If necessary, the method will modify clamda to match these rules.
On exit: the values of the QP multipliers from the last QP subproblem. ${\mathbf{clamda}}\left[j-1\right]$ should be non-negative if ${\mathbf{istate}}\left[j-1\right]=1$ and non-positive if ${\mathbf{istate}}\left[j-1\right]=2$.
objf
Type: System.Double%
On exit: the value of the objective function at the final iterate.
r
Type: array<System.Double,2>
An array of size [dim1, n]
Note: dim1 must satisfy the constraint: $\mathrm{dim1}\ge {\mathbf{n}}$
On entry: need not be initialized if the (default) optional parameter Cold Start is used.
If the optional parameter Warm Start has been chosen, r must contain the upper triangular Cholesky factor $R$ of the initial approximation of the Hessian of the Lagrangian function, with the variables in the natural order. Elements not in the upper triangular part of r are assumed to be zero and need not be assigned.
On exit: if ${\mathbf{Hessian}}='\mathrm{NO}'$, r contains the upper triangular Cholesky factor $R$ of ${Q}^{\mathrm{T}}\stackrel{~}{H}Q$, an estimate of the transformed and reordered Hessian of the Lagrangian at $x$ (see (6) in e04uf). If ${\mathbf{Hessian}}='\mathrm{YES}'$, r contains the upper triangular Cholesky factor $R$ of $H$, the approximate (untransformed) Hessian of the Lagrangian, with the variables in the natural order.
x
Type: array<System.Double>
An array of size [n]
On entry: an initial estimate of the solution.
On exit: the final estimate of the solution.
options
Type: NagLibrary.E04.e04usOptions
An Object of type E04.e04usOptions. Used to configure optional parameters to this method.
ifail
Type: System.Int32%
On exit: ${\mathbf{ifail}}={0}$ unless the method detects an error or a warning has been flagged (see [Error Indicators and Warnings]).
# Description
e04us is designed to solve the nonlinear least squares programming problem – the minimization of a smooth nonlinear sum of squares function subject to a set of constraints on the variables. The problem is assumed to be stated in the following form:
$\underset{x\in {\mathbb{R}}^{n}}{\mathrm{minimize}}\phantom{\rule{0.25em}{0ex}}F\left(x\right)=\frac{1}{2}\sum _{i=1}^{m}{\left({y}_{i}-{f}_{i}\left(x\right)\right)}^{2}\phantom{\rule{1em}{0ex}}\text{subject to}\phantom{\rule{1em}{0ex}}l\le \left(\begin{array}{c}x\\ {A}_{L}x\\ c\left(x\right)\end{array}\right)\le u,$ (1)
where $F\left(x\right)$ (the objective function) is a nonlinear function which can be represented as the sum of squares of $m$ subfunctions $\left({y}_{1}-{f}_{1}\left(x\right)\right),\left({y}_{2}-{f}_{2}\left(x\right)\right),\dots ,\left({y}_{m}-{f}_{m}\left(x\right)\right)$, the ${y}_{i}$ are constant, ${A}_{L}$ is an ${n}_{L}$ by $n$ constant matrix, and $c\left(x\right)$ is an ${n}_{N}$ element vector of nonlinear constraint functions. (The matrix ${A}_{L}$ and the vector $c\left(x\right)$ may be empty.) The objective function and the constraint functions are assumed to be smooth, i.e., at least twice-continuously differentiable. (The method of e04us will usually solve (1) if any isolated discontinuities are away from the solution.)
Note that although the bounds on the variables could be included in the definition of the linear constraints, we prefer to distinguish between them for reasons of computational efficiency. For the same reason, the linear constraints should not be included in the definition of the nonlinear constraints. Upper and lower bounds are specified for all the variables and for all the constraints. An equality constraint can be specified by setting ${l}_{i}={u}_{i}$. If certain bounds are not present, the associated elements of $l$ or $u$ can be set to special values that will be treated as $-\infty$ or $+\infty$. (See the description of the optional parameter Infinite Bound Size.)
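The bound conventions described above (an equality constraint via ${l}_{i}={u}_{i}$, and sentinel values treated as $-\infty$ or $+\infty$) can be sketched in plain Python. This is illustrative only, not the NAG interface; the `bigbnd` sentinel is a hypothetical stand-in for the Infinite Bound Size value:

```python
# Illustrative only: encoding l <= (x, A_L x, c(x)) <= u bound conventions.
# bigbnd is a hypothetical stand-in for the Infinite Bound Size sentinel.
bigbnd = 1.0e20

# Three variables: x1 free, x2 fixed at 2.5 (equality via l_i == u_i),
# x3 in [0, 10]; one linear constraint bounded below only.
bl = [-bigbnd, 2.5,  0.0, 1.0]     # lower bounds: variables then constraints
bu = [ bigbnd, 2.5, 10.0, bigbnd]  # upper bounds

def bound_kind(lo, hi, big=bigbnd):
    """Classify a bound pair the way the documentation describes."""
    if lo == hi:
        return "equality"          # l_i == u_i fixes the quantity
    if lo <= -big and hi >= big:
        return "free"              # treated as -infinity .. +infinity
    return "range"

kinds = [bound_kind(lo, hi) for lo, hi in zip(bl, bu)]
print(kinds)
```
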
You must supply an initial estimate of the solution to (1), together with methods that define $f\left(x\right)={\left({f}_{1}\left(x\right),{f}_{2}\left(x\right),\dots ,{f}_{m}\left(x\right)\right)}^{\mathrm{T}}$, $c\left(x\right)$ and as many first partial derivatives as possible; unspecified derivatives are approximated by finite differences.
The subfunctions are defined by the array y and objfun, and the nonlinear constraints are defined by confun. On every call, these methods must return appropriate values of $f\left(x\right)$ and $c\left(x\right)$. You should also provide the available partial derivatives. Any unspecified derivatives are approximated by finite differences; see the description of the optional parameter Derivative Level. Note that if there are any nonlinear constraints, then the first call to confun will precede the first call to objfun.
For maximum reliability, it is preferable for you to provide all partial derivatives (see Chapter 8 of Gill et al. (1981) for a detailed discussion). If all gradients cannot be provided, it is similarly advisable to provide as many as possible. While developing objfun and confun, the optional parameter Verify should be used to check the calculation of any known gradients.
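The kind of gradient check performed by the Verify option can be sketched generically (plain Python, not NAG's actual implementation): compare a user-coded analytic gradient against a central-difference approximation and flag large discrepancies.

```python
import math

# A generic sketch (not NAG's Verify procedure) of checking a user-supplied
# analytic gradient against a central-difference approximation.
def f(x):
    return math.exp(x[0]) + x[0] * x[1] ** 2

def grad_f(x):  # analytic gradient supplied by the "user"
    return [math.exp(x[0]) + x[1] ** 2, 2.0 * x[0] * x[1]]

def central_diff_grad(f, x, h=1e-6):
    g = []
    for i in range(len(x)):
        xp = list(x); xm = list(x)
        xp[i] += h; xm[i] -= h
        g.append((f(xp) - f(xm)) / (2.0 * h))
    return g

x = [0.3, -1.2]
num = central_diff_grad(f, x)
ana = grad_f(x)
max_err = max(abs(a - b) for a, b in zip(ana, num))
print(max_err)  # small if the analytic gradient is coded correctly
```

A coding error in `grad_f` would show up here as a discrepancy far larger than the $O({h}^{2})$ truncation error of the central difference.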
# References
Gill P E, Murray W and Wright M H (1981) Practical Optimization Academic Press
Hock W and Schittkowski K (1981) Test Examples for Nonlinear Programming Codes. Lecture Notes in Economics and Mathematical Systems 187 Springer–Verlag
# Error Indicators and Warnings
Note: e04us may return useful information for one or more of the following detected errors or warnings.
Errors or warnings detected by the method:
Some error messages may refer to parameters that are dropped from this interface (LDA, LDCJ, LDFJ, LDR). In these cases, an error in another parameter has usually caused an incorrect value to be inferred.
${\mathbf{ifail}}<0$
A negative value of ifail indicates an exit from e04us because you set ${\mathbf{mode}}<0$ in objfun or confun. The value of ifail will be the same as your setting of mode.
${\mathbf{ifail}}=1$
The final iterate $x$ satisfies the first-order Kuhn–Tucker conditions (see e04uf) to the accuracy requested, but the sequence of iterates has not yet converged. e04us was terminated because no further improvement could be made in the merit function (see [Description of the Printed Output]).
This value of ifail may occur in several circumstances. The most common situation is that you ask for a solution with accuracy that is not attainable with the given precision of the problem (as specified by the optional parameter Function Precision ($\text{default value}={\epsilon }^{0.9}$, where $\epsilon$ is the machine precision)). This condition will also occur if, by chance, an iterate is an ‘exact’ Kuhn–Tucker point, but the change in the variables was significant at the previous iteration. (This situation often happens when minimizing very simple functions, such as quadratics.)
If the four conditions listed in [Parameters] for ${\mathbf{ifail}}={0}$ are satisfied, $x$ is likely to be a solution of (1) even if ${\mathbf{ifail}}={1}$.
${\mathbf{ifail}}=2$
e04us has terminated without finding a feasible point for the linear constraints and bounds, which means that either no feasible point exists for the given value of the optional parameter Linear Feasibility Tolerance ($\text{default value}=\sqrt{\epsilon }$, where $\epsilon$ is the machine precision), or no feasible point could be found in the number of iterations specified by the optional parameter Minor Iteration Limit ($\text{default value}=\mathrm{max}\phantom{\rule{0.125em}{0ex}}\left(50,3\left(n+{n}_{L}+{n}_{N}\right)\right)$). You should check that there are no constraint redundancies. If the data for the constraints are accurate only to an absolute precision $\sigma$, you should ensure that the value of the optional parameter Linear Feasibility Tolerance is greater than $\sigma$. For example, if all elements of ${A}_{L}$ are of order unity and are accurate to only three decimal places, Linear Feasibility Tolerance should be at least ${10}^{-3}$.
${\mathbf{ifail}}=3$
No feasible point could be found for the nonlinear constraints. The problem may have no feasible solution. This means that there has been a sequence of QP subproblems for which no feasible point could be found (indicated by I at the end of each line of intermediate printout produced by the major iterations; see [Description of the Printed Output]). This behaviour will occur if there is no feasible point for the nonlinear constraints. (However, there is no general test that can determine whether a feasible point exists for a set of nonlinear constraints.) If the infeasible subproblems occur from the very first major iteration, it is highly likely that no feasible point exists. If infeasibilities occur when earlier subproblems have been feasible, small constraint inconsistencies may be present. You should check the validity of constraints with negative values of istate. If you are convinced that a feasible point does exist, e04us should be restarted at a different starting point.
${\mathbf{ifail}}=4$
The limiting number of iterations (as determined by the optional parameter Major Iteration Limit ($\text{default value}=\mathrm{max}\phantom{\rule{0.125em}{0ex}}\left(50,3\left(n+{n}_{L}\right)+10{n}_{N}\right)$)) has been reached.
If the algorithm appears to be making satisfactory progress, then Major Iteration Limit may be too small. If so, either increase its value and rerun e04us or, alternatively, rerun e04us using the optional parameter Warm Start. If the algorithm seems to be making little or no progress however, then you should check for incorrect gradients or ill-conditioning as described under ${\mathbf{ifail}}={6}$.
Note that ill-conditioning in the working set is sometimes resolved automatically by the algorithm, in which case performing additional iterations may be helpful. However, ill-conditioning in the Hessian approximation tends to persist once it has begun, so that allowing additional iterations without altering ${\mathbf{r}}$ is usually inadvisable. If the quasi-Newton update of the Hessian approximation was reset during the latter major iterations (i.e., an R occurs at the end of each line of intermediate printout; see [Description of the Printed Output]), it may be worthwhile to try a Warm Start at the final point as suggested above.
${\mathbf{ifail}}=5$
Not used by this method.
${\mathbf{ifail}}=6$
$x$ does not satisfy the first-order Kuhn–Tucker conditions (see e04uf), and no improved point for the merit function (see [Description of the Printed Output]) could be found during the final linesearch.
This sometimes occurs because an overly stringent accuracy has been requested, i.e., the value of the optional parameter Optimality Tolerance ($\text{default value}={\epsilon }_{r}^{0.8}$, where ${\epsilon }_{r}$ is the value of the optional parameter Function Precision ($\text{default value}={\epsilon }^{0.9}$, where $\epsilon$ is the machine precision)) is too small. In this case you should apply the four tests described under ${\mathbf{ifail}}={0}$ to determine whether or not the final solution is acceptable (see Gill et al. (1981), for a discussion of the attainable accuracy).
If many iterations have occurred in which essentially no progress has been made and e04us has failed completely to move from the initial point then user-supplied delegates objfun and/or confun may be incorrect. You should refer to comments under ${\mathbf{ifail}}={7}$ and check the gradients using the optional parameter Verify ($\text{default value}=0$). Unfortunately, there may be small errors in the objective and constraint gradients that cannot be detected by the verification process. Finite difference approximations to first derivatives are catastrophically affected by even small inaccuracies. An indication of this situation is a dramatic alteration in the iterates if the finite difference interval is altered. One might also suspect this type of error if a switch is made to central differences even when Norm Gz and Violtn (see [Description of the Printed Output]) are large.
Another possibility is that the search direction has become inaccurate because of ill-conditioning in the Hessian approximation or the matrix of constraints in the working set; either form of ill-conditioning tends to be reflected in large values of Mnr (the number of iterations required to solve each QP subproblem; see [Description of the Printed Output]).
If the condition estimate of the projected Hessian (Cond Hz; see [Description of Monitoring Information]) is extremely large, it may be worthwhile rerunning e04us from the final point with the optional parameter Warm Start. In this situation, istate and clamda should be left unaltered and ${\mathbf{r}}$ should be reset to the identity matrix.
If the matrix of constraints in the working set is ill-conditioned (i.e., Cond T is extremely large; see [Description of Monitoring Information]), it may be helpful to run e04us with a relaxed value of the optional parameter Feasibility Tolerance ($\text{default value}=\sqrt{\epsilon }$, where $\epsilon$ is the machine precision). (Constraint dependencies are often indicated by wide variations in size in the diagonal elements of the matrix $T$, whose diagonals will be printed if ${\mathbf{Major Print Level}}\ge 30$).
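For IEEE double precision arithmetic, the default tolerances quoted above are easy to evaluate; a minimal sketch (plain Python, assuming $\epsilon$ is the double precision machine epsilon):

```python
import sys

# The defaults quoted in this section are powers of the machine precision eps.
# For IEEE double precision, eps = 2^-52, roughly 2.2e-16.
eps = sys.float_info.epsilon

function_precision = eps ** 0.9                 # default Function Precision
optimality_tol     = function_precision ** 0.8  # default Optimality Tolerance (eps_r^0.8)
feasibility_tol    = eps ** 0.5                 # default (Linear) Feasibility Tolerance

print(function_precision, optimality_tol, feasibility_tol)
```

This makes concrete why requesting an Optimality Tolerance much below ${\epsilon }_{r}^{0.8}$ asks for accuracy that is not attainable at the given Function Precision.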
${\mathbf{ifail}}=7$
The user-supplied derivatives of the subfunctions and/or nonlinear constraints appear to be incorrect.
Large errors were found in the derivatives of the subfunctions and/or nonlinear constraints. This value of ifail will occur if the verification process indicated that at least one Jacobian element had no correct figures. You should refer to the printed output to determine which elements are suspected to be in error.
As a first-step, you should check that the code for the subfunction and constraint values is correct – for example, by computing the subfunctions at a point where the correct value of $F\left(x\right)$ is known. However, care should be taken that the chosen point fully tests the evaluation of the subfunctions. It is remarkable how often the values $x=0$ or $x=1$ are used to test function evaluation procedures, and how often the special properties of these numbers make the test meaningless.
Special care should be used in this test if computation of the subfunctions involves subsidiary data communicated in storage. Although the first evaluation of the subfunctions may be correct, subsequent calculations may be in error because some of the subsidiary data has accidentally been overwritten.
Gradient checking will be ineffective if the objective function uses information computed by the constraints, since they are not necessarily computed before each function evaluation.
Errors in programming the subfunctions may be quite subtle in that the subfunction values are ‘almost’ correct. For example, a subfunction may not be accurate to full precision because of the inaccurate calculation of a subsidiary quantity, or the limited accuracy of data upon which the subfunction depends. A common error on machines where numerical calculations are usually performed in double precision is to include even one single precision constant in the calculation of the subfunction; since some compilers do not convert such constants to double precision, half the correct figures may be lost by such a seemingly trivial error.
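The single precision constant pitfall can be reproduced outside a compiled language by rounding a constant through IEEE single precision; a sketch (plain Python, emulating the effect rather than any particular compiler's behaviour):

```python
import struct

# Emulating the "one single precision constant" pitfall: rounding the
# constant 0.1 through IEEE single precision before using it in an
# otherwise double precision computation loses about half the figures.
def as_single(x):
    """Round a Python float (double) through IEEE single precision."""
    return struct.unpack('f', struct.pack('f', x))[0]

c_double = 0.1
c_single = as_single(0.1)

abs_err = abs(c_double - c_single)
print(c_single, abs_err)  # only about eight significant figures survive
```
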
${\mathbf{ifail}}=8$
Not used by this method.
${\mathbf{ifail}}=9$
An input parameter is invalid.
$\mathbf{\text{overflow}}$
If overflow occurs then either an element of $C$ is very large, or the singular values or singular vectors have been incorrectly supplied.
${\mathbf{ifail}}=-9000$
An error occured, see message report.
${\mathbf{ifail}}=-6000$
Invalid Parameters $〈\mathit{\text{value}}〉$
${\mathbf{ifail}}=-4000$
Invalid dimension for array $〈\mathit{\text{value}}〉$
${\mathbf{ifail}}=-8000$
Negative dimension for array $〈\mathit{\text{value}}〉$
# Accuracy
If ${\mathbf{ifail}}={0}$ on exit, then the vector returned in the array x is an estimate of the solution to an accuracy of approximately Optimality Tolerance ($\text{default value}={\epsilon }^{0.8}$, where $\epsilon$ is the machine precision).
# Description of the Printed Output
This section describes the intermediate printout and final printout produced by e04us. The intermediate printout is a subset of the monitoring information produced by the method at every iteration (see [Description of Monitoring Information]). You can control the level of printed output (see the description of the optional parameter Major Print Level). Note that the intermediate printout and final printout are produced only if ${\mathbf{Major Print Level}}\ge 10$ (the default for e04us).
The following line of summary output ($\text{}<80$ characters) is produced at every major iteration. In all cases, the values of the quantities printed are those in effect on completion of the given iteration.
Maj is the major iteration count.
Mnr is the number of minor iterations required by the feasibility and optimality phases of the QP subproblem. Generally, Mnr will be $1$ in the later iterations, since theoretical analysis predicts that the correct active set will be identified near the solution (see [Algorithmic Details] in e04uf). Note that Mnr may be greater than the optional parameter Minor Iteration Limit if some iterations are required for the feasibility phase.
Step is the step ${\alpha }_{k}$ taken along the computed search direction. On reasonably well-behaved problems, the unit step (i.e., ${\alpha }_{k}=1$) will be taken as the solution is approached.
Merit Function is the value of the augmented Lagrangian merit function (see (12) in e04uf) at the current iterate. This function will decrease at each iteration unless it was necessary to increase the penalty parameters (see e04uf). As the solution is approached, Merit Function will converge to the value of the objective function at the solution. If the QP subproblem does not have a feasible point (signified by I at the end of the current output line) then the merit function is a large multiple of the constraint violations, weighted by the penalty parameters. During a sequence of major iterations with infeasible subproblems, the sequence of Merit Function values will decrease monotonically until either a feasible subproblem is obtained or e04us terminates with ${\mathbf{ifail}}={3}$ (no feasible point could be found for the nonlinear constraints). If there are no nonlinear constraints present (i.e., ${\mathbf{ncnln}}=0$) then this entry contains Objective, the value of the objective function $F\left(x\right)$. The objective function will decrease monotonically to its optimal value when there are no nonlinear constraints.
Norm Gz is $‖{Z}^{\mathrm{T}}{g}_{\mathrm{FR}}‖$, the Euclidean norm of the projected gradient (see e04uf). Norm Gz will be approximately zero in the neighbourhood of a solution.
Violtn is the Euclidean norm of the residuals of constraints that are violated or in the predicted active set (not printed if ncnln is zero). Violtn will be approximately zero in the neighbourhood of a solution.
Cond Hz is a lower bound on the condition number of the projected Hessian approximation ${H}_{Z}$ (${H}_{Z}={Z}^{\mathrm{T}}{H}_{\mathrm{FR}}Z={R}_{Z}^{\mathrm{T}}{R}_{Z}$; see (6) and (11) in e04uf). The larger this number, the more difficult the problem.
M is printed if the quasi-Newton update has been modified to ensure that the Hessian approximation is positive definite (see e04uf).
I is printed if the QP subproblem has no feasible point.
C is printed if central differences have been used to compute the unspecified objective and constraint gradients. If the value of Step is zero then the switch to central differences was made because no lower point could be found in the linesearch. (In this case, the QP subproblem is resolved with the central difference gradient and Jacobian.) If the value of Step is nonzero then central differences were computed because Norm Gz and Violtn imply that $x$ is close to a Kuhn–Tucker point (see e04uf).
L is printed if the linesearch has produced a relative change in $x$ greater than the value defined by the optional parameter Step Limit. If this output occurs frequently during later iterations of the run, optional parameter Step Limit should be set to a larger value.
R is printed if the approximate Hessian has been refactorized. If the diagonal condition estimator of $R$ indicates that the approximate Hessian is badly conditioned then the approximate Hessian is refactorized using column interchanges. If necessary, $R$ is modified so that its diagonal condition estimator is bounded.
The final printout includes a listing of the status of every variable and constraint.
The following describes the printout for each variable. A full stop (.) is printed for any numerical value that is zero.
Varbl gives the name (V) and index $\mathit{j}$, for $\mathit{j}=1,2,\dots ,n$, of the variable.
State gives the state of the variable (FR if neither bound is in the working set, EQ if a fixed variable, LL if on its lower bound, UL if on its upper bound, TF if temporarily fixed at its current value). If Value lies outside the upper or lower bounds by more than the Feasibility Tolerance, State will be ++ or -- respectively.
A key is sometimes printed before State.
A Alternative optimum possible. The variable is active at one of its bounds, but its Lagrange multiplier is essentially zero. This means that if the variable were allowed to start moving away from its bound then there would be no change to the objective function. The values of the other free variables might change, giving a genuine alternative solution. However, if there are any degenerate variables (labelled D), the actual change might prove to be zero, since one of them could encounter a bound immediately. In either case the values of the Lagrange multipliers might also change.
D Degenerate. The variable is free, but it is equal to (or very close to) one of its bounds.
I Infeasible. The variable is currently violating one of its bounds by more than the Feasibility Tolerance.
Value is the value of the variable at the final iteration.
Lower Bound is the lower bound specified for the variable. None indicates that ${\mathbf{bl}}\left[j-1\right]\le -\mathit{bigbnd}$.
Upper Bound is the upper bound specified for the variable. None indicates that ${\mathbf{bu}}\left[j-1\right]\ge \mathit{bigbnd}$.
Lagr Mult is the Lagrange multiplier for the associated bound. This will be zero if State is FR unless ${\mathbf{bl}}\left[j-1\right]\le -\mathit{bigbnd}$ and ${\mathbf{bu}}\left[j-1\right]\ge \mathit{bigbnd}$, in which case the entry will be blank. If $x$ is optimal, the multiplier should be non-negative if State is LL and non-positive if State is UL.
Slack is the difference between the variable Value and the nearer of its (finite) bounds ${\mathbf{bl}}\left[j-1\right]$ and ${\mathbf{bu}}\left[j-1\right]$. A blank entry indicates that the associated variable is not bounded (i.e., ${\mathbf{bl}}\left[j-1\right]\le -\mathit{bigbnd}$ and ${\mathbf{bu}}\left[j-1\right]\ge \mathit{bigbnd}$).
The meaning of the printout for linear and nonlinear constraints is the same as that given above for variables, with ‘variable’ replaced by ‘constraint’, ${\mathbf{bl}}\left[j-1\right]$ and ${\mathbf{bu}}\left[j-1\right]$ are replaced by ${\mathbf{bl}}\left[n+j-1\right]$ and ${\mathbf{bu}}\left[n+j-1\right]$ respectively, and with the following changes in the heading:
L Con gives the name (L) and index $\mathit{j}$, for $\mathit{j}=1,2,\dots ,{n}_{L}$, of the linear constraint.
N Con gives the name (N) and index ($\mathit{j}-{n}_{L}$), for $\mathit{j}={n}_{L}+1,\dots ,{n}_{L}+{n}_{N}$, of the nonlinear constraint.
Note that movement off a constraint (as opposed to a variable moving away from its bound) can be interpreted as allowing the entry in the Slack column to become positive.
Numerical values are output with a fixed number of digits; they are not guaranteed to be accurate to this precision.
# Example
This example is based on Problem 57 in Hock and Schittkowski (1981) and involves the minimization of the sum of squares function
$F\left(x\right)=\frac{1}{2}\sum _{i=1}^{44}{\left({y}_{i}-{f}_{i}\left(x\right)\right)}^{2},$
where
${f}_{i}\left(x\right)={x}_{1}+\left(0.49-{x}_{1}\right){e}^{-{x}_{2}\left({a}_{i}-8\right)}$
and
| $i$ | ${y}_{i}$ | ${a}_{i}$ | $i$ | ${y}_{i}$ | ${a}_{i}$ |
|----|------|------|----|------|------|
| 1 | 0.49 | 8 | 23 | 0.41 | 22 |
| 2 | 0.49 | 8 | 24 | 0.40 | 22 |
| 3 | 0.48 | 10 | 25 | 0.42 | 24 |
| 4 | 0.47 | 10 | 26 | 0.40 | 24 |
| 5 | 0.48 | 10 | 27 | 0.40 | 24 |
| 6 | 0.47 | 10 | 28 | 0.41 | 26 |
| 7 | 0.46 | 12 | 29 | 0.40 | 26 |
| 8 | 0.46 | 12 | 30 | 0.41 | 26 |
| 9 | 0.45 | 12 | 31 | 0.41 | 28 |
| 10 | 0.43 | 12 | 32 | 0.40 | 28 |
| 11 | 0.45 | 14 | 33 | 0.40 | 30 |
| 12 | 0.43 | 14 | 34 | 0.40 | 30 |
| 13 | 0.43 | 14 | 35 | 0.38 | 30 |
| 14 | 0.44 | 16 | 36 | 0.41 | 32 |
| 15 | 0.43 | 16 | 37 | 0.40 | 32 |
| 16 | 0.43 | 16 | 38 | 0.40 | 34 |
| 17 | 0.46 | 18 | 39 | 0.41 | 36 |
| 18 | 0.45 | 18 | 40 | 0.38 | 36 |
| 19 | 0.42 | 20 | 41 | 0.40 | 38 |
| 20 | 0.42 | 20 | 42 | 0.40 | 38 |
| 21 | 0.43 | 20 | 43 | 0.39 | 40 |
| 22 | 0.41 | 22 | 44 | 0.39 | 42 |
subject to the bounds
${x}_{1}\ge -0.4,\phantom{\rule{1em}{0ex}}{x}_{2}\ge -4.0,$
to the general linear constraint
${x}_{1}+{x}_{2}\ge 1.0$
and to the nonlinear constraint
$0.49{x}_{2}-{x}_{1}{x}_{2}\ge 0.09.$
The initial point, which is infeasible, is
${x}_{0}={\left(0.4,0.0\right)}^{\mathrm{T}}$
and $F\left({x}_{0}\right)=0.002241$.
The optimal solution (to five figures) is
${x}^{*}={\left(0.41995,1.28484\right)}^{\mathrm{T}},$
and $F\left({x}^{*}\right)=0.01423$. The nonlinear constraint is active at the solution.
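The model and constraints of this example can be checked independently of the library; a plain-Python sketch (not the NAG call — see e04use.cs for that) evaluating the subfunctions and the nonlinear constraint at the quoted points:

```python
import math

# Plain-Python check of the example problem:
# model f_i(x) = x1 + (0.49 - x1) * exp(-x2 * (a_i - 8)) and the
# nonlinear constraint 0.49*x2 - x1*x2 >= 0.09.
def f_i(x, a):
    return x[0] + (0.49 - x[0]) * math.exp(-x[1] * (a - 8.0))

def nonlinear_con(x):
    return 0.49 * x[1] - x[0] * x[1]

x0 = [0.4, 0.0]
xstar = [0.41995, 1.28484]

# At the infeasible starting point the constraint is violated ...
print(nonlinear_con(x0) >= 0.09)     # False
# ... while at the quoted optimum it holds with near-equality (active).
print(nonlinear_con(xstar) >= 0.09)  # True

# At x0 the exponent vanishes, so every subfunction equals 0.49.
print(f_i(x0, 8.0), f_i(x0, 42.0))
```
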
Example program (C#): e04use.cs
Example program data: e04use.d
Example program results: e04use.r
# Algorithmic Details
e04us implements a sequential quadratic programming (SQP) method incorporating an augmented Lagrangian merit function and a BFGS (Broyden–Fletcher–Goldfarb–Shanno) quasi-Newton approximation to the Hessian of the Lagrangian, and is based on e04wd. The documents for e04nc, e04uf and e04wd should be consulted for details of the method.
# Description of Monitoring Information
This section describes the long line of output ($\text{}>80$ characters) which forms part of the monitoring information produced by e04us. (See also the description of the optional parameters Major Print Level, Minor Print Level and Monitoring File.) You can control the level of printed output.
When ${\mathbf{Major Print Level}}\ge 5$ and ${\mathbf{Monitoring File}}\ge 0$, the following line of output is produced at every major iteration of e04us on the unit number specified by optional parameter Monitoring File. In all cases, the values of the quantities printed are those in effect on completion of the given iteration.
Maj is the major iteration count.
Mnr is the number of minor iterations required by the feasibility and optimality phases of the QP subproblem. Generally, Mnr will be $1$ in the later iterations, since theoretical analysis predicts that the correct active set will be identified near the solution (see [Algorithmic Details] in e04uf).
Note that Mnr may be greater than the optional parameter Minor Iteration Limit if some iterations are required for the feasibility phase.
Step is the step ${\alpha }_{k}$ taken along the computed search direction. On reasonably well-behaved problems, the unit step (i.e., ${\alpha }_{k}=1$) will be taken as the solution is approached.
Nfun is the cumulative number of evaluations of the objective function needed for the linesearch. Evaluations needed for the estimation of the gradients by finite differences are not included. Nfun is printed as a guide to the amount of work required for the linesearch.
Merit Function is the value of the augmented Lagrangian merit function (see (12) in e04uf) at the current iterate. This function will decrease at each iteration unless it was necessary to increase the penalty parameters (see e04uf). As the solution is approached, Merit Function will converge to the value of the objective function at the solution.
If the QP subproblem does not have a feasible point (signified by I at the end of the current output line) then the merit function is a large multiple of the constraint violations, weighted by the penalty parameters. During a sequence of major iterations with infeasible subproblems, the sequence of Merit Function values will decrease monotonically until either a feasible subproblem is obtained or e04us terminates with ${\mathbf{ifail}}={3}$ (no feasible point could be found for the nonlinear constraints).
If there are no nonlinear constraints present (i.e., ${\mathbf{ncnln}}=0$) then this entry contains Objective, the value of the objective function $F\left(x\right)$. The objective function will decrease monotonically to its optimal value when there are no nonlinear constraints.
Norm Gz is $‖{Z}^{\mathrm{T}}{g}_{\mathrm{FR}}‖$, the Euclidean norm of the projected gradient (see e04uf). Norm Gz will be approximately zero in the neighbourhood of a solution.
Violtn is the Euclidean norm of the residuals of constraints that are violated or in the predicted active set (not printed if ncnln is zero). Violtn will be approximately zero in the neighbourhood of a solution.
Nz is the number of columns of $Z$ (see e04uf). The value of Nz is the number of variables minus the number of constraints in the predicted active set; i.e., $\mathtt{Nz}=n-\left(\mathtt{Bnd}+\mathtt{Lin}+\mathtt{Nln}\right)$.
Bnd is the number of simple bound constraints in the current working set.
Lin is the number of general linear constraints in the current working set.
Nln is the number of nonlinear constraints in the predicted active set (not printed if ncnln is zero).
Penalty is the Euclidean norm of the vector of penalty parameters used in the augmented Lagrangian merit function (not printed if ncnln is zero).
Cond H is a lower bound on the condition number of the Hessian approximation $H$.
Cond Hz is a lower bound on the condition number of the projected Hessian approximation ${H}_{Z}$ (${H}_{Z}={Z}^{\mathrm{T}}{H}_{\mathrm{FR}}Z={R}_{Z}^{\mathrm{T}}{R}_{Z}$; see (6) and (11) in e04uf). The larger this number, the more difficult the problem.
Cond T is a lower bound on the condition number of the matrix of predicted active constraints.
Conv is a three-letter indication of the status of the three convergence tests (2)–(4) defined in the description of the optional parameter Optimality Tolerance. Each letter is T if the test is satisfied and F otherwise. The three tests indicate whether:
(i) the sequence of iterates has converged;
(ii) the projected gradient (Norm Gz) is sufficiently small; and
(iii) the norm of the residuals of constraints in the predicted active set (Violtn) is small enough.
If any of these indicators is F when e04us terminates with ${\mathbf{ifail}}={0}$, you should check the solution carefully.
M is printed if the quasi-Newton update has been modified to ensure that the Hessian approximation is positive definite (see e04uf).
I is printed if the QP subproblem has no feasible point.
C is printed if central differences have been used to compute the unspecified objective and constraint gradients. If the value of Step is zero then the switch to central differences was made because no lower point could be found in the linesearch. (In this case, the QP subproblem is resolved with the central difference gradient and Jacobian.) If the value of Step is nonzero then central differences were computed because Norm Gz and Violtn imply that $x$ is close to a Kuhn–Tucker point (see e04uf).
L is printed if the linesearch has produced a relative change in $x$ greater than the value defined by the optional parameter Step Limit. If this output occurs frequently during later iterations of the run, optional parameter Step Limit should be set to a larger value.
R is printed if the approximate Hessian has been refactorized. If the diagonal condition estimator of $R$ indicates that the approximate Hessian is badly conditioned then the approximate Hessian is refactorized using column interchanges. If necessary, $R$ is modified so that its diagonal condition estimator is bounded.
# cyanoFilter
## Introduction
Flow cytometry is a well-known technique for identifying cell populations contained in a biological sample. It is widely applied in biomedical and medical sciences for cell sorting, counting, biomarker detection and protein engineering. The technique also provides an energy-efficient alternative to microscopy, which has long been the standard technique for cell population identification. Cyanobacteria are a bacterial phylum believed to contribute more than 50% of atmospheric oxygen via oxygenic photosynthesis, and they are found almost everywhere. These bacteria are also among the oldest known life forms to obtain their energy via photosynthesis.
## Illustrations
We load the package and necessary dependencies below. We also load tidyverse for some data cleaning steps that we need to carry out.
```r
library(dplyr)
library(magrittr)
library(tidyr)
library(purrr)
library(flowCore)
library(flowDensity)
library(cyanoFilter)
```
To illustrate the functions contained in this package, we use two data files included by default in the package. These are just demonstration datasets, hence they are not documented in the help files.
```r
metadata <- system.file("extdata", "2019-03-25_Rstarted.csv",
                        package = "cyanoFilter",
                        mustWork = TRUE)
metafile <- read.csv(metadata,
                     check.names = TRUE)
```
```r
#columns containing dilution, $\mu l$ and id information
metafile <- metafile %>%
  dplyr::select(Sample.Number,
                Sample.ID,
                Number.of.Events,
                Dilution.Factor,
                Original.Volume,
                Cells.L)
```
Each row in the csv file corresponds to a measurement from two types of cyanobacteria cells carried out at one of three dilution levels. The columns contain information about the dilution level, the number of cells per micro-litre ($$cell/\mu l$$), number of particles measured and a unique identification code for each measurement. The Sample.ID column is structured in the format cyanobacteria_dilution. We extract the cyanobacteria part of this column into a new column and also rename the $$cell/\mu l$$ column with the following code:
```r
#extract the part of the Sample.ID that corresponds to BS4 or BS5
metafile <- metafile %>% dplyr::mutate(Sample.ID2 =
    stringr::str_extract(metafile$Sample.ID, "BS*[4-5]")
)

#clean up the Cells.muL column
names(metafile)[which(stringr::str_detect(names(metafile), "Cells."))] <- "CellspML"
```

### Good Measurements

To determine the appropriate data file to read from a FCM datafile, the desired minimum, maximum and column containing the $$cell/\mu l$$ values are supplied to the goodFcs() function. The code below demonstrates the use of this function for a situation where the desired minimum and maximum for $$cell/\mu l$$ are 50 and 1000 respectively.

```r
metafile <- metafile %>%
  mutate(Status = cyanoFilter::goodFcs(metafile = metafile, col_cpml = "CellspML",
                                       mxd_cellpML = 1000, mnd_cellpML = 50)
  )
knitr::kable(metafile)
```

| Sample.Number | Sample.ID | Number.of.Events | Dilution.Factor | Original.Volume | CellspML | Sample.ID2 | Status |
|---|---|---|---|---|---|---|---|
| 1 | BS4_20000 | 6918 | 20000 | 10 | 62.02270 | BS4 | good |
| 2 | BS4_10000 | 6591 | 10000 | 10 | 116.76311 | BS4 | good |
| 3 | BS4_2000 | 6508 | 2000 | 10 | 517.90008 | BS4 | good |
| 4 | BS5_20000 | 5976 | 20000 | 10 | 48.31036 | BS5 | bad |
| 5 | BS5_10000 | 5844 | 10000 | 10 | 90.51666 | BS5 | good |
| 6 | BS5_2000 | 5829 | 2000 | 10 | 400.72498 | BS5 | good |

The function adds an extra column, Status, with entries good or bad to the metafile. Rows containing $$cell/\mu l$$ values outside the desired minimum and maximum are labelled bad. Note that the Status column for the fourth row is labelled bad, because it has a $$cell/\mu l$$ value outside the desired range.

### Files to Retain

Although any of the files labelled good can be read from the FCM file, the retain() function can help select either the file with the highest $$cell/\mu l$$ or that with the smallest $$cell/\mu l$$ value. To do this, one supplies the function with the status column, the $$cell/\mu l$$ column and the desired decision. The code below demonstrates this action for a case where we want to select the file with the maximum $$cell/\mu l$$ from the good measurements for each unique sample ID.
broken <- metafile %>% group_by(Sample.ID2) %>% nest()
metafile$Retained <- unlist(map(broken$data, function(.x) {
    retain(meta_files = .x, make_decision = "maxi",
           Status = "Status", CellspML = "CellspML")
}))
knitr::kable(metafile)

| Sample.Number | Sample.ID | Number.of.Events | Dilution.Factor | Original.Volume | CellspML | Sample.ID2 | Status | Retained |
|---|---|---|---|---|---|---|---|---|
| 1 | BS4_20000 | 6918 | 20000 | 10 | 62.02270 | BS4 | good | No! |
| 2 | BS4_10000 | 6591 | 10000 | 10 | 116.76311 | BS4 | good | No! |
| 3 | BS4_2000 | 6508 | 2000 | 10 | 517.90008 | BS4 | good | Retain |
| 4 | BS5_20000 | 5976 | 20000 | 10 | 48.31036 | BS5 | bad | No! |
| 5 | BS5_10000 | 5844 | 10000 | 10 | 90.51666 | BS5 | good | No! |
| 6 | BS5_2000 | 5829 | 2000 | 10 | 400.72498 | BS5 | good | Retain |

This function adds another column, Retained, to the metafile. The third and sixth rows in the metadata have the highest $$cell/\mu l$$ values, so one can proceed to read the third and sixth file from the corresponding FCS files for BS4 and BS5 respectively. This implies that we are reading in only two FCS files rather than the six measured files.

### Flow Cytometer File Processing

To read the B4_18_1.fcs file into R, we use the read.FCS() function from the flowCore package. The dataset option enables the specification of the precise file to be read. Since this datafile contains only one file, we set this option to 1. If this option is set to 2, it gives an error, since B4_18_1.fcs contains only one datafile.

flowfile_path <- system.file("extdata", "B4_18_1.fcs", package = "cyanoFilter",
                             mustWork = TRUE)
flowfile <- read.FCS(flowfile_path, alter.names = TRUE, transformation = FALSE,
                     emptyValue = FALSE, dataset = 1)
flowfile
#> flowFrame object ' B4_18_1'
#> with 8729 cells and 11 observables:
#>         name                   desc  range   minRange maxRange
#> $P1 FSC.HLin Forward Scatter (FSC..  1e+05    0.00000    99999
#> $P2    SSC.HLin Side Scatter (SSC-HL..  1e+05  -34.47928    99999
#> $P3  GRN.B.HLin Green-B Fluorescence..  1e+05  -21.19454    99999
#> $P4  YEL.B.HLin Yellow-B Fluorescenc..  1e+05  -10.32744    99999
#> $P5  RED.B.HLin Red-B Fluorescence (..  1e+05   -5.34720    99999
#> $P6  NIR.B.HLin Near IR-B Fluorescen..  1e+05   -4.30798    99999
#> $P7  RED.R.HLin Red-R Fluorescence (..  1e+05  -25.49018    99999
#> $P8  NIR.R.HLin Near IR-R Fluorescen..  1e+05  -16.02002    99999
#> $P9    SSC.ALin Side Scatter Area (S..  1e+05    0.00000    99999
#> $P10      SSC.W Side Scatter Width (..  1e+05 -111.00000    99999
#> $P11       TIME                   Time  1e+05    0.00000    99999
#> 368 keywords are stored in the 'description' slot
The R object flowfile contains measurements on 8729 cells across 10 channels, since the TIME channel does not contain any information about the properties of the measured cells.
### Transformation and visualisation
To examine the need for transformation, a visual representation of the information in the expression matrix is of great use. The ggpairsDens() function produces a panel plot of all measured channels. Each plot is also smoothed to show the cell density at every part of the plot.
flowfile_nona <- noNA(x = flowfile)
ggpairsDens(flowfile_nona, notToPlot = "TIME")
We obtain the figure above by using the ggpairsDens() function after removing all NA values from the expression matrix with the noNA() function. There is another version of the function, pairs_plot(), that produces standard base scatter plots, also smoothed to indicate cell density.
flowfile_noneg <- noNeg(x = flowfile_nona)
flowfile_logtrans <- lnTrans(x = flowfile_noneg,
notToTransform = c("SSC.W", "TIME"))
ggpairsDens(flowfile_logtrans, notToPlot = "TIME") |
# Crossedwords and ciphers and riddles oh joy!
Nothing too special, a crossword to complete, with some layers beneath.
All you'll need lies within the puzzle. Cross-words are cool aren't they?
Click on it to enlarge image.
Normally you'd use 4 before 7, but this time I'd suggest doing the opposite.
Optpr asmz nsqg lsa siyriv gzy skkmzp xukw emdwnii gce, xugvw qd sag javlp ggwl nzv lqy lw cifqpnm. Nlreo lpp wresfl xifueym hmgj cgcc wresfl nsqg.
So...
R ub bai kqcco rsl byts, mlv cltilyin sdc wijrlt, bai vwx dn mlzb jjhspv. Fbpb tq Z?
The grid:
To solve the ciphers,
notice that there is an unusual hyphen in the comment from the poser: "All you'll need lies within the puzzle. Cross-words are cool aren't they?". In conjunction with the anagram tag, this hyphen clues us to the fact that we should look at anagrams of the cross letters. This yields FIRST LETTERS.
Next,
as Deusovi notes, taking first letters from the down clues we get CLSENIE, which anagrams to SILENCE; and from the across clues we get TJRPUIE, which anagrams to JUPITER.
Applying these to the first and second ciphers respectively yields:
While your code has helped you figure this message out, there is one final test for you to resolve. Check the second message with your second code.
and
I am the thing you seek, the treasure you desire, the end of this puzzle. What am I?
(M Oehm decrypted these using analysis.)
The solution to this riddle is:
Constructive criticism:
Some of the cluing was a bit unfair, in my opinion. Answers like "dietary", "expenditure" and "lavish" violate the constraint that definitions agree grammatically with the words they are cluing; "Cool numbers that like the number 1" doesn't make much sense as a clue for "primary"; assuming 11 is correct, "in a raging frenzy" may not be an established enough construction to warrant a missing-word clue (although it does have some currency); and, assuming "serendipitous" is correct, the definition is a bit loose. I would recommend using more checked letters in future crosswords, as well as adhering to the grammatical agreement rule in clues.
The first cipher is a bit superfluous: It tells you to use a keyword that you already have on ciphertext that is already available to you.
• If you assume that the riddle in the last blockquote has the form "I am the ...; What am I?", you can determine the Vigenère key. It's rot13(WHCVGRE). – M Oehm Mar 5 '17 at 10:36
• @n_palum, No problem. I look forward to your next creation! – GrimGrom Mar 5 '17 at 16:38
• @Silenus Ha thanks, although Moehm didn't technically answer the riddle, so I don't know who to give answer to yet – n_plum Mar 5 '17 at 16:41
• (If I may chime in on the criticism, I've got a mild complaint about the presentation: It would be nice if you could present the clues as text and have the image contain only the grid in future puzzles. I think that would be more readable. It also allows for easy copying and pasting for the answer. Apart from that, it's a nice multi-layer puzzle.) – M Oehm Mar 5 '17 at 18:03
• I'll second @MOehm's last comment. I actually used the same site you did to build the grid for the PSE Assessment Exam puzzle, but cleaned up the grid and added the clues as text below the grid image instead of using the clue layout provided by that site (which looks clunky and cuts clues short anyway). Either paying for the site to get nicer output, or doing something more than just screen-shotting it, would give much improved presentation. – Rubio Mar 5 '17 at 18:28
This answer decodes the two messages at the bottom of the puzzle by analysing the messages. That's clearly not the intended way, so it could be called cheating. It might still help other solvers to find the correct way to obtain the cipher keys. (I haven't got a clue at the moment, despite Silenus's good work on the crossword.)
The second message
This message ends in a question and the puzzle has the riddle tag. Many riddles have the form "I am (the) so and so. What am I?". The letter counts fit and the first shot is a hit. The riddle reads:
I am the thing you seek, the treasure you desire, the end of this puzzle. What am I?
The key is JUPITER.
The first message
This message doesn't have an obvious structure. There are many three-letter words, all different. But there are two occurrences of the word NSQG, 105 letters apart, and two occurrences of WRESFL, 21 letters apart. If these occurrences are really the same words, a likely key length is 7 letters. (21 letters would also fit, but let's go with the shorter key first.)
The word NSQG is at an offset of 2, covering • • NSQG •, and the word WRESFL is at an offset of 3, covering FL • WRES. Together, the two words cover the whole key.
I've written a small script to attack this problem and it turns out that the words are "code" and "second". The decoded message reads:
While your code has helped you figure this message out, there is one final test
for you to resolve. Check the second message with your second code.
The key is SILENCE.
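Both decryptions can be reproduced with a minimal Vigenère decoder (my own sketch, not part of the original analysis; it shifts letters only, and only letters advance the key):

```python
def vigenere_decrypt(ciphertext, key):
    """Vigenère decryption: shift each letter back by the matching key letter.
    Non-letters are kept as-is and do not advance the key position."""
    out, k = [], 0
    for ch in ciphertext:
        if ch.isalpha():
            shift = ord(key[k % len(key)].upper()) - ord('A')
            base = ord('A') if ch.isupper() else ord('a')
            out.append(chr((ord(ch) - base - shift) % 26 + base))
            k += 1
        else:
            out.append(ch)
    return ''.join(out)

# The second cipher with the key JUPITER:
msg = vigenere_decrypt("R ub bai kqcco rsl byts, mlv cltilyin sdc wijrlt, "
                       "bai vwx dn mlzb jjhspv. Fbpb tq Z?", "JUPITER")
# -> "I am the thing you seek, the treasure you desire,
#     the end of this puzzle. What am I?"
```

The first cipher decrypts the same way with the key SILENCE.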
• They're anagrams of the clues' starting letters. – Deusovi Mar 5 '17 at 15:40
• @Deusovi You would be correct, and if people appropriately got my hints at "Crossed-words" you'd check the intersecting words, and get an anagram for First Letters, then use the hint for Down then Across and you'd get SIlence and Jupiter – n_plum Mar 5 '17 at 16:37
• Oh, I see. I had seen the anagram tag, but wondered where the J should come from. – M Oehm Mar 5 '17 at 16:37
• @MOehm Yeah, and that's why Sil had some trouble, I stretched the clues a bit to fit the letters I needed. – n_plum Mar 5 '17 at 16:41
• Aw man, I picked up on the "cross-word" hint yesterday and even publicly conjectured that we needed to look at anagrams of the intersecting letters. Unfortunately, I didn't notice the anagram. – GrimGrom Mar 5 '17 at 17:14 |
# Collisions in Two Dimensions | Definition, Formulas – Work, Energy and Power
Two Dimensional Collision Physics or Oblique Collision Definition:
If the initial and final velocities of colliding bodies do not lie along the same line, then the collision is called two dimensional or oblique collision.
In the horizontal direction,

m₁u₁ cos α₁ + m₂u₂ cos α₂ = m₁v₁ cos β₁ + m₂v₂ cos β₂

In the vertical direction,

m₁u₁ sin α₁ – m₂u₂ sin α₂ = m₁v₁ sin β₁ – m₂v₂ sin β₂

If m₁ = m₂ and α₁ + α₂ = 90°, then β₁ + β₂ = 90°.

If a particle A of mass m₁ moving along the X-axis with speed u makes an elastic collision with a stationary body B of mass m₂, then, from the law of conservation of momentum,

m₁u = m₁v₁ cos α + m₂v₂ cos β

0 = m₁v₁ sin α – m₂v₂ sin β
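For the equal-mass case above (elastic collision with B initially at rest), the condition α + β = 90° together with the standard results v₁ = u cos α and v₂ = u sin α satisfies all the conservation equations. A quick numerical check (an illustrative sketch, not from the original notes):

```python
import math

# Equal-mass elastic collision: A moves along X with speed u, B is at rest.
# Standard result: v1 = u*cos(alpha), v2 = u*sin(alpha), beta = 90° - alpha.
u, alpha_deg = 10.0, 30.0
beta_deg = 90.0 - alpha_deg
a, b = math.radians(alpha_deg), math.radians(beta_deg)
v1, v2 = u * math.cos(a), u * math.sin(a)

px = v1 * math.cos(a) + v2 * math.cos(b)   # horizontal momentum per unit mass, should equal u
py = v1 * math.sin(a) - v2 * math.sin(b)   # vertical momentum per unit mass, should equal 0
ke = 0.5 * (v1 ** 2 + v2 ** 2)             # kinetic energy per unit mass, should equal 0.5*u**2
```

The masses cancel throughout, which is why they do not appear in the check.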
Work, Energy and Power:
Work, energy and power are three inter-related quantities. The rate of doing work is called power. An equal amount of energy is consumed to do work, so power is also the rate at which energy is consumed to complete a task.
## Square root of a squared block matrix
Hi everybody,
I’m trying to compute the square root of the following block matrix:
$$M=\begin{bmatrix} A &B\\ C &D\\ \end{bmatrix}$$
(that is, M^(1/2)) as a function of A, B, C, D, which are all square matrices.
Can you help me?
I sincerely thank you! :)
All the best
GoodSpirit
Hi GoodSpirit! Have you tried transforming it into the form $$M=\begin{bmatrix} P &0\\ 0 &Q\\ \end{bmatrix}$$
Hi tiny-tim, thank you for answering. That´s an interesting idea, but how do you do that? It is not easy...

I must say that there is more: M is a typical covariance matrix, so it is symmetric and positive semi-definite. A and D are symmetric and positive semi-definite (covariance matrices too), $$B=C^T$$, and B is the cross-covariance matrix of A and D.

My attempt is based on the eigendecomposition $$M=Q \Lambda Q^T$$ and $$M=\begin{bmatrix} a & b \\ c & d \\ \end{bmatrix} \begin{bmatrix} a & b \\ c & d \\ \end{bmatrix}$$ but it leads to something very complicated.

I really thank you for your answer! :)

All the best
GoodSpirit
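Since M is described as a symmetric positive semi-definite covariance matrix, its principal square root can be computed numerically from the eigendecomposition M = QΛQᵀ as M^(1/2) = QΛ^(1/2)Qᵀ. A small sketch (the block values below are made up purely for illustration):

```python
import numpy as np

def psd_sqrt(M):
    """Principal square root of a symmetric positive semi-definite matrix,
    via M = Q diag(w) Q^T  ->  sqrt(M) = Q diag(sqrt(w)) Q^T."""
    w, Q = np.linalg.eigh(M)
    w = np.clip(w, 0.0, None)   # guard against tiny negative eigenvalues from round-off
    return (Q * np.sqrt(w)) @ Q.T

# Hypothetical 2x2-block covariance matrix: A, D on the diagonal, B = C^T off-diagonal
A = np.array([[2.0, 0.5], [0.5, 1.0]])
D = np.array([[3.0, 0.2], [0.2, 2.0]])
B = np.array([[0.3, 0.1], [0.0, 0.2]])
M = np.block([[A, B], [B.T, D]])

S = psd_sqrt(M)   # S @ S reproduces M
```

For general (non-symmetric) matrices, scipy.linalg.sqrtm computes a matrix square root the same way one would use it here; the eigendecomposition route above exploits the symmetry of a covariance matrix.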
Posted on Categories: Graphical Models, Computer Science Assignment Help

# Computer science assignment help | Graphical Models exam help | CS228: A Brief History
## A Brief History
From an artificial intelligence perspective, we can consider the following stages in the development of uncertainty management techniques:
Beginnings (1950s and 60s)—artificial intelligence (AI) researchers focused on solving problems such as theorem proving, games like chess, and the “blocks world” planning domain, which do not involve uncertainty, making it unnecessary to develop techniques for managing uncertainty. The symbolic paradigm dominated AI in this early period.
Ad hoc techniques (1970s)—the development of expert systems for realistic applications such as medicine and mining required the development of uncertainty management approaches. Novel ad hoc techniques were developed for specific expert systems, such as MYCIN’s certainty factors [16] and Prospector’s pseudo-probabilities [4]. It was later shown that these techniques had a set of implicit assumptions which limited their applicability [6]. Also in this period, alternative theories were proposed to manage uncertainty in expert systems, including fuzzy logic [18] and the Dempster-Shafer theory [15].
Resurgence of probability (1980s)—probability theory was used in some initial expert systems, however it was later discarded because its application in naive ways implies a high computational complexity (see Sect. 1.3). New developments, in particular Bayesian networks [10], make it possible to build complex probabilistic systems in an efficient manner, starting a new era for uncertainty management in AI.
Diverse formalisms (1990s)—Bayesian networks continued and were consolidated with the development of efficient inference and learning algorithms. Meanwhile, other techniques such as fuzzy and non-monotonic logics were considered as alternatives for reasoning under uncertainty.
Probabilistic graphical models (2000s)—several techniques based on probability and graphical representations were consolidated as powerful methods for representing, reasoning and making decisions under uncertainty, including Bayesian networks, Markov networks, influence diagrams and Markov decision processes, among others.
## Basic Probabilistic Models
Probability theory provides a well-established foundation for managing uncertainty, so it is natural to use it for reasoning under uncertainty. However, if we apply probability in a naive way to complex problems, we are soon deterred by computational complexity.

In this section we will show how we can model a problem using a naive probabilistic approach based on a flat representation, and then how we can use this representation to answer some probabilistic queries. This will help to understand the limitations of the basic approach, motivating the development of probabilistic graphical models.
Many problems can be formulated as a set of variables, $X_{1}, X_{2}, \ldots X_{n}$ such that we know the values for some of these variables and the others are unknown. For instance, in medical diagnosis, the variables might represent certain symptoms and the associated diseases; usually we know the symptoms and we want to find the most probable disease(s). Another example could be a financial institution developing a system to help decide the amount of credit given to a certain customer. In this case the relevant variables are the attributes of the customer, i.e. age, income, previous credits, etc.; and a variable that represents the amount of credit to be given. Based on the customer attributes we want to determine, for instance, the maximum amount of credit that is safe to give to the customer. In general there are several types of problems that can be modeled in this way, such as diagnosis, classification, and perception problems; among others.
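The flat representation described above can be made concrete with a toy diagnosis example. The sketch below (with made-up numbers) stores one probability per joint assignment of n binary variables and answers a query by brute-force summation, which is exactly what becomes intractable as n grows:

```python
import itertools

# A "flat" probabilistic model over n binary variables stores one probability
# per joint assignment: 2**n numbers. Toy example: Disease, Symptom1, Symptom2.
n = 3
weights = [4, 2, 1, 1, 2, 1, 4, 9]   # unnormalized weights, illustrative only
total = sum(weights)
joint = {assignment: w / total
         for assignment, w in zip(itertools.product([0, 1], repeat=n), weights)}

# Answering P(Disease=1 | Symptom1=1) requires summing over all other variables:
num = sum(p for (d, s1, s2), p in joint.items() if d == 1 and s1 == 1)
den = sum(p for (d, s1, s2), p in joint.items() if s1 == 1)
posterior = num / den

# The table size doubles with every extra variable -- the complexity problem
# that motivates the structured representations of probabilistic graphical models.
table_size = 2 ** n
```

With 30 binary variables the flat table already needs over a billion entries, which is why exploiting independence structure, as graphical models do, is essential.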
# How do I find the surface area of the solid defined by revolving r = 3sin(theta) about the polar axis?
Feb 24, 2015
Write:
$r = 3 \sin \left(\theta\right)$
$r = 3 \frac{y}{r}$ because $y = r \sin \left(\theta\right)$
${r}^{2} = 3 y$
${x}^{2} + {y}^{2} = 3 y$ because $r = \sqrt{{x}^{2} + {y}^{2}}$.
${x}^{2} + {\left(y - \frac{3}{2}\right)}^{2} = \frac{9}{4}$
You recognize a circle of radius $\frac{3}{2}$. The area is $\pi {\left(\frac{3}{2}\right)}^{2} = 9 \frac{\pi}{4}$. |
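As a sanity check, the same area follows numerically from the polar area formula A = (1/2)∫r² dθ, integrating over 0 ≤ θ ≤ π (one full trace of the curve). A quick sketch:

```python
import math

# Numerical check of the area enclosed by r = 3 sin(theta), using the
# polar area formula A = (1/2) * integral of r^2 dtheta over [0, pi].
N = 100_000
dtheta = math.pi / N
area = 0.0
for i in range(N):
    theta = (i + 0.5) * dtheta       # midpoint rule
    r = 3 * math.sin(theta)
    area += 0.5 * r * r * dtheta
# area ≈ 9*pi/4 ≈ 7.0686
```

This agrees with the exact value π(3/2)² = 9π/4 obtained from recognizing the circle.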
# Alternative to Hamming distance for permutations
I have two strings, where one is a permutation of the other. I was wondering if there is an alternative to Hamming distance where instead of finding the minimum number of substitutions required, it would find the minimum number of translocations required to go from string a to string b.
My strings are always of the same size and I know there are no errors/substitutions.
Example:
1 2 3 4 5
3 2 5 4 1
This would give me two:
3 2 5 4 1 (start)
-> 3 2 1 4 5
-> -> 1 2 3 4 5
If this is already implemented in R that would be even better.
• Looks like you want the edit-distance ( aka Levenshtein distance) ? – Arnab Oct 12 '12 at 17:00
• – The Unfun Cat Oct 12 '12 at 19:13
• In your particular example where the characters of the string have an implied order, you might want to count inversions. en.wikipedia.org/wiki/Inversion_(discrete_mathematics) – Joe Oct 12 '12 at 19:26
• It might be disingenuous to call all of those distance functions metrics, as many may not obey the triangle inequality. – Nicholas Mancuso Oct 12 '12 at 19:41
• By translocation do you mean taking the mirror image of part of the sequence? – highBandWidth Oct 12 '12 at 21:03
"Given two signed multi-chromosomal genomes Pi and Gamma with the same gene set, the problem of sorting by translocations (SBT) is to find a shortest sequence of translocations transforming Pi to Gamma, where the length of the sequence is called the translocation distance between Pi and Gamma. In 1996, Hannenhalli gave the formula of the translocation distance for the first time, based on which an $O(n^3)$ algorithm for SBT was given. In 2005, Anne Bergeron et al. revisited this problem and gave an elementary proof for the formula of the translocation distance which leads to a new $O(n^3)$ algorithm for SBT."
We need to find the minimum number of transpositions that take one string $$a$$ to another string $$b$$, where $$a, b$$ are permutations. It looks like you are looking for the minimum distance between two given vertices $$a, b \in S_n$$ in the complete transposition graph, which is the Cayley graph of $$S_n$$ generated by the set of all transpositions. |
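When the two strings are permutations of each other, this minimum number of transpositions has a closed form: n minus the number of cycles of the permutation mapping one string to the other (the Cayley distance). A short sketch of that computation (my own illustration; the question asked for R, but the cycle-counting idea carries over directly):

```python
def min_transpositions(a, b):
    """Minimum number of transpositions (swaps of two positions) turning
    permutation a into permutation b: n minus the number of cycles of the
    mapping that sends each position in a to that element's position in b."""
    pos_in_b = {v: i for i, v in enumerate(b)}
    perm = [pos_in_b[v] for v in a]      # where each element of a sits in b
    seen = [False] * len(perm)
    cycles = 0
    for i in range(len(perm)):
        if not seen[i]:
            cycles += 1
            j = i
            while not seen[j]:
                seen[j] = True
                j = perm[j]
    return len(perm) - cycles

# The example from the question: two swaps suffice.
d = min_transpositions([3, 2, 5, 4, 1], [1, 2, 3, 4, 5])   # -> 2
```

Here the mapping has three cycles (one 3-cycle and two fixed points), so the distance is 5 − 3 = 2, matching the worked example.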
## 1.8 Linear regression
Remember the distinction between a mathematical world and a statistical world? In a mathematical world, we are interested in how the value of one variable is associated with another variable. In a statistical world, we are interested in how the distribution of one variable is associated with another variable.
### 1.8.1 Summarizing a linear association in a mathematical world
In a mathematical world, we can summarize a linear relationship between two variables with a linear equation. For example, the graph below shows the height of a pool over time as it is being filled. There is a linear relationship between the height of the water and time.
Looking at the graph, we can see a few things. First of all, at $$time=0$$, the height of the pool was 4ft. Then, for every hour that passes, the height of the water increases by 3ft. We can summarize this using an equation:
$height=4+3 \times hours$

In this equation:
• the $$4$$ is called the intercept. It represents the height of the pool when $$time=0$$. This is sometimes called the “starting value.”
• The $$3$$ is called the slope. It represents the rate at which the pool height is increasing.
### 1.8.2 Summarizing a linear association in a statistical world
In a statistical world, we are interested in how the distribution of one variable is associated with another. For example, how is the distribution of reaction times associated with alcohol consumption?
Even though we can’t connect the points with a straight line, the distributions seem to be following a linear pattern. It seems that as the number of drinks increases by 1, the reaction increases by about 10cm. This is even more apparent if we look at the typical value (the mean) of each distribution.
We can summarize this linear relationship using a linear regression. This finds the straight line that is the best summary of the statistical association. The figure below shows the regression line for the alcohol and reaction time data:
Notice that the line doesn’t fit the means of the distributions perfectly, but it’s pretty close. The deviations from the line to the means are marked in red and blue. The linear regression line balances these deviations (the red deviations add up to the same total as the blue deviations). This means that the line is not biased towards overestimating or underestimating the means. It is often called a “line of best fit.”
The equation for this line is:
$reaction = 11.3 + 9.7 \times drinks$

Just like in the mathematical world, there is an intercept and a slope. These have a similar interpretation in the statistical world. However, we have to remember that we are summarizing the behavior of distributions. We communicate that using words like “approximately,” “on average,” and “we expect.”
• The intercept is 11.3. This means that, on average, we expect the reaction to be approximately 11.3cm when the participant has had 0 drinks.
• The slope is 9.7. This means that, on average, when the number of drinks increases by 1, we expect the reaction to increase by approximately 9.7cm.
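The intercept and slope of a line of best fit are found by ordinary least squares. A quick sketch with a small made-up data set (the numbers below are illustrative; they are not the alcohol data from this section):

```python
import numpy as np

# Hypothetical (x, y) data scattered around the line y = 11 + 10x
x = np.array([0.0, 1.0, 2.0])
y = np.array([12.0, 19.0, 32.0])

# np.polyfit with deg=1 returns [slope, intercept] for the least-squares line
slope, intercept = np.polyfit(x, y, 1)

# The fitted line balances the deviations: the residuals sum to zero
residuals = y - (intercept + slope * x)
```

For this data the fit is y = 11 + 10x, and the residuals (+1, −2, +1) cancel out, which is exactly the balancing property of the line of best fit described above.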
##### Linear regression
• The linear regression is a mathematical model of a linear statistical association
• The line is called a “line of best fit”
• The intercept tells us the typical value of the y-variable, when the x-variable is 0
• The slope tells us how much we expect the y-variable to increase, on average, for every 1-unit increase in the x-variable. |
# Water profile of Oman
Source: FAO
## Geography and population
The Sultanate of Oman occupies the south-eastern corner of the Arabian Peninsula and has a total area of 312,500 km2. It is bordered in the north-west by the United Arab Emirates, in the west by Saudi Arabia and in the south-west by Yemen. A detached area of Oman, separated from the rest of the country by the United Arab Emirates, lies at the tip of the Musandam Peninsula, on the southern shore of the Strait of Hormuz. The country has a coastline of almost 1,700 km, from the Strait of Hormuz in the north to the borders of the Republic of Yemen in the south-west, overlooking three seas: the Persian Gulf, the Gulf of Oman and the Arabian Sea.
Oman can be divided into the following physiographic regions:
Map of Oman. (Source: FAO)
• The coastal plain. The most important parts are the Batinah Plain in the north, which is the principal agricultural area, and the Salalah Plain in the south. The elevation ranges from zero near the sea to 500 meters (m) further inland.
• The mountain ranges, which occupy 15% of the total area of the country. The mountain range that runs in the north close to the Batinah Plain is the Jebel Al Akhdar with a peak at 3,000 meters. Other mountains are located in the Dhofar province, in the extreme southern part of the country, with peaks from 1,000 to 2,000 meters.
• The internal regions. Between the coastal plain and the mountains in the north and south lie the internal regions, consisting of several plains with elevations not exceeding 500 meters.
The cultivable area has been estimated at 2.2 million hectares (ha), which is 7% of the total area of the country. The cultivated area was 61,550 ha in 1993, of which 18,550 ha consisted of annual crops and 43,000 ha consisted of permanent crops. Over half the agricultural area is located in the Batinah Plain in the north which has a total area representing about 3% of the area of the country.
The total population is about 2.16 million (1995), of which 87% is rural according to United Nations (UN) estimates.
According to the national population census of 1993, 28% of the total population was rural. The difference between the two figures is explained by the fact that the UN standards for Oman consider as rural all the inhabitants of the country, except those of the two cities: Muscat and Matrah. The annual demographic growth rate is estimated at 3.7%. While agriculture and fisheries employed about 37% of the total labor force in 1993, they accounted for only 3.3% of gross domestic product (GDP).
## Climate and water resources
### Climate
The climate differs from one region to another. It is hot and humid during summer in the coastal areas and hot and dry in the interior regions with the exception of some higher lands and the southern Dhofar region, where the climate remains moderate throughout the year. In the north and center of Oman, rainfall occurs during the winter (November-April), while in the south and some internal parts of the country it is a result of seasonal summer storms (June-September). Average annual rainfall has been estimated at 55 mm, varying from less than 20 mm in the internal desert regions to over 300 mm in the mountain areas.
### Water resources
A great deal of uncertainty lies in the assessment of Oman's water resources. Internal renewable water resources have been evaluated at 985 million m3/year. Surface water resources are scarce. In nearly all wadis, surface runoff occurs only for some hours or up to a few days after a storm, in the form of rapidly rising and falling flood flows. Since 1985, 15 major recharge dams have been constructed together with many smaller structures, in order to retain a portion of the peak flows, thus allowing more opportunity for groundwater recharge. In addition, several flood-control dams produce significant recharge benefits. In 1996, the total dam capacity is 58 million m3. Groundwater recharge is estimated at 955 million m3/year.
### Non-conventional water sources
In 1995, the total produced wastewater was estimated at 58 million m3. Only 28 million m3 was treated, of which 26 million m3 was reused. Also in 1995, the quantity of desalinated water was 34 million m3.
### Water withdrawal
In 1995, total water withdrawal was 1,223 million m3, of which 93.9% for agricultural purposes (4.6% is withdrawn for domestic use and 1.5% for industrial use). The treated wastewater was reused mainly for the irrigation of trees along the roads, while the desalinated water was used for domestic purposes. At present, groundwater depletion is thus estimated at around 240 million m3/year.
## Irrigation and drainage development
All agriculture in Oman is irrigated and since the 1970s the equipped area increased from about 28,000 ha to 61,550 ha in 1993, of which 34,930 ha, or almost 57%, is located in the Al Batinah province in the north. Although 2.2 million ha are considered to be suitable for agriculture, there are no figures on the irrigation potential, as no reliable data are available on groundwater availability in the deep aquifers. At present, groundwater depletion already takes place, especially in coastal areas, leading to seawater intrusion and a deterioration in water quality.
The falaj system ('aflaj' in the plural) is the traditional method developed centuries ago for supplying water for irrigation and domestic purposes. Many of the systems currently in use are estimated to be over a thousand years old. The falaj comprises the entire system: the source, which might be a qanat, a spring or the upper reaches of flowing wadis from which water is diverted; the conveyance system, which is usually an open-earth or cement-lined ditch; and the delivery system. The falaj has assumed social significance, and well established rules of usage, maintenance and administration have evolved.
Originally, the falaj developed where higher-elevation water sources such as springs, qanats or surface water could be intercepted by diversion or small catchment dams and then conveyed by gravity to the point of use. More recently, however, dug wells have been used to supplement the falaj water. This is especially the case in the coastal areas where many hand-dug wells and tubewells have been constructed. For 47% of the total number of 62,411 households involved in irrigation, wells are now the main source of water, 39% rely on falaj water, while the remaining 14% have access to both sources.
Of the total area of 61,550 ha equipped for irrigation, all of which is power-irrigated using groundwater (wells, falaj), only 1,640 ha, or 2.7%, benefit from sprinkler irrigation and 2,090 ha, or 3.4%, from micro-irrigation techniques. Although the Ministry of Agriculture and Fisheries (MAF) is making efforts to introduce modern irrigation techniques, the traditional flood system remains the most common irrigation technique. In order to encourage farmers to take up the new techniques, MAF has approved a financial subsidy varying between 75% for small-scale schemes (less than 10 feddans or 4.2 ha), 50% for medium-scale schemes and 25% for large-scale schemes (more than 50 feddans or 21 ha). Most of the area consists of small schemes.
In 1996, the cost of irrigation development was estimated at US$3,250/ha for medium and large schemes and US$4,415/ha for small schemes. These costs represent the average cost of installing sprinkler irrigation and micro-irrigation systems. The average annual operation and maintenance costs are US$845, US$1,170 and US$1,820/ha for large, medium and small schemes respectively.
Date palm is the main crop grown in Oman, occupying about half the total cropped area. Other crops are fodder crops (mainly alfalfa), other fruit trees (citrus, bananas, mangoes, coconuts) vegetables and cereals (mainly barley, wheat and sorghum).
No reliable information on the area salinized by irrigation is available. A study done in 1994 on the salinity of soils in general in Oman, states that an area of 11.7 million ha, which is 35% of the total area of Oman, is affected by salinity. No drainage is practiced.
## Institutional environment
Until May 2001, the Ministry of Water Resources (MOWR) was in charge of water resources assessment, whereas the Ministry of Agriculture and Fisheries (MAF) was in charge of irrigation. However, in May 2001, the Ministry of Water Resources was cancelled and its activities were transferred to the Ministry of Regional Municipality and Environment and Water Resource.
In 1988, Royal Decree No. 83/88 declared the water resources of Oman a national resource. This is the most far-reaching and important piece of legislation on water resources. Oman has several laws on water resources and the main measures taken for water management and conservation are:
• no wells may be constructed within 3.5 km of the mother well of the falaj;
• permits are required for the construction of new wells, for deepening existing wells, for changes in use and for installing a pump;
• all drilling and well digging contractors are required to register with MOWR on a yearly basis;
• MOWR has the cooperation of other government agencies, such as the Ministry of Interior and the Royal Oman Police, in dealing with offenders.
## Trends in water resources management
Three broadly-based programs have been set up by the government for:
• the improvement of data collection;
• a detailed assessment of the water resources;
• a study of water demand and its spatial distribution.
In addition to the above measures taken for water management and conservation, the government has recently initiated programs to relocate some of the large-scale farms in the Batinah and Salalah Plains, where the water resources are over-utilized, to areas with underutilized water resources. Several water conservation initiatives have been developed, like leakage control in municipal water supply schemes and the improvement of irrigation methods through subsidy programs. Public awareness of water resources issues has created a general and focused understanding of the overall situation and of the specific contribution each citizen can make.
The main issues and strategies that the government will address in the coming years are:
• creating and cultivating conservation awareness;
• matching water use to water availability;
• establishing an integrated program for the conservation and management of the resources at basin level;
• controlling saline intrusion by reducing abstraction below the long-term recharge;
• adopting improved irrigation techniques and selecting appropriate crops to reduce agricultural water use;
• controlling urban water losses;
• increasing the use of treated wastewater and desalinated water;
• protecting the groundwater resources in qualitative as well as quantitative terms;
• constructing new groundwater recharge dams.
## References

• Al-Mamari, Salim. 2002. Improve and Development of Irrigation Water Management in aflaj system.
• Department of Agricultural Statistics. 1995. Agricultural Census 1992-93. Ministry of Agriculture and Fisheries.
• Ministry of Agriculture and Fisheries. [?]. Development and optimization of the use of water resources in the Sultanate of Oman.
• Ministry of Water Resources. 1991. National Water resources Master Plan, Oman.
• World Bank. 1988. Sultanate of Oman: Recent economic developments and prospects. Report No 6899-OM. Washington DC.
Disclaimer: This article is taken wholly from, or contains information that was originally published by, the Food and Agriculture Organization. Topic editors and authors for the Encyclopedia of Earth may have edited its content or added new information. The use of information from the Food and Agriculture Organization should not be construed as support for or endorsement by that organization for any new information added by EoE personnel, or for any editing of the original content.
## What is $\sin(90^\circ-\theta)$ and $\cos(90^\circ-\theta)$?

Updated on 10-Oct-2022 09:36:22

By the co-function identities, $\sin(90^\circ-\theta)=\cos\theta$ and $\cos(90^\circ-\theta)=\sin\theta$: the sine of an angle equals the cosine of its complement, and vice versa. This follows from a right triangle, where the two acute angles sum to $90^\circ$, so the side opposite one angle is the side adjacent to the other.
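The co-function identities $\sin(90^\circ-\theta)=\cos\theta$ and $\cos(90^\circ-\theta)=\sin\theta$ can be checked numerically with the standard library; a quick Python sketch using an arbitrary test angle:

```python
import math

theta = math.radians(37.0)  # an arbitrary test angle

# Co-function identities: sin(90 deg - theta) = cos(theta)
#                         cos(90 deg - theta) = sin(theta)
assert math.isclose(math.sin(math.pi / 2 - theta), math.cos(theta))
assert math.isclose(math.cos(math.pi / 2 - theta), math.sin(theta))
print("co-function identities hold")
```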
## Why is acid harmful for us?
Updated on 10-Oct-2022 09:36:02
Strong acids are corrosive in nature and are strong dehydrating agents. When corrosive acids come into contact with the skin, they hydrolyze the fats there. Strong acids are harmful to the skin, eyes, and respiratory tract, and to the digestive system when swallowed. Acids react vigorously with water and are therefore harmful in the presence of moisture in the mouth or eyes. Vapours of some acids are also soluble in water and can damage the human eyes, nasal passages, throat, and lungs.
## Can a myopic eye see distant objects clearly by seeing a reflection in a plane mirror?
Updated on 10-Oct-2022 09:36:00
In a plane mirror, the object distance (u) equals the image distance (v): the image forms as far behind the mirror as the object is in front of it. Rays from a distant object, after reflection, appear to diverge from that image, so they reach the eye exactly as if they had travelled from the original distance. The mirror therefore only gives the illusion of bringing the object closer; the light still diverges from a far-away point, and a myopic eye cannot focus it any better than it could focus the object itself. So no, a myopic eye cannot see distant objects clearly by viewing their reflection in a plane mirror.
## Solve 53% of 53
Updated on 10-Oct-2022 09:36:00
To solve such problems, replace the % sign with 1/100 and the word "of" with a multiplication sign. So 53% of 53 is 53 × (1/100) × 53 = 28.09.
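The same substitution rule ("%" becomes 1/100, "of" becomes multiplication) translates directly into a one-line helper; a minimal Python sketch:

```python
def percent_of(p: float, x: float) -> float:
    # "p% of x" = p * (1/100) * x
    return p / 100 * x

# 53% of 53:
print(round(percent_of(53, 53), 2))  # 28.09
```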
Detection of a gamma-ray flare from the high-redshift blazar DA 193
Paliya, Vaidehi S.; Ajello, M.; Ojha, R.; Angioni, R.; Cheung, C. C.; Tanada, K.; Pursimo, T.; Galindo, P.; Losada, I. R.; Siltala, L.; Djupvik, A. A.; Marcotulli, L.; Hartmann, D.
eprint arXiv:1812.07350
12/2018
ABSTRACT
High-redshift ($z>2$) blazars are the most powerful members of the blazar family. Yet, only a handful of them have both X-ray and $\gamma$-ray detections, making it difficult to characterize the energetics of the most luminous jets. Here, we report, for the first time, the Fermi-Large Area Telescope detection of significant $\gamma$-ray emission from the high-redshift blazar DA 193 ($z=2.363$). Its time-averaged $\gamma$-ray spectrum is soft ($\gamma$-ray photon index = $2.9\pm0.1$), and together with a relatively flat hard X-ray spectrum (14$-$195 keV photon index = $1.5\pm0.4$), this makes DA 193 a case study for a typical high-redshift blazar with the inverse Compton peak located at MeV energies. An intense GeV flare was observed from this object in the first week of 2018 January, a phenomenon rarely observed from high-redshift sources. What makes this event a rare one is the extremely hard $\gamma$-ray spectrum (photon index = $1.7\pm0.2$), which is somewhat unexpected since high-redshift blazars typically exhibit a steeply falling spectrum at GeV energies. The results of our multi-frequency campaign, including both space-based (Fermi, NuSTAR, and Swift) and ground-based (Steward and Nordic Optical Telescope) observatories, are presented, and this peculiar $\gamma$-ray flare is studied within the framework of a single-zone leptonic emission scenario.
# Fixed points for Chatterjea contractions on a metric space with a graph
Document Type: Research Paper
Authors
1 Department of Mathematics, Payame Noor University, P.O. Box 19395-3697, Tehran, Iran
2 Department of Mathematics, Payame Noor University, P.O. Box 19395-3697, Tehran, Iran
Abstract
In this work, we formulate Chatterjea contractions using graphs in metric spaces endowed with a graph and investigate the existence of fixed points for such mappings under two different hypotheses. We also discuss the uniqueness of the fixed point. The given result is a generalization of Chatterjea's fixed point theorem from metric spaces to metric spaces endowed with a graph.
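Fixed points guaranteed by contraction-type theorems are typically computed by Picard iteration, $x_{n+1} = T(x_n)$. The sketch below is a generic illustration on the real line with an ordinary contractive map (not the paper's graph-endowed setting, and the map $T(x)=x/3+1$ is an invented example):

```python
def picard(T, x0, tol=1e-10, max_iter=1000):
    """Picard iteration x_{n+1} = T(x_n); converges when T is contractive."""
    x = x0
    for _ in range(max_iter):
        nxt = T(x)
        if abs(nxt - x) < tol:
            return nxt
        x = nxt
    raise RuntimeError("did not converge")

# T(x) = x/3 + 1 is contractive on R; its unique fixed point solves x = x/3 + 1,
# i.e. x = 3/2, and the iterates converge to it from any starting point.
fp = picard(lambda x: x / 3 + 1, x0=0.0)
print(round(fp, 6))  # 1.5
```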
# Random time ruin probability for the renewal risk model with heavy-tailed claims
• In this paper, we investigate the asymptotic behavior of the random time ruin probability for the renewal risk model with heavy-tailed claim sizes. Under the assumption that the claim sizes are independent and long-tailed, we give equivalent conditions for the asymptotic behavior of the random time ruin probability, where no independence or dependence structure among the inter-arrival times is needed. In contrast, under the assumption that the claim sizes have a certain negative dependence structure and consistently varying tails, we obtain a sufficient condition for the asymptotic behavior of the random time ruin probability, which does require some negative dependence structure among the inter-arrival times.
Mathematics Subject Classification: Primary: 62P05; Secondary: 62E20, 60F10.
# The following chemical reaction takes place in aqueous solution: ZnCl_2(aq) + Na_2S(aq) => ZnS(s)...
## Question:
The following chemical reaction takes place in aqueous solution:
$\rm ZnCl_2 (aq) + Na_2S (aq) \to ZnS (s) + 2NaCl (aq)$
write the net ionic equation for this reaction.
## Ionic Equations:
An aqueous solution is a homogeneous mixture in which liquid water acts as the solvent for one or more dissolved species (solutes). In many cases, these solutes carry a net positive or negative charge; examples include species involved in acid-base, redox or precipitation reactions. The ionic equation differs from the conventional molecular form in that highly dissociated species are written in terms of their ionic components, which may appear on both sides of the equation. Ionic species that do not take part in the reaction of interest are spectators and are omitted from the net form of the ionic equation.
The given molecular reaction equation is:
$\rm ZnCl_2 (aq) + Na_2S (aq) \rightarrow ZnS (s) + 2NaCl (aq)$
This is a precipitation reaction in which ZnS is the solid product, formed from the constituent ions supplied by the two dissolved salt reactants. The sodium cations and chloride anions remain in solution; as spectators in the precipitation process, they are omitted from the net ionic equation:
$\boxed{\rm Zn^{2+} (aq) + S^{2-} (aq) \rightarrow ZnS (s)}$
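The spectator-cancellation step can be sketched mechanically: write both sides in full ionic form and remove any ion that appears unchanged on both sides. A minimal Python illustration using multisets (the string labels for the ions are just ad-hoc notation):

```python
from collections import Counter

def net_ionic(reactant_ions, product_ions):
    """Cancel spectator ions that appear unchanged on both sides."""
    lhs, rhs = Counter(reactant_ions), Counter(product_ions)
    spectators = lhs & rhs  # multiset intersection: species common to both sides
    return list((lhs - spectators).elements()), list((rhs - spectators).elements())

# Full ionic form of ZnCl2(aq) + Na2S(aq) -> ZnS(s) + 2 NaCl(aq):
lhs = ["Zn2+", "Cl-", "Cl-", "Na+", "Na+", "S2-"]
rhs = ["ZnS(s)", "Na+", "Na+", "Cl-", "Cl-"]
print(net_ionic(lhs, rhs))  # (['Zn2+', 'S2-'], ['ZnS(s)'])
```

Na+ and Cl- cancel as spectators, leaving exactly the boxed net ionic equation.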